diff --git a/metadata/metadata b/metadata/metadata --- a/metadata/metadata +++ b/metadata/metadata @@ -1,10448 +1,10454 @@ [Arith_Prog_Rel_Primes] title = Arithmetic progressions and relative primes author = José Manuel Rodríguez Caballero topic = Mathematics/Number theory date = 2020-02-01 notify = jose.manuel.rodriguez.caballero@ut.ee abstract = This article provides a formalization of the author's solution to the problem “ARITHMETIC PROGRESSIONS” from the Putnam exam problems of 2002. The statement of the problem is as follows: For which integers n > 1 does the set of positive integers less than and relatively prime to n constitute an arithmetic progression? [Banach_Steinhaus] title = Banach-Steinhaus Theorem author = Dominique Unruh , Jose Manuel Rodriguez Caballero topic = Mathematics/Analysis date = 2020-05-02 notify = jose.manuel.rodriguez.caballero@ut.ee, unruh@ut.ee abstract = We formalize in Isabelle/HOL a result due to S. Banach and H. Steinhaus known as the Banach-Steinhaus theorem or uniform boundedness principle: a pointwise-bounded family of continuous linear operators from a Banach space to a normed space is uniformly bounded. Our approach is an adaptation to Isabelle/HOL of a proof due to A. Sokal. [Complex_Geometry] title = Complex Geometry author = Filip Marić , Danijela Simić topic = Mathematics/Geometry date = 2019-12-16 notify = danijela@matf.bg.ac.rs, filip@matf.bg.ac.rs, boutry@unistra.fr abstract = A formalization of the geometry of complex numbers is presented. Fundamental objects that are investigated are the complex plane extended by a single infinite point, its objects (points, lines and circles), and groups of transformations that act on them (e.g., inversions and Möbius transformations). Most objects are defined algebraically, but correspondence with classical geometric definitions is shown. [Poincare_Disc] title = Poincaré Disc Model author = Danijela Simić , Filip Marić , Pierre Boutry topic = Mathematics/Geometry date = 2019-12-16 notify = danijela@matf.bg.ac.rs, filip@matf.bg.ac.rs, boutry@unistra.fr abstract = We describe a formalization of the Poincaré disc model of hyperbolic geometry within the Isabelle/HOL proof assistant. The model is defined within the extended complex plane (the one-dimensional complex projective space ℂP¹), formalized in the AFP entry “Complex Geometry”. Points, lines, congruence of pairs of points, betweenness of triples of points, circles, and isometries are defined within the model. It is shown that the model satisfies all of Tarski's axioms except Euclid's axiom, and that it instead satisfies the negation of Euclid's axiom as well as the limiting parallels axiom, which proves it to be a model of hyperbolic geometry. [Fourier] title = Fourier Series author = Lawrence C Paulson topic = Mathematics/Analysis date = 2019-09-06 notify = lp15@cam.ac.uk abstract = This development formalises the square integrable functions over the reals and the basics of Fourier series. It culminates with a proof that every well-behaved periodic function can be approximated by a Fourier series. The material is ported from HOL Light: https://github.com/jrh13/hol-light/blob/master/100/fourier.ml [Generic_Deriving] title = Deriving generic class instances for datatypes author = Jonas Rädle , Lars Hupel topic = Computer science/Data structures date = 2018-11-06 notify = jonas.raedle@gmail.com abstract =

We provide a framework for automatically deriving instances for generic type classes. Our approach is inspired by Haskell's generic-deriving package and Scala's shapeless library. In addition to generating the code for type class functions, we also attempt to automatically prove type class laws for these instances. As of now, however, some manual proofs are still required for recursive datatypes.

Note: There are already articles in the AFP that provide automatic instantiation for a number of classes. Concretely, Deriving allows the automatic instantiation of comparators, linear orders, equality, and hashing. Show instantiates a Haskell-style show class.

Our approach works for arbitrary classes (with some Isabelle/HOL overhead for each class), but only for a smaller set of datatypes.

[Partial_Order_Reduction] title = Partial Order Reduction author = Julian Brunner topic = Computer science/Automata and formal languages date = 2018-06-05 notify = brunnerj@in.tum.de abstract = This entry provides a formalization of the abstract theory of ample set partial order reduction. The formalization includes transition systems with actions, trace theory, as well as basics on finite, infinite, and lazy sequences. We also provide a basic framework for static analysis on concurrent systems with respect to the ample set condition. [CakeML] title = CakeML author = Lars Hupel , Yu Zhang <> contributors = Johannes Åman Pohjola <> topic = Computer science/Programming languages/Language definitions date = 2018-03-12 notify = hupel@in.tum.de abstract = CakeML is a functional programming language with a proven-correct compiler and runtime system. This entry contains an unofficial version of the CakeML semantics that has been exported from the Lem specifications to Isabelle. Additionally, there are some hand-written theory files that adapt the exported code to Isabelle and port proofs from the HOL4 formalization, e.g. termination and equivalence proofs. [CakeML_Codegen] title = A Verified Code Generator from Isabelle/HOL to CakeML author = Lars Hupel topic = Computer science/Programming languages/Compiling, Logic/Rewriting date = 2019-07-08 notify = lars@hupel.info abstract = This entry contains the formalization that accompanies my PhD thesis (see https://lars.hupel.info/research/codegen/). I develop a verified compilation toolchain from executable specifications in Isabelle/HOL to CakeML abstract syntax trees. This improves over the state-of-the-art in Isabelle by providing a trustworthy procedure for code generation. [DiscretePricing] title = Pricing in discrete financial models author = Mnacho Echenim topic = Mathematics/Probability theory, Mathematics/Games and economics date = 2018-07-16 notify = mnacho.echenim@univ-grenoble-alpes.fr abstract = We have formalized the computation of fair prices for derivative products in discrete financial models. As an application, we derive a way to compute fair prices of derivative products in the Cox-Ross-Rubinstein model of a financial market, thus completing the work that was presented in this paper. extra-history = Change history: [2019-05-12]: Renamed discr_mkt predicate to stk_strict_subs and got rid of predicate A for a more natural definition of the type discrete_market; renamed basic quantity processes for coherent notation; renamed value_process into val_process and closing_value_process to cls_val_process; relaxed hypothesis of lemma CRR_market_fair_price. Added functions to price some basic options. (revision 0b813a1a833f)
[Pell] title = Pell's Equation author = Manuel Eberl topic = Mathematics/Number theory date = 2018-06-23 notify = eberlm@in.tum.de abstract =

This article gives the basic theory of Pell's equation x² = 1 + Dy², where D ∈ ℕ is a parameter and x, y are integer variables.

The main result that is proven is the following: If D is not a perfect square, then there exists a fundamental solution (x₀, y₀) that is not the trivial solution (1, 0) and which generates all other solutions (x, y) in the sense that there exists some n ∈ ℕ such that |x| + |y| √D = (x₀ + y₀ √D)ⁿ. This also implies that the set of solutions is infinite, and it gives us an explicit and executable characterisation of all the solutions.

Based on this, simple executable algorithms for computing the fundamental solution and the infinite sequence of all non-negative solutions are also provided.
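
As a rough illustration of the executable side, the following Python sketch computes the fundamental solution by a naive search over y (illustration only, not the algorithm used in the entry; the helper name fundamental_solution is hypothetical):

    from math import isqrt

    def fundamental_solution(D):
        # Smallest solution (x, y) of x^2 = 1 + D*y^2 with y > 0, found by brute
        # force over y; fine for small D, unlike the continued-fraction methods
        # one would use in practice.
        if isqrt(D) ** 2 == D:
            return None                      # no non-trivial solution for perfect squares
        y = 1
        while True:
            x_squared = 1 + D * y * y
            x = isqrt(x_squared)
            if x * x == x_squared:
                return (x, y)
            y += 1

    print(fundamental_solution(2))   # (3, 2)
    print(fundamental_solution(13))  # (649, 180)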

[WebAssembly] title = WebAssembly author = Conrad Watt topic = Computer science/Programming languages/Language definitions date = 2018-04-29 notify = caw77@cam.ac.uk abstract = This is a mechanised specification of the WebAssembly language, drawn mainly from the previously published paper formalisation of Haas et al. Also included is a full proof of soundness of the type system, together with a verified type checker and interpreter. We include only a partial procedure for the extraction of the type checker and interpreter here. For more details, please see our paper in CPP 2018. [Knuth_Morris_Pratt] title = The string search algorithm by Knuth, Morris and Pratt author = Fabian Hellauer , Peter Lammich topic = Computer science/Algorithms date = 2017-12-18 notify = hellauer@in.tum.de, lammich@in.tum.de abstract = The Knuth-Morris-Pratt algorithm is often used to show that the problem of finding a string s in a text t can be solved deterministically in O(|s| + |t|) time. We use the Isabelle Refinement Framework to formulate and verify the algorithm. Via refinement, we apply some optimisations and finally use the Sepref tool to obtain executable code in Imperative/HOL. [Minkowskis_Theorem] title = Minkowski's Theorem author = Manuel Eberl topic = Mathematics/Geometry, Mathematics/Number theory date = 2017-07-13 notify = eberlm@in.tum.de abstract =

Minkowski's theorem relates a subset of ℝⁿ, the Lebesgue measure, and the integer lattice ℤⁿ: It states that any convex subset of ℝⁿ that is symmetric around the origin and has volume greater than 2ⁿ contains at least one lattice point from ℤⁿ\{0}, i.e. a non-zero point with integer coefficients.

A related theorem which directly implies this is Blichfeldt's theorem, which states that any subset of ℝⁿ with volume greater than 1 contains two different points whose difference vector has integer components.

The entry contains a proof of both theorems.
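
In symbols (an informal paraphrase; the Isabelle statements may differ in detail): Blichfeldt's theorem says that $\lambda(S) > 1$ implies the existence of $x, y \in S$ with $x \neq y$ and $x - y \in \mathbb{Z}^n$, while Minkowski's theorem says that for convex $S$ symmetric around the origin, $\lambda(S) > 2^n$ implies $S \cap (\mathbb{Z}^n \setminus \{0\}) \neq \emptyset$.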

[Name_Carrying_Type_Inference] title = Verified Metatheory and Type Inference for a Name-Carrying Simply-Typed Lambda Calculus author = Michael Rawson topic = Computer science/Programming languages/Type systems date = 2017-07-09 notify = mr644@cam.ac.uk, michaelrawson76@gmail.com abstract = I formalise a Church-style simply-typed \(\lambda\)-calculus, extended with pairs, a unit value, and projection functions, and show some metatheory of the calculus, such as the subject reduction property. Particular attention is paid to the treatment of names in the calculus. A nominal style of binding is used, but I use a manual approach over Nominal Isabelle in order to extract an executable type inference algorithm. More information can be found in my undergraduate dissertation. [Propositional_Proof_Systems] title = Propositional Proof Systems author = Julius Michaelis , Tobias Nipkow topic = Logic/Proof theory date = 2017-06-21 notify = maintainafpppt@liftm.de abstract = We formalize a range of proof systems for classical propositional logic (sequent calculus, natural deduction, Hilbert systems, resolution) and prove the most important meta-theoretic results about semantics and proofs: compactness, soundness, completeness, translations between proof systems, cut-elimination, interpolation and model existence. [Optics] title = Optics author = Simon Foster , Frank Zeyda topic = Computer science/Functional programming, Mathematics/Algebra date = 2017-05-25 notify = simon.foster@york.ac.uk abstract = Lenses provide an abstract interface for manipulating data types through spatially-separated views. They are defined abstractly in terms of two functions: get, which returns a value from the source type, and put, which updates the value. We mechanise the underlying theory of lenses, in terms of an algebraic hierarchy of lenses, including well-behaved and very well-behaved lenses, each lens class being characterised by a set of lens laws. We also mechanise a lens algebra in Isabelle that enables their composition and comparison, so as to allow construction of complex lenses. This is accompanied by a large library of algebraic laws. Moreover, we also show how the lens classes can be applied by instantiating them with a number of Isabelle data types. extra-history = Change history: [2020-03-02]: Added partial bijective and symmetric lenses. Improved alphabet command generating additional lenses and results. Several additional lens relations, including observational equivalence. Additional theorems throughout. Adaptations for Isabelle 2020. (revision 44e2e5c) [2021-01-27] Addition of new theorems throughout, particularly for prisms. New "chantype" command allows the definition of an algebraic datatype with generated prisms. New "dataspace" command allows the definition of a local-based state space, including lenses and prisms. Addition of various examples for the above. (revision 89cf045a) [Game_Based_Crypto] title = Game-based cryptography in HOL author = Andreas Lochbihler , S. Reza Sefidgar <>, Bhargav Bhatt topic = Computer science/Security/Cryptography date = 2017-05-05 notify = mail@andreas-lochbihler.de abstract =

In this AFP entry, we show how to specify game-based cryptographic security notions and formally prove secure several cryptographic constructions from the literature using the CryptHOL framework. Among others, we formalise the notions of a random oracle, a pseudo-random function, an unpredictable function, and of encryption schemes that are indistinguishable under chosen plaintext and/or ciphertext attacks. We prove the random-permutation/random-function switching lemma, security of the Elgamal and hashed Elgamal public-key encryption scheme and correctness and security of several constructions with pseudo-random functions.

Our proofs follow the game-hopping style advocated by Shoup and Bellare and Rogaway, from which most of the examples have been taken. We generalise some of their results such that they can be reused in other proofs. Thanks to CryptHOL's integration with Isabelle's parametricity infrastructure, many simple hops are easily justified using the theory of representation independence.

extra-history = Change history: [2018-09-28]: added the CryptHOL tutorial for game-based cryptography (revision 489a395764ae) [Multi_Party_Computation] title = Multi-Party Computation author = David Aspinall , David Butler topic = Computer science/Security date = 2019-05-09 notify = dbutler@turing.ac.uk abstract = We use CryptHOL to consider Multi-Party Computation (MPC) protocols. MPC was first considered by Yao in 1983 and recent advances in efficiency and an increased demand mean it is now deployed in the real world. Security is considered using the real/ideal world paradigm. We first define security in the semi-honest security setting where parties are assumed not to deviate from the protocol transcript. In this setting we prove multiple Oblivious Transfer (OT) protocols secure and then show security for the gates of the GMW protocol. We then define malicious security, this is a stronger notion of security where parties are assumed to be fully corrupted by an adversary. In this setting we again consider OT, as it is a fundamental building block of almost all MPC protocols. [Sigma_Commit_Crypto] title = Sigma Protocols and Commitment Schemes author = David Butler , Andreas Lochbihler topic = Computer science/Security/Cryptography date = 2019-10-07 notify = dbutler@turing.ac.uk abstract = We use CryptHOL to formalise commitment schemes and Sigma-protocols. Both are widely used fundamental two party cryptographic primitives. Security for commitment schemes is considered using game-based definitions whereas the security of Sigma-protocols is considered using both the game-based and simulation-based security paradigms. In this work, we first define security for both primitives and then prove secure multiple case studies: the Schnorr, Chaum-Pedersen and Okamoto Sigma-protocols as well as a construction that allows for compound (AND and OR statements) Sigma-protocols and the Pedersen and Rivest commitment schemes. We also prove that commitment schemes can be constructed from Sigma-protocols. We formalise this proof at an abstract level, only assuming the existence of a Sigma-protocol; consequently, the instantiations of this result for the concrete Sigma-protocols we consider come for free. [CryptHOL] title = CryptHOL author = Andreas Lochbihler topic = Computer science/Security/Cryptography, Computer science/Functional programming, Mathematics/Probability theory date = 2017-05-05 notify = mail@andreas-lochbihler.de abstract =

CryptHOL provides a framework for formalising cryptographic arguments in Isabelle/HOL. It shallowly embeds a probabilistic functional programming language in higher-order logic. The language features monadic sequencing, recursion, random sampling, failures and failure handling, and black-box access to oracles. Oracles are probabilistic functions which maintain hidden state between different invocations. All operators are defined in the new semantic domain of generative probabilistic values, a codatatype. We derive proof rules for the operators and establish a connection with the theory of relational parametricity. Thus, the resulting proofs are trustworthy and comprehensible, and the framework is extensible and widely applicable.

The framework is used in the accompanying AFP entry "Game-based Cryptography in HOL". There, we show-case our framework by formalizing different game-based proofs from the literature. This formalisation continues the work described in the author's ESOP 2016 paper.

[Constructive_Cryptography] title = Constructive Cryptography in HOL author = Andreas Lochbihler , S. Reza Sefidgar<> topic = Computer science/Security/Cryptography, Mathematics/Probability theory date = 2018-12-17 notify = mail@andreas-lochbihler.de, reza.sefidgar@inf.ethz.ch abstract = Inspired by Abstract Cryptography, we extend CryptHOL, a framework for formalizing game-based proofs, with an abstract model of Random Systems and provide proof rules about their composition and equality. This foundation facilitates the formalization of Constructive Cryptography proofs, where the security of a cryptographic scheme is realized as a special form of construction in which a complex random system is built from simpler ones. This is a first step towards a fully-featured compositional framework, similar to Universal Composability framework, that supports formalization of simulation-based proofs. [Probabilistic_While] title = Probabilistic while loop author = Andreas Lochbihler topic = Computer science/Functional programming, Mathematics/Probability theory, Computer science/Algorithms date = 2017-05-05 notify = mail@andreas-lochbihler.de abstract = This AFP entry defines a probabilistic while operator based on sub-probability mass functions and formalises zero-one laws and variant rules for probabilistic loop termination. As applications, we implement probabilistic algorithms for the Bernoulli, geometric and arbitrary uniform distributions that only use fair coin flips, and prove them correct and terminating with probability 1. extra-history = Change history: [2018-02-02]: Added a proof that probabilistic conditioning can be implemented by repeated sampling. (revision 305867c4e911)
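
To give a flavour of what sampling a distribution from fair coin flips only means in the Probabilistic while loop entry, here is a minimal Python sketch of the standard lazy binary-expansion construction for a Bernoulli distribution (an illustration under that assumption, not the formalised algorithm):

    import random

    def bernoulli(p):
        # Returns True with probability p using only fair coin flips: lazily compare
        # a uniformly random binary expansion against the binary expansion of p.
        while True:
            coin = random.randrange(2)        # one fair coin flip
            p *= 2                            # shift out the next binary digit of p
            digit = 1 if p >= 1.0 else 0
            if digit == 1:
                p -= 1.0
            if coin < digit:                  # uniform bit 0 < digit 1: definitely below p
                return True
            if coin > digit:                  # uniform bit 1 > digit 0: definitely above p
                return False
            # coin == digit: still undecided, inspect the next digit
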
[Monad_Normalisation] title = Monad normalisation author = Joshua Schneider <>, Manuel Eberl , Andreas Lochbihler topic = Tools, Computer science/Functional programming, Logic/Rewriting date = 2017-05-05 notify = mail@andreas-lochbihler.de abstract = The usual monad laws can directly be used as rewrite rules for Isabelle’s simplifier to normalise monadic HOL terms and decide equivalences. In a commutative monad, however, the commutativity law is a higher-order permutative rewrite rule that makes the simplifier loop. This AFP entry implements a simproc that normalises monadic expressions in commutative monads using ordered rewriting. The simproc can also permute computations across control operators like if and case. [Monomorphic_Monad] title = Effect polymorphism in higher-order logic author = Andreas Lochbihler topic = Computer science/Functional programming date = 2017-05-05 notify = mail@andreas-lochbihler.de abstract = The notion of a monad cannot be expressed within higher-order logic (HOL) due to type system restrictions. We show that if a monad is used with values of only one type, this notion can be formalised in HOL. Based on this idea, we develop a library of effect specifications and implementations of monads and monad transformers. Hence, we can abstract over the concrete monad in HOL definitions and thus use the same definition for different (combinations of) effects. We illustrate the usefulness of effect polymorphism with a monadic interpreter for a simple language. extra-history = Change history: [2018-02-15]: added further specifications and implementations of non-determinism; more examples (revision bc5399eea78e)
[Constructor_Funs] title = Constructor Functions author = Lars Hupel topic = Tools date = 2017-04-19 notify = hupel@in.tum.de abstract = Isabelle's code generator performs various adaptations for target languages. Among others, constructor applications have to be fully saturated. That means that for constructor calls occuring as arguments to higher-order functions, synthetic lambdas have to be inserted. This entry provides tooling to avoid this construction altogether by introducing constructor functions. [Lazy_Case] title = Lazifying case constants author = Lars Hupel topic = Tools date = 2017-04-18 notify = hupel@in.tum.de abstract = Isabelle's code generator performs various adaptations for target languages. Among others, case statements are printed as match expressions. Internally, this is a sophisticated procedure, because in HOL, case statements are represented as nested calls to the case combinators as generated by the datatype package. Furthermore, the procedure relies on laziness of match expressions in the target language, i.e., that branches guarded by patterns that fail to match are not evaluated. Similarly, if-then-else is printed to the corresponding construct in the target language. This entry provides tooling to replace these special cases in the code generator by ignoring these target language features, instead printing case expressions and if-then-else as functions. [Dict_Construction] title = Dictionary Construction author = Lars Hupel topic = Tools date = 2017-05-24 notify = hupel@in.tum.de abstract = Isabelle's code generator natively supports type classes. For targets that do not have language support for classes and instances, it performs the well-known dictionary translation, as described by Haftmann and Nipkow. This translation happens outside the logic, i.e., there is no guarantee that it is correct, besides the pen-and-paper proof. This work implements a certified dictionary translation that produces new class-free constants and derives equality theorems. [Higher_Order_Terms] title = An Algebra for Higher-Order Terms author = Lars Hupel contributors = Yu Zhang <> topic = Computer science/Programming languages/Lambda calculi date = 2019-01-15 notify = lars@hupel.info abstract = In this formalization, I introduce a higher-order term algebra, generalizing the notions of free variables, matching, and substitution. The need arose from the work on a verified compiler from Isabelle to CakeML. Terms can be thought of as consisting of a generic (free variables, constants, application) and a specific part. As example applications, this entry provides instantiations for de-Bruijn terms, terms with named variables, and Blanchette’s λ-free higher-order terms. Furthermore, I implement translation functions between de-Bruijn terms and named terms and prove their correctness. [Subresultants] title = Subresultants author = Sebastiaan Joosten , René Thiemann , Akihisa Yamada topic = Mathematics/Algebra date = 2017-04-06 notify = rene.thiemann@uibk.ac.at abstract = We formalize the theory of subresultants and the subresultant polynomial remainder sequence as described by Brown and Traub. As a result, we obtain efficient certified algorithms for computing the resultant and the greatest common divisor of polynomials. [Comparison_Sort_Lower_Bound] title = Lower bound on comparison-based sorting algorithms author = Manuel Eberl topic = Computer science/Algorithms date = 2017-03-15 notify = eberlm@in.tum.de abstract =

This article contains a formal proof of the well-known fact that the number of comparisons that a comparison-based sorting algorithm needs to perform to sort a list of length n is at least log₂(n!) in the worst case, i.e. Ω(n log n).
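
For orientation, the asymptotic claim follows from the elementary estimate $\log_2(n!) = \sum_{i=1}^{n} \log_2 i \geq \sum_{i=\lceil n/2 \rceil}^{n} \log_2(n/2) \geq \tfrac{n}{2}(\log_2 n - 1) \in \Omega(n \log n)$ (given here only as a reading aid; the formal proof may proceed differently).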

For this purpose, a shallow embedding for comparison-based sorting algorithms is defined: a sorting algorithm is a recursive datatype containing either a HOL function or a query of a comparison oracle with a continuation containing the remaining computation. This makes it possible to force the algorithm to use only comparisons and to track the number of comparisons made.

[Quick_Sort_Cost] title = The number of comparisons in QuickSort author = Manuel Eberl topic = Computer science/Algorithms date = 2017-03-15 notify = eberlm@in.tum.de abstract =

We give a formal proof of the well-known results about the number of comparisons performed by two variants of QuickSort: first, the expected number of comparisons of randomised QuickSort (i.e. QuickSort with random pivot choice) is 2(n+1)Hₙ − 4n, which is asymptotically equivalent to 2n ln n; second, the number of comparisons performed by the classic non-randomised QuickSort has the same distribution in the average case as the randomised one.
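
For orientation, the closed form arises from the textbook recurrence $C_0 = 0$, $C_n = (n-1) + \tfrac{2}{n}\sum_{k=0}^{n-1} C_k$ for the expected number of comparisons, whose solution is $C_n = 2(n+1)H_n - 4n \sim 2n \ln n$ (stated informally here; the formalisation may derive it differently).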

[Random_BSTs] title = Expected Shape of Random Binary Search Trees author = Manuel Eberl topic = Computer science/Data structures date = 2017-04-04 notify = eberlm@in.tum.de abstract =

This entry contains proofs for the textbook results about the distributions of the height and internal path length of random binary search trees (BSTs), i. e. BSTs that are formed by taking an empty BST and inserting elements from a fixed set in random order.

In particular, we prove a logarithmic upper bound on the expected height and the Θ(n log n) closed-form solution for the expected internal path length in terms of the harmonic numbers. We also show how the internal path length relates to the average-case cost of a lookup in a BST.
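
A quick empirical sanity check of the path-length result can be done outside Isabelle. The following Python sketch (illustrative only; the helper name internal_path_length is hypothetical) inserts a random permutation into an unbalanced BST and compares the average internal path length against 2(n+1)Hₙ − 4n, the same closed form as for the expected comparison count of randomised QuickSort:

    import random

    def internal_path_length(keys):
        # Insert the (distinct) keys in the given order into an unbalanced BST and
        # return the sum of the depths of all nodes (the root has depth 0).
        left, right, total = {}, {}, 0
        it = iter(keys)
        root = next(it)
        for x in it:
            node, depth = root, 0
            while True:
                depth += 1
                child = left if x < node else right
                if node in child:
                    node = child[node]
                else:
                    child[node] = x
                    break
            total += depth
        return total

    n, trials = 200, 200
    average = sum(internal_path_length(random.sample(range(n), n)) for _ in range(trials)) / trials
    harmonic = sum(1 / k for k in range(1, n + 1))
    print(average, 2 * (n + 1) * harmonic - 4 * n)   # the two values should be close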

[Randomised_BSTs] title = Randomised Binary Search Trees author = Manuel Eberl topic = Computer science/Data structures date = 2018-10-19 notify = eberlm@in.tum.de abstract =

This work is a formalisation of the Randomised Binary Search Trees introduced by Martínez and Roura, including definitions and correctness proofs.

Like randomised treaps, they are a probabilistic data structure that behaves exactly as if elements were inserted into a non-balancing BST in random order. However, unlike treaps, they only use discrete probability distributions, but their use of randomness is more complicated.
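
The core idea of the insertion procedure can be sketched as follows, a simplified Python rendering under the assumption that insertion places the new key at the root with probability 1/(n+1) (it recomputes subtree sizes instead of caching them and is not the verified algorithm from the entry):

    import random

    class Node:
        def __init__(self, key, left=None, right=None):
            self.key, self.left, self.right = key, left, right

    def size(t):
        return 0 if t is None else 1 + size(t.left) + size(t.right)

    def split(t, x):
        # Split t into the trees of keys < x and keys > x (x assumed not present).
        if t is None:
            return None, None
        if x < t.key:
            smaller, larger = split(t.left, x)
            t.left = larger
            return smaller, t
        smaller, larger = split(t.right, x)
        t.right = smaller
        return t, larger

    def insert(t, x):
        n = size(t)                          # a real implementation caches sizes
        if random.randrange(n + 1) == 0:     # with probability 1/(n+1): new root
            smaller, larger = split(t, x)
            return Node(x, smaller, larger)
        if x < t.key:
            t.left = insert(t.left, x)
        else:
            t.right = insert(t.right, x)
        return t

    t = None
    for k in [5, 1, 4, 2, 3]:
        t = insert(t, k)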

[E_Transcendental] title = The Transcendence of e author = Manuel Eberl topic = Mathematics/Analysis, Mathematics/Number theory date = 2017-01-12 notify = eberlm@in.tum.de abstract =

This work contains a proof that Euler's number e is transcendental. The proof follows the standard approach of assuming that e is algebraic and then using a specific integer polynomial to derive two inconsistent bounds, leading to a contradiction.

This kind of approach can be found in many different sources; this formalisation mostly follows a PlanetMath article by Roger Lipsett.

[Pi_Transcendental] title = The Transcendence of π author = Manuel Eberl topic = Mathematics/Number theory date = 2018-09-28 notify = eberlm@in.tum.de abstract =

This entry shows the transcendence of π based on the classic proof using the fundamental theorem of symmetric polynomials first given by von Lindemann in 1882, but the formalisation mostly follows the version by Niven. The proof reuses much of the machinery developed in the AFP entry on the transcendence of e.

[Hermite_Lindemann] title = The Hermite–Lindemann–Weierstraß Transcendence Theorem author = Manuel Eberl topic = Mathematics/Number theory date = 2021-03-03 notify = eberlm@in.tum.de abstract =

This article provides a formalisation of the Hermite-Lindemann-Weierstraß Theorem (also known as simply Hermite-Lindemann or Lindemann-Weierstraß). This theorem is one of the crowning achievements of 19th century number theory.

The theorem states that if $\alpha_1, \ldots, \alpha_n\in\mathbb{C}$ are algebraic numbers that are linearly independent over $\mathbb{Z}$, then $e^{\alpha_1},\ldots,e^{\alpha_n}$ are algebraically independent over $\mathbb{Q}$.

Like the previous formalisation in Coq by Bernard, I proceeded by formalising Baker's version of the theorem and proof and then deriving the original one from that. Baker's version states that for any algebraic numbers $\beta_1, \ldots, \beta_n\in\mathbb{C}$ and distinct algebraic numbers $\alpha_1, \ldots, \alpha_n\in\mathbb{C}$, we have $\beta_1 e^{\alpha_1} + \ldots + \beta_n e^{\alpha_n} = 0$ if and only if all the $\beta_i$ are zero.

This has a number of direct corollaries, e.g.:

  • $e$ and $\pi$ are transcendental
  • $e^z$, $\sin z$, $\tan z$, etc. are transcendental for algebraic $z\in\mathbb{C}\setminus\{0\}$
  • $\ln z$ is transcendental for algebraic $z\in\mathbb{C}\setminus\{0, 1\}$
[DFS_Framework] title = A Framework for Verifying Depth-First Search Algorithms author = Peter Lammich , René Neumann notify = lammich@in.tum.de date = 2016-07-05 topic = Computer science/Algorithms/Graph abstract =

This entry presents a framework for the modular verification of DFS-based algorithms, which is described in our [CPP-2015] paper. It provides a generic DFS algorithm framework that can be parameterized with user-defined actions on certain events (e.g. the discovery of a new node). It comes with an extensible library of invariants, which can be used to derive invariants of a specific parameterization. Using refinement techniques, efficient implementations of the algorithms can easily be derived. Here, the framework comes with templates for a recursive and a tail-recursive implementation, and also with several templates for implementing the data structures required by the DFS algorithm. Finally, this entry contains a set of re-usable DFS-based algorithms, which illustrate the application of the framework.

[CPP-2015] Peter Lammich, René Neumann: A Framework for Verifying Depth-First Search Algorithms. CPP 2015: 137-146
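
The idea of parameterizing a fixed traversal with user-defined actions on events can be illustrated by the following small Python analogue (a sketch only, unrelated to the Isabelle framework's actual interface):

    def dfs(graph, start, on_discover=None, on_finish=None, on_back_edge=None):
        # Fixed traversal skeleton; the behaviour is supplied through optional
        # callbacks, analogous to parameterizing the framework with actions on events.
        GRAY, BLACK = 1, 2
        colour = {}
        def visit(u):
            colour[u] = GRAY
            if on_discover:
                on_discover(u)
            for v in graph.get(u, []):
                seen = colour.get(v)
                if seen is None:
                    visit(v)
                elif seen == GRAY and on_back_edge:
                    on_back_edge(u, v)        # a back edge witnesses a cycle
            colour[u] = BLACK
            if on_finish:
                on_finish(u)
        visit(start)

    # Example parameterization: collect back edges reachable from node 0.
    cycle_edges = []
    dfs({0: [1, 2], 1: [2], 2: [0]}, 0, on_back_edge=lambda u, v: cycle_edges.append((u, v)))
    print(cycle_edges)   # [(2, 0)]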

[Flow_Networks] title = Flow Networks and the Min-Cut-Max-Flow Theorem author = Peter Lammich , S. Reza Sefidgar <> topic = Mathematics/Graph theory date = 2017-06-01 notify = lammich@in.tum.de abstract = We present a formalization of flow networks and the Min-Cut-Max-Flow theorem. Our formal proof closely follows a standard textbook proof, and is accessible even without being an expert in Isabelle/HOL, the interactive theorem prover used for the formalization. [Prpu_Maxflow] title = Formalizing Push-Relabel Algorithms author = Peter Lammich , S. Reza Sefidgar <> topic = Computer science/Algorithms/Graph, Mathematics/Graph theory date = 2017-06-01 notify = lammich@in.tum.de abstract = We present a formalization of push-relabel algorithms for computing the maximum flow in a network. We start with Goldberg's et al.~generic push-relabel algorithm, for which we show correctness and the time complexity bound of O(V^2E). We then derive the relabel-to-front and FIFO implementation. Using stepwise refinement techniques, we derive an efficient verified implementation. Our formal proof of the abstract algorithms closely follows a standard textbook proof. It is accessible even without being an expert in Isabelle/HOL, the interactive theorem prover used for the formalization. [Buildings] title = Chamber Complexes, Coxeter Systems, and Buildings author = Jeremy Sylvestre notify = jeremy.sylvestre@ualberta.ca date = 2016-07-01 topic = Mathematics/Algebra, Mathematics/Geometry abstract = We provide a basic formal framework for the theory of chamber complexes and Coxeter systems, and for buildings as thick chamber complexes endowed with a system of apartments. Along the way, we develop some of the general theory of abstract simplicial complexes and of groups (relying on the group_add class for the basics), including free groups and group presentations, and their universal properties. The main results verified are that the deletion condition is both necessary and sufficient for a group with a set of generators of order two to be a Coxeter system, and that the apartments in a (thick) building are all uniformly Coxeter. [Algebraic_VCs] title = Program Construction and Verification Components Based on Kleene Algebra author = Victor B. F. Gomes , Georg Struth notify = victor.gomes@cl.cam.ac.uk, g.struth@sheffield.ac.uk date = 2016-06-18 topic = Mathematics/Algebra abstract = Variants of Kleene algebra support program construction and verification by algebraic reasoning. This entry provides a verification component for Hoare logic based on Kleene algebra with tests, verification components for weakest preconditions and strongest postconditions based on Kleene algebra with domain and a component for step-wise refinement based on refinement Kleene algebra with tests. In addition to these components for the partial correctness of while programs, a verification component for total correctness based on divergence Kleene algebras and one for (partial correctness) of recursive programs based on domain quantales are provided. Finally we have integrated memory models for programs with pointers and a program trace semantics into the weakest precondition component. 
[C2KA_DistributedSystems] title = Communicating Concurrent Kleene Algebra for Distributed Systems Specification author = Maxime Buyse , Jason Jaskolka topic = Computer science/Automata and formal languages, Mathematics/Algebra date = 2019-08-06 notify = maxime.buyse@polytechnique.edu, jason.jaskolka@carleton.ca abstract = Communicating Concurrent Kleene Algebra (C²KA) is a mathematical framework for capturing the communicating and concurrent behaviour of agents in distributed systems. It extends Hoare et al.'s Concurrent Kleene Algebra (CKA) with communication actions through the notions of stimuli and shared environments. C²KA has applications in studying system-level properties of distributed systems such as safety, security, and reliability. In this work, we formalize results about C²KA and its application for distributed systems specification. We first formalize the stimulus structure and behaviour structure (CKA). Next, we combine them to formalize C²KA and its properties. Then, we formalize notions and properties related to the topology of distributed systems and the potential for communication via stimuli and via shared environments of agents, all within the algebraic setting of C²KA. [Card_Equiv_Relations] title = Cardinality of Equivalence Relations author = Lukas Bulwahn notify = lukas.bulwahn@gmail.com date = 2016-05-24 topic = Mathematics/Combinatorics abstract = This entry provides formulae for counting the number of equivalence relations and partial equivalence relations over a finite carrier set with given cardinality. To count the number of equivalence relations, we provide bijections between equivalence relations and set partitions, and then transfer the main results of the two AFP entries, Cardinality of Set Partitions and Spivey's Generalized Recurrence for Bell Numbers, to theorems on equivalence relations. To count the number of partial equivalence relations, we observe that counting partial equivalence relations over a set A is equivalent to counting all equivalence relations over all subsets of the set A. From this observation and the results on equivalence relations, we show that the cardinality of partial equivalence relations over a finite set of cardinality n is equal to the n+1-th Bell number. [Twelvefold_Way] title = The Twelvefold Way author = Lukas Bulwahn topic = Mathematics/Combinatorics date = 2016-12-29 notify = lukas.bulwahn@gmail.com abstract = This entry provides all cardinality theorems of the Twelvefold Way. The Twelvefold Way systematically classifies twelve related combinatorial problems concerning two finite sets, which include counting permutations, combinations, multisets, set partitions and number partitions. This development builds upon the existing formal developments with cardinality theorems for those structures. It provides twelve bijections from the various structures to different equivalence classes on finite functions, and hence, proves cardinality formulae for these equivalence classes on finite functions. [Chord_Segments] title = Intersecting Chords Theorem author = Lukas Bulwahn notify = lukas.bulwahn@gmail.com date = 2016-10-11 topic = Mathematics/Geometry abstract = This entry provides a geometric proof of the intersecting chords theorem. The theorem states that when two chords intersect each other inside a circle, the products of their segments are equal. 
After a short review of existing proofs in the literature, I decided to use a proof approach that employs reasoning about lengths of line segments, the orthogonality of two lines and the Pythagoras Law. Hence, one can understand the formalized proof easily with the knowledge of a few general geometric facts that are commonly taught in high-school. This theorem is the 55th theorem of the Top 100 Theorems list. [Category3] title = Category Theory with Adjunctions and Limits author = Eugene W. Stark notify = stark@cs.stonybrook.edu date = 2016-06-26 topic = Mathematics/Category theory abstract =

This article attempts to develop a usable framework for doing category theory in Isabelle/HOL. Our point of view, which to some extent differs from that of the previous AFP articles on the subject, is to try to explore how category theory can be done efficaciously within HOL, rather than trying to match exactly the way things are done using a traditional approach. To this end, we define the notion of category in an "object-free" style, in which a category is represented by a single partial composition operation on arrows. This way of defining categories provides some advantages in the context of HOL, including the ability to avoid the use of records and the possibility of defining functors and natural transformations simply as certain functions on arrows, rather than as composite objects. We define various constructions associated with the basic notions, including: dual category, product category, functor category, discrete category, free category, functor composition, and horizontal and vertical composite of natural transformations. A "set category" locale is defined that axiomatizes the notion "category of all sets at a type and all functions between them," and a fairly extensive set of properties of set categories is derived from the locale assumptions. The notion of a set category is used to prove the Yoneda Lemma in a general setting of a category equipped with a "hom embedding," which maps arrows of the category to the "universe" of the set category. We also give a treatment of adjunctions, defining adjunctions via left and right adjoint functors, natural bijections between hom-sets, and unit and counit natural transformations, and showing the equivalence of these definitions. We also develop the theory of limits, including representations of functors, diagrams and cones, and diagonal functors. We show that right adjoint functors preserve limits, and that limits can be constructed via products and equalizers. We characterize the conditions under which limits exist in a set category. We also examine the case of limits in a functor category, ultimately culminating in a proof that the Yoneda embedding preserves limits.

Revisions made subsequent to the first version of this article added material on equivalence of categories, cartesian categories, categories with pullbacks, categories with finite limits, and cartesian closed categories. A construction was given of the category of hereditarily finite sets and functions between them, and it was shown that this category is cartesian closed.

extra-history = Change history: [2018-05-29]: Revised axioms for the category locale. Introduced notation for composition and "in hom". (revision 8318366d4575)
[2020-02-15]: Move ConcreteCategory.thy from Bicategory to Category3 and use it systematically. Make other minor improvements throughout. (revision a51840d36867)
[2020-07-10]: Added new material, mostly centered around cartesian categories. (revision 06640f317a79)
[2020-11-04]: Minor modifications and extensions made in conjunction with the addition of new material to Bicategory. (revision 472cb2268826)
[MonoidalCategory] title = Monoidal Categories author = Eugene W. Stark topic = Mathematics/Category theory date = 2017-05-04 notify = stark@cs.stonybrook.edu abstract =

Building on the formalization of basic category theory set out in the author's previous AFP article, the present article formalizes some basic aspects of the theory of monoidal categories. Among the notions defined here are monoidal category, monoidal functor, and equivalence of monoidal categories. The main theorems formalized are MacLane's coherence theorem and the constructions of the free monoidal category and free strict monoidal category generated by a given category. The coherence theorem is proved syntactically, using a structurally recursive approach to reduction of terms that might have some novel aspects. We also give proofs of some results given by Etingof et al, which may prove useful in a formal setting. In particular, we show that the left and right unitors need not be taken as given data in the definition of monoidal category, nor does the definition of monoidal functor need to take as given a specific isomorphism expressing the preservation of the unit object. Our definitions of monoidal category and monoidal functor are stated so as to take advantage of the economy afforded by these facts.

Revisions made subsequent to the first version of this article added material on cartesian monoidal categories; showing that the underlying category of a cartesian monoidal category is a cartesian category, and that every cartesian category extends to a cartesian monoidal category.

extra-history = Change history: [2017-05-18]: Integrated material from MonoidalCategory/Category3Adapter into Category3/ and deleted adapter. (revision 015543cdd069)
[2018-05-29]: Modifications required due to 'Category3' changes. Introduced notation for "in hom". (revision 8318366d4575)
[2020-02-15]: Cosmetic improvements. (revision a51840d36867)
[2020-07-10]: Added new material on cartesian monoidal categories. (revision 06640f317a79)
[Card_Multisets] title = Cardinality of Multisets author = Lukas Bulwahn notify = lukas.bulwahn@gmail.com date = 2016-06-26 topic = Mathematics/Combinatorics abstract =

This entry provides three lemmas to count the number of multisets of a given size and finite carrier set. The first lemma provides a cardinality formula assuming that the multiset's elements are chosen from the given carrier set. The latter two lemmas provide formulas assuming that the multiset's elements also cover the given carrier set, i.e., each element of the carrier set occurs in the multiset at least once.

The proof of the first lemma uses the argument of the recurrence relation for counting multisets. The proof of the second lemma is straightforward, and the proof of the third lemma is easily obtained using the first cardinality lemma. A challenge for the formalization is the derivation of the required induction rule, which is a special combination of the induction rules for finite sets and natural numbers. The induction rule is derived by defining a suitable inductive predicate and transforming the predicate's induction rule.
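
For reference, the counting formulas in question are presumably the standard ones: there are $\binom{n+k-1}{k}$ multisets of size $k$ whose elements are drawn from an $n$-element carrier set, and $\binom{k-1}{n-1}$ of them if every carrier element has to occur at least once (the entry itself should be consulted for the exact statements).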

[Posix-Lexing] title = POSIX Lexing with Derivatives of Regular Expressions author = Fahad Ausaf , Roy Dyckhoff , Christian Urban notify = christian.urban@kcl.ac.uk date = 2016-05-24 topic = Computer science/Automata and formal languages abstract = Brzozowski introduced the notion of derivatives for regular expressions. They can be used for a very simple regular expression matching algorithm. Sulzmann and Lu cleverly extended this algorithm in order to deal with POSIX matching, which is the underlying disambiguation strategy for regular expressions needed in lexers. In this entry we give our inductive definition of what a POSIX value is and show (i) that such a value is unique (for given regular expression and string being matched) and (ii) that Sulzmann and Lu's algorithm always generates such a value (provided that the regular expression matches the string). We also prove the correctness of an optimised version of the POSIX matching algorithm. [LocalLexing] title = Local Lexing author = Steven Obua topic = Computer science/Automata and formal languages date = 2017-04-28 notify = steven@recursivemind.com abstract = This formalisation accompanies the paper Local Lexing which introduces a novel parsing concept of the same name. The paper also gives a high-level algorithm for local lexing as an extension of Earley's algorithm. This formalisation proves the algorithm to be correct with respect to its local lexing semantics. As a special case, this formalisation thus also contains a proof of the correctness of Earley's algorithm. The paper contains a short outline of how this formalisation is organised. [MFMC_Countable] title = A Formal Proof of the Max-Flow Min-Cut Theorem for Countable Networks author = Andreas Lochbihler date = 2016-05-09 topic = Mathematics/Graph theory abstract = This article formalises a proof of the maximum-flow minimal-cut theorem for networks with countably many edges. A network is a directed graph with non-negative real-valued edge labels and two dedicated vertices, the source and the sink. A flow in a network assigns non-negative real numbers to the edges such that for all vertices except for the source and the sink, the sum of values on incoming edges equals the sum of values on outgoing edges. A cut is a subset of the vertices which contains the source, but not the sink. Our theorem states that in every network, there is a flow and a cut such that the flow saturates all the edges going out of the cut and is zero on all the incoming edges. The proof is based on the paper The Max-Flow Min-Cut theorem for countable networks by Aharoni et al. Additionally, we prove a characterisation of the lifting operation for relations on discrete probability distributions, which leads to a concise proof of its distributivity over relation composition. notify = mail@andreas-lochbihler.de extra-history = Change history: [2017-09-06]: derive characterisation for the lifting operations on discrete distributions from finite version of the max-flow min-cut theorem (revision a7a198f5bab0)
[2020-12-19]: simpler proof of linkability for bounded unhindered bipartite webs, leading to a simpler proof for networks with bounded out-capacities (revision 93ca33f4d915)
[Liouville_Numbers] title = Liouville numbers author = Manuel Eberl date = 2015-12-28 topic = Mathematics/Analysis, Mathematics/Number theory abstract =

Liouville numbers are a class of transcendental numbers that can be approximated particularly well with rational numbers. Historically, they were the first numbers whose transcendence was proven.

In this entry, we define the concept of Liouville numbers as well as the standard construction to obtain Liouville numbers (including Liouville's constant) and we prove their most important properties: irrationality and transcendence.

The proof is very elementary and requires only standard arithmetic, the Mean Value Theorem for polynomials, and the boundedness of polynomials on compact intervals.
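
Concretely, a real number $x$ is a Liouville number if for every $n \in \mathbb{N}$ there are integers $p$ and $q$ with $q > 1$ such that $0 < |x - p/q| < q^{-n}$; Liouville's constant is the standard example $\sum_{k \geq 1} 10^{-k!}$ (given here for the reader's convenience; the Isabelle definitions may be phrased slightly differently).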

notify = eberlm@in.tum.de [Triangle] title = Basic Geometric Properties of Triangles author = Manuel Eberl date = 2015-12-28 topic = Mathematics/Geometry abstract =

This entry contains a definition of angles between vectors and between three points. Building on this, we prove basic geometric properties of triangles, such as the Isosceles Triangle Theorem, the Law of Sines and the Law of Cosines, that the sum of the angles of a triangle is π, and the congruence theorems for triangles.

The definitions and proofs were developed following those by John Harrison in HOL Light. However, due to Isabelle's type class system, all definitions and theorems in the Isabelle formalisation hold for all real inner product spaces.
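
For reference, the named results are the familiar ones: in a triangle with side lengths $a, b, c$ and opposite angles $\alpha, \beta, \gamma$, the Law of Sines states $a/\sin\alpha = b/\sin\beta = c/\sin\gamma$, the Law of Cosines states $c^2 = a^2 + b^2 - 2ab\cos\gamma$, and the angle sum is $\alpha + \beta + \gamma = \pi$ (paraphrased here; the formal statements in the entry are phrased for inner product spaces).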

notify = eberlm@in.tum.de [Prime_Harmonic_Series] title = The Divergence of the Prime Harmonic Series author = Manuel Eberl date = 2015-12-28 topic = Mathematics/Number theory abstract =

In this work, we prove the lower bound ln(H_n) − ln(5/3) for the partial sums of the Prime Harmonic Series and, based on this, the divergence of the Prime Harmonic Series ∑[p prime] 1/p.

The proof relies on the unique squarefree decomposition of natural numbers. This is similar to Euler's original proof (which was highly informal and morally questionable). Its advantage over proofs by contradiction, like the famous one by Paul Erdős, is that it provides a relatively good lower bound for the partial sums.
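
Divergence then follows from this bound together with the elementary estimate $H_n \geq \ln(n+1)$: the partial sums are bounded below by $\ln(\ln(n+1)) - \ln(5/3)$, which tends to infinity (a one-line summary; see the entry for the formal chain of inequalities).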

notify = eberlm@in.tum.de [Descartes_Sign_Rule] title = Descartes' Rule of Signs author = Manuel Eberl date = 2015-12-28 topic = Mathematics/Analysis abstract =

Descartes' Rule of Signs relates the number of positive real roots of a polynomial with the number of sign changes in its coefficient sequence.

Our proof follows the simple inductive proof given by Rob Arthan, which was also used by John Harrison in his HOL Light formalisation. We proved most of the lemmas for arbitrary linearly-ordered integrity domains (e.g. integers, rationals, reals); the main result, however, requires the intermediate value theorem and was therefore only proven for real polynomials.
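
For reference, the rule itself states: if a real polynomial has $v$ sign changes in its coefficient sequence and $r$ positive real roots counted with multiplicity, then $r \leq v$ and $v - r$ is even (paraphrased here; the formalised statement should be consulted for the exact setting).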

notify = eberlm@in.tum.de [Euler_MacLaurin] title = The Euler–MacLaurin Formula author = Manuel Eberl topic = Mathematics/Analysis date = 2017-03-10 notify = eberlm@in.tum.de abstract =

The Euler-MacLaurin formula relates the value of a discrete sum to that of the corresponding integral in terms of the derivatives at the borders of the summation and a remainder term. Since the remainder term is often very small as the summation bounds grow, this can be used to compute asymptotic expansions for sums.

This entry contains a proof of this formula for functions from the reals to an arbitrary Banach space. Two variants of the formula are given: the standard textbook version and a variant outlined in Concrete Mathematics that is more useful for deriving asymptotic estimates.

As example applications, we use that formula to derive the full asymptotic expansion of the harmonic numbers and the sum of inverse squares.
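
In its standard textbook form the formula reads $\sum_{i=m}^{n} f(i) = \int_m^n f(x)\,\mathrm{d}x + \frac{f(m) + f(n)}{2} + \sum_{k=1}^{K} \frac{B_{2k}}{(2k)!}\left(f^{(2k-1)}(n) - f^{(2k-1)}(m)\right) + R_K$, with Bernoulli numbers $B_{2k}$ and a remainder term $R_K$; applied to $f(x) = 1/x$ it yields the familiar expansion $H_n = \ln n + \gamma + \frac{1}{2n} - \frac{1}{12n^2} + \ldots$ (this paraphrase is for orientation only and need not match the exact variants formalised here).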

[Card_Partitions] title = Cardinality of Set Partitions author = Lukas Bulwahn date = 2015-12-12 topic = Mathematics/Combinatorics abstract = The theory's main theorem states that the cardinality of set partitions of size k on a carrier set of size n is expressed by Stirling numbers of the second kind. In Isabelle, Stirling numbers of the second kind are defined in the AFP entry `Discrete Summation` through their well-known recurrence relation. The main theorem relates them to the alternative definition as cardinality of set partitions. The proof follows the simple and short explanation in Richard P. Stanley's `Enumerative Combinatorics: Volume 1` and Wikipedia, and unravels the full details and implicit reasoning steps of these explanations. notify = lukas.bulwahn@gmail.com [Card_Number_Partitions] title = Cardinality of Number Partitions author = Lukas Bulwahn date = 2016-01-14 topic = Mathematics/Combinatorics abstract = This entry provides a basic library for number partitions, defines the two-argument partition function through its recurrence relation and relates this partition function to the cardinality of number partitions. The main proof shows that the recursively-defined partition function with arguments n and k equals the cardinality of number partitions of n with exactly k parts. The combinatorial proof follows the proof sketch of Theorem 2.4.1 in Mazur's textbook `Combinatorics: A Guided Tour`. This entry can serve as starting point for various more intrinsic properties about number partitions, the partition function and related recurrence relations. notify = lukas.bulwahn@gmail.com [Multirelations] title = Binary Multirelations author = Hitoshi Furusawa , Georg Struth date = 2015-06-11 topic = Mathematics/Algebra abstract = Binary multirelations associate elements of a set with its subsets; hence they are binary relations from a set to its power set. Applications include alternating automata, models and logics for games, program semantics with dual demonic and angelic nondeterministic choices and concurrent dynamic logics. This proof document supports an arXiv article that formalises the basic algebra of multirelations and proposes axiom systems for them, ranging from weak bi-monoids to weak bi-quantales. notify = [Noninterference_Generic_Unwinding] title = The Generic Unwinding Theorem for CSP Noninterference Security author = Pasquale Noce date = 2015-06-11 topic = Computer science/Security, Computer science/Concurrency/Process calculi abstract =

The classical definition of noninterference security for a deterministic state machine with outputs requires to consider the outputs produced by machine actions after any trace, i.e. any indefinitely long sequence of actions, of the machine. In order to render the verification of the security of such a machine more straightforward, there is a need of some sufficient condition for security such that just individual actions, rather than unbounded sequences of actions, have to be considered.

By extending previous results applying to transitive noninterference policies, Rushby has proven an unwinding theorem that provides a sufficient condition of this kind in the general case of a possibly intransitive policy. This condition has to be satisfied by a generic function mapping security domains into equivalence relations over machine states.

An analogous problem arises for CSP noninterference security, whose definition requires to consider any possible future, i.e. any indefinitely long sequence of subsequent events and any indefinitely large set of refused events associated to that sequence, for each process trace.

This paper provides a sufficient condition for CSP noninterference security, which indeed requires to just consider individual accepted and refused events and applies to the general case of a possibly intransitive policy. This condition follows Rushby's one for classical noninterference security, and has to be satisfied by a generic function mapping security domains into equivalence relations over process traces; hence its name, Generic Unwinding Theorem. Variants of this theorem applying to deterministic processes and trace set processes are also proven. Finally, the sufficient condition for security expressed by the theorem is shown not to be a necessary condition as well, viz. there exists a secure process such that no domain-relation map satisfying the condition exists.

notify = [Noninterference_Ipurge_Unwinding] title = The Ipurge Unwinding Theorem for CSP Noninterference Security author = Pasquale Noce date = 2015-06-11 topic = Computer science/Security abstract =

The definition of noninterference security for Communicating Sequential Processes requires to consider any possible future, i.e. any indefinitely long sequence of subsequent events and any indefinitely large set of refused events associated to that sequence, for each process trace. In order to render the verification of the security of a process more straightforward, there is a need of some sufficient condition for security such that just individual accepted and refused events, rather than unbounded sequences and sets of events, have to be considered.

Of course, if such a sufficient condition were necessary as well, it would be even more valuable, since it would permit to prove not only that a process is secure by verifying that the condition holds, but also that a process is not secure by verifying that the condition fails to hold.

This paper provides a necessary and sufficient condition for CSP noninterference security, which indeed requires to just consider individual accepted and refused events and applies to the general case of a possibly intransitive policy. This condition follows Rushby's output consistency for deterministic state machines with outputs, and has to be satisfied by a specific function mapping security domains into equivalence relations over process traces. The definition of this function makes use of an intransitive purge function following Rushby's one; hence the name given to the condition, Ipurge Unwinding Theorem.

Furthermore, in accordance with Hoare's formal definition of deterministic processes, it is shown that a process is deterministic just in case it is a trace set process, i.e. it may be identified by means of a trace set alone, matching the set of its traces, in place of a failures-divergences pair. Then, variants of the Ipurge Unwinding Theorem are proven for deterministic processes and trace set processes.

notify = [Relational_Method] title = The Relational Method with Message Anonymity for the Verification of Cryptographic Protocols author = Pasquale Noce topic = Computer science/Security date = 2020-12-05 notify = pasquale.noce.lavoro@gmail.com abstract = This paper introduces a new method for the formal verification of cryptographic protocols, the relational method, derived from Paulson's inductive method by means of some enhancements aimed at streamlining formal definitions and proofs, specially for protocols using public key cryptography. Moreover, this paper proposes a method to formalize a further security property, message anonymity, in addition to message confidentiality and authenticity. The relational method, including message anonymity, is then applied to the verification of a sample authentication protocol, comprising Password Authenticated Connection Establishment (PACE) with Chip Authentication Mapping followed by the explicit verification of an additional password over the PACE secure channel. [List_Interleaving] title = Reasoning about Lists via List Interleaving author = Pasquale Noce date = 2015-06-11 topic = Computer science/Data structures abstract =

Among the various mathematical tools introduced in his outstanding work on Communicating Sequential Processes, Hoare has defined "interleaves" as the predicate satisfied by any three lists such that the first list may be split into sublists alternately extracted from the other two, whatever criterion is used, at each step, for extracting an item from one list or the other.

This paper enriches Hoare's definition by identifying this criterion with the truth value of a predicate taking the head and the tail of the first list as inputs. This enhanced "interleaves" predicate turns out to permit proofs of equalities between lists without the need for induction. Some rules that allow "interleaves" statements to be inferred without induction, particularly ones applying to the addition or removal of a prefix of the input lists, are also proven. Finally, a stronger version of the predicate, named "Interleaves", is shown to fulfil further rules applying to the addition or removal of a suffix of the input lists.
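The following is a hedged Haskell sketch of this idea, not the Isabelle definitions of the entry; the function name interleaves and the way the predicate p is consulted are chosen only to mirror the informal description above.

  -- p, applied to the head and tail of the first list, decides whether the
  -- next item has to be matched against the second list (True) or against
  -- the third list (False).
  interleaves :: Eq a => (a -> [a] -> Bool) -> [a] -> [a] -> [a] -> Bool
  interleaves _ []     ys zs = null ys && null zs
  interleaves p (x:xs) ys zs
    | p x xs    = case ys of
                    y:ys' | y == x -> interleaves p xs ys' zs
                    _              -> False
    | otherwise = case zs of
                    z:zs' | z == x -> interleaves p xs ys zs'
                    _              -> False

For instance, interleaves (\x _ -> x < 3) [1,5,2,6] [1,2] [5,6] evaluates to True.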

notify = [Residuated_Lattices] title = Residuated Lattices author = Victor B. F. Gomes , Georg Struth date = 2015-04-15 topic = Mathematics/Algebra abstract = The theory of residuated lattices, first proposed by Ward and Dilworth, is formalised in Isabelle/HOL. This includes concepts of residuated functions; their adjoints and conjugates. It also contains necessary and sufficient conditions for the existence of these operations in an arbitrary lattice. The mathematical components for residuated lattices are linked to the AFP entry for relation algebra. In particular, we prove Jonsson and Tsinakis conditions for a residuated boolean algebra to form a relation algebra. notify = g.struth@sheffield.ac.uk [ConcurrentGC] title = Relaxing Safely: Verified On-the-Fly Garbage Collection for x86-TSO author = Peter Gammie , Tony Hosking , Kai Engelhardt <> date = 2015-04-13 topic = Computer science/Algorithms/Concurrent abstract =

We use ConcurrentIMP to model Schism, a state-of-the-art real-time garbage collection scheme for weak memory, and show that it is safe on x86-TSO.

This development accompanies the PLDI 2015 paper of the same name.

notify = peteg42@gmail.com [List_Update] title = Analysis of List Update Algorithms author = Maximilian P.L. Haslbeck , Tobias Nipkow date = 2016-02-17 topic = Computer science/Algorithms/Online abstract =

These theories formalize the quantitative analysis of a number of classical algorithms for the list update problem: 2-competitiveness of move-to-front, the lower bound of 2 for the competitiveness of deterministic list update algorithms and 1.6-competitiveness of the randomized COMB algorithm, the best randomized list update algorithm known to date. The material is based on the first two chapters of Online Computation and Competitive Analysis by Borodin and El-Yaniv.

For an informal description see the FSTTCS 2016 publication Verified Analysis of List Update Algorithms by Haslbeck and Nipkow.
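As a quick illustration of the cost model behind these results, here is a hedged Haskell sketch of the move-to-front rule, assuming the usual convention that accessing the element at position i costs i; the names mtfStep and mtfCost are ad hoc and not part of the entry.

  import Data.List (delete, elemIndex)

  -- Serving a request x costs the (1-based) position of x in the current
  -- list; afterwards x is moved to the front.
  mtfStep :: Eq a => [a] -> a -> (Int, [a])
  mtfStep xs x = case elemIndex x xs of
    Just i  -> (i + 1, x : delete x xs)
    Nothing -> error "request not in the list"

  -- Total cost of serving a request sequence, starting from a given list.
  mtfCost :: Eq a => [a] -> [a] -> Int
  mtfCost initial = fst . foldl step (0, initial)
    where step (c, xs) x = let (k, xs') = mtfStep xs x in (c + k, xs')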

notify = nipkow@in.tum.de [ConcurrentIMP] title = Concurrent IMP author = Peter Gammie date = 2015-04-13 topic = Computer science/Programming languages/Logics abstract = ConcurrentIMP extends the small imperative language IMP with control non-determinism and constructs for synchronous message passing. notify = peteg42@gmail.com [TortoiseHare] title = The Tortoise and Hare Algorithm author = Peter Gammie date = 2015-11-18 topic = Computer science/Algorithms abstract = We formalize the Tortoise and Hare cycle-finding algorithm ascribed to Floyd by Knuth, and an improved version due to Brent. notify = peteg42@gmail.com [UPF] title = The Unified Policy Framework (UPF) author = Achim D. Brucker , Lukas Brügger , Burkhart Wolff date = 2014-11-28 topic = Computer science/Security abstract = We present the Unified Policy Framework (UPF), a generic framework for modelling security (access-control) policies. UPF emphasizes the view that a policy is a policy decision function that grants or denies access to resources, permissions, etc. In other words, instead of modelling the relations of permitted or prohibited requests directly, we model the concrete function that implements the policy decision point in a system. In more detail, UPF is based on the following four principles: 1) Functional representation of policies, 2) No conflicts are possible, 3) Three-valued decision type (allow, deny, undefined), 4) Output type not containing the decision only. notify = adbrucker@0x5f.org, wolff@lri.fr, lukas.a.bruegger@gmail.com [UPF_Firewall] title = Formal Network Models and Their Application to Firewall Policies author = Achim D. Brucker , Lukas Brügger<>, Burkhart Wolff topic = Computer science/Security, Computer science/Networks date = 2017-01-08 notify = adbrucker@0x5f.org abstract = We present a formal model of network protocols and their application to modeling firewall policies. The formalization is based on the Unified Policy Framework (UPF). The formalization was originally developed with for generating test cases for testing the security configuration actual firewall and router (middle-boxes) using HOL-TestGen. Our work focuses on modeling application level protocols on top of tcp/ip. [AODV] title = Loop freedom of the (untimed) AODV routing protocol author = Timothy Bourke , Peter Höfner date = 2014-10-23 topic = Computer science/Concurrency/Process calculi abstract =

The Ad hoc On-demand Distance Vector (AODV) routing protocol allows the nodes in a Mobile Ad hoc Network (MANET) or a Wireless Mesh Network (WMN) to know where to forward data packets. Such a protocol is ‘loop free’ if it never leads to routing decisions that forward packets in circles.

This development mechanises an existing pen-and-paper proof of loop freedom of AODV. The protocol is modelled in the Algebra of Wireless Networks (AWN), which is the subject of an earlier paper and AFP mechanization. The proof relies on a novel compositional approach for lifting invariants to networks of nodes.

We exploit the mechanization to analyse several variants of AODV and show that Isabelle/HOL can re-establish most proof obligations automatically and identify exactly the steps that are no longer valid.

notify = tim@tbrk.org [Show] title = Haskell's Show Class in Isabelle/HOL author = Christian Sternagel , René Thiemann date = 2014-07-29 topic = Computer science/Functional programming license = LGPL abstract = We implemented a type class for "to-string" functions, similar to Haskell's Show class. Moreover, we provide instantiations for Isabelle/HOL's standard types like bool, prod, sum, nats, ints, and rats. It is further possible to automatically derive show functions for arbitrary user-defined datatypes, similar to Haskell's "deriving Show". extra-history = Change history: [2015-03-11]: Adapted development to new-style (BNF-based) datatypes.
[2015-04-10]: Moved development for old-style datatypes into subdirectory "Old_Datatype".
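For comparison, the Haskell feature that the entry mirrors can be sketched as follows; this is plain Haskell, not the Isabelle type class, and the datatype is purely illustrative.

  data Tree a = Leaf | Node (Tree a) a (Tree a)
    deriving Show                               -- the compiler derives the to-string function

  example :: String
  example = show (Node Leaf (1 :: Int) Leaf)    -- yields "Node Leaf 1 Leaf"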
notify = christian.sternagel@uibk.ac.at, rene.thiemann@uibk.ac.at [Certification_Monads] title = Certification Monads author = Christian Sternagel , René Thiemann date = 2014-10-03 topic = Computer science/Functional programming abstract = This entry provides several monads intended for the development of stand-alone certifiers via code generation from Isabelle/HOL. More specifically, there are three flavors of error monads (the sum type, for the case where all monadic functions are total; an instance of the former, the so called check monad, yielding either success without any further information or an error message; as well as a variant of the sum type that accommodates partial functions by providing an explicit bottom element) and a parser monad built on top. All of this monads are heavily used in the IsaFoR/CeTA project which thus provides many examples of their usage. notify = c.sternagel@gmail.com, rene.thiemann@uibk.ac.at [CISC-Kernel] title = Formal Specification of a Generic Separation Kernel author = Freek Verbeek , Sergey Tverdyshev , Oto Havle , Holger Blasum , Bruno Langenstein , Werner Stephan , Yakoub Nemouchi , Abderrahmane Feliachi , Burkhart Wolff , Julien Schmaltz date = 2014-07-18 topic = Computer science/Security abstract =

Intransitive noninterference has been a widely studied topic in the last few decades. Several well-established methodologies apply interactive theorem proving to formulate a noninterference theorem over abstract academic models. In joint work with several industrial and academic partners throughout Europe, we are helping in the certification process of PikeOS, an industrial separation kernel developed at SYSGO. In this process, established theories could not be applied. We present a new generic model of separation kernels and a new theory of intransitive noninterference. The model is rich in detail, making it suitable for formal verification of realistic and industrial systems such as PikeOS. Using a refinement-based theorem proving approach, we ensure that proofs remain manageable.

This document corresponds to the deliverable D31.1 of the EURO-MILS Project http://www.euromils.eu.

notify = [pGCL] title = pGCL for Isabelle author = David Cock date = 2014-07-13 topic = Computer science/Programming languages/Language definitions abstract =

pGCL is both a programming language and a specification language that incorporates probabilistic and nondeterministic choice in a unified manner. Program verification is by refinement or annotation (or both), using either Hoare triples or weakest-precondition entailment, in the style of GCL.

This package provides both a shallow embedding of the language primitives, and an annotation and refinement framework. The generated document includes a brief tutorial.
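The flavour of the weakest pre-expectation calculus can be conveyed by the standard pGCL rules for the two choice operators, as presented by McIver and Morgan; the notation below is generic and need not match the entry's concrete syntax.

  \[
  \begin{aligned}
  wp(P \mathbin{{}_{p}\oplus} Q).R &= p \cdot wp(P).R + (1-p) \cdot wp(Q).R && \text{(probabilistic choice)}\\
  wp(P \sqcap Q).R &= \min\{\, wp(P).R,\ wp(Q).R \,\} && \text{(demonic choice)}
  \end{aligned}
  \]

Here R is an expectation, i.e. a function from states to non-negative reals, generalising a Boolean postcondition.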

notify = [Noninterference_CSP] title = Noninterference Security in Communicating Sequential Processes author = Pasquale Noce date = 2014-05-23 topic = Computer science/Security abstract =

An extension of classical noninterference security for deterministic state machines, as introduced by Goguen and Meseguer and elegantly formalized by Rushby, to nondeterministic systems should satisfy two fundamental requirements: it should be based on a mathematically precise theory of nondeterminism, and should be equivalent to (or at least not weaker than) the classical notion in the degenerate deterministic case.

This paper proposes a definition of noninterference security applying to Hoare's Communicating Sequential Processes (CSP) in the general case of a possibly intransitive noninterference policy, and proves the equivalence of this security property to classical noninterference security for processes representing deterministic state machines.

Furthermore, McCullough's generalized noninterference security is shown to be weaker than both the proposed notion of CSP noninterference security for a generic process, and classical noninterference security for processes representing deterministic state machines. This renders CSP noninterference security preferable as an extension of classical noninterference security to nondeterministic systems.

notify = pasquale.noce.lavoro@gmail.com [Floyd_Warshall] title = The Floyd-Warshall Algorithm for Shortest Paths author = Simon Wimmer , Peter Lammich topic = Computer science/Algorithms/Graph date = 2017-05-08 notify = wimmers@in.tum.de abstract = The Floyd-Warshall algorithm [Flo62, Roy59, War62] is a classic dynamic programming algorithm to compute the length of all shortest paths between any two vertices in a graph (i.e. to solve the all-pairs shortest path problem, or APSP for short). Given a representation of the graph as a matrix of weights M, it computes another matrix M' which represents a graph with the same path lengths and contains the length of the shortest path between any two vertices i and j. This is only possible if the graph does not contain any negative cycles. However, in this case the Floyd-Warshall algorithm will detect the situation by calculating a negative diagonal entry. This entry includes a formalization of the algorithm and of these key properties. The algorithm is refined to an efficient imperative version using the Imperative Refinement Framework. [Roy_Floyd_Warshall] title = Transitive closure according to Roy-Floyd-Warshall author = Makarius Wenzel <> date = 2014-05-23 topic = Computer science/Algorithms/Graph abstract = This formulation of the Roy-Floyd-Warshall algorithm for the transitive closure bypasses matrices and arrays, but uses a more direct mathematical model with adjacency functions for immediate predecessors and successors. This can be implemented efficiently in functional programming languages and is particularly adequate for sparse relations. notify = [GPU_Kernel_PL] title = Syntax and semantics of a GPU kernel programming language author = John Wickerson date = 2014-04-03 topic = Computer science/Programming languages/Language definitions abstract = This document accompanies the article "The Design and Implementation of a Verification Technique for GPU Kernels" by Adam Betts, Nathan Chong, Alastair F. Donaldson, Jeroen Ketema, Shaz Qadeer, Paul Thomson and John Wickerson. It formalises all of the definitions provided in Sections 3 and 4 of the article. notify = [AWN] title = Mechanization of the Algebra for Wireless Networks (AWN) author = Timothy Bourke date = 2014-03-08 topic = Computer science/Concurrency/Process calculi abstract =

AWN is a process algebra developed for modelling and analysing protocols for Mobile Ad hoc Networks (MANETs) and Wireless Mesh Networks (WMNs). AWN models comprise five distinct layers: sequential processes, local parallel compositions, nodes, partial networks, and complete networks.

This development mechanises the original operational semantics of AWN and introduces a variant 'open' operational semantics that enables the compositional statement and proof of invariants across distinct network nodes. It supports labels (for weakening invariants) and (abstract) data state manipulations. A framework for compositional invariant proofs is developed, including a tactic (inv_cterms) for inductive invariant proofs of sequential processes, lifting rules for the open versions of the higher layers, and a rule for transferring lifted properties back to the standard semantics. A notion of 'control terms' reduces proof obligations to the subset of subterms that act directly (in contrast to operators for combining terms and joining processes).

notify = tim@tbrk.org [Selection_Heap_Sort] title = Verification of Selection and Heap Sort Using Locales author = Danijela Petrovic date = 2014-02-11 topic = Computer science/Algorithms abstract = Stepwise program refinement techniques can be used to simplify program verification. Programs are better understood since their main properties are clearly stated, and verification of rather complex algorithms is reduced to proving simple statements connecting successive program specifications. Additionally, it is easy to analyze similar algorithms and to compare their properties within a single formalization. Usually, formal analysis is not done in educational setting due to complexity of verification and a lack of tools and procedures to make comparison easy. Verification of an algorithm should not only give correctness proof, but also better understanding of an algorithm. If the verification is based on small step program refinement, it can become simple enough to be demonstrated within the university-level computer science curriculum. In this paper we demonstrate this and give a formal analysis of two well known algorithms (Selection Sort and Heap Sort) using proof assistant Isabelle/HOL and program refinement techniques. notify = [Real_Impl] title = Implementing field extensions of the form Q[sqrt(b)] author = René Thiemann date = 2014-02-06 license = LGPL topic = Mathematics/Analysis abstract = We apply data refinement to implement the real numbers, where we support all numbers in the field extension Q[sqrt(b)], i.e., all numbers of the form p + q * sqrt(b) for rational numbers p and q and some fixed natural number b. To this end, we also developed algorithms to precisely compute roots of a rational number, and to perform a factorization of natural numbers which eliminates duplicate prime factors.
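The representation being refined to is straightforward to picture; here is a hedged Haskell sketch of the carrier and its ring operations (illustration only, with b fixed arbitrarily; the entry's actual implementation, including the root and factorization algorithms, lives in Isabelle/HOL).

  -- Numbers of the form p + q * sqrt b with rational p, q and a fixed
  -- natural number b (chosen as 2 here purely for illustration).
  b :: Integer
  b = 2

  data QSqrt = QSqrt Rational Rational          -- represents p + q * sqrt b
    deriving (Eq, Show)

  add, mul :: QSqrt -> QSqrt -> QSqrt
  add (QSqrt p q) (QSqrt p' q') = QSqrt (p + p') (q + q')
  mul (QSqrt p q) (QSqrt p' q') =
    QSqrt (p * p' + q * q' * fromInteger b) (p * q' + q * p')

Multiplication uses (p + q*sqrt b)(p' + q'*sqrt b) = (pp' + qq'b) + (pq' + qp')*sqrt b.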

Our results have been used to certify termination proofs which involve polynomial interpretations over the reals. extra-history = Change history: [2014-07-11]: Moved NthRoot_Impl to Sqrt-Babylonian. notify = rene.thiemann@uibk.ac.at [ShortestPath] title = An Axiomatic Characterization of the Single-Source Shortest Path Problem author = Christine Rizkallah date = 2013-05-22 topic = Mathematics/Graph theory abstract = This theory is split into two sections. In the first section, we give a formal proof that a well-known axiomatic characterization of the single-source shortest path problem is correct. Namely, we prove that in a directed graph with a non-negative cost function on the edges the single-source shortest path function is the only function that satisfies a set of four axioms. In the second section, we give a formal proof of the correctness of an axiomatic characterization of the single-source shortest path problem for directed graphs with general cost functions. The axioms here are more involved because we have to account for potential negative cycles in the graph. The axioms are summarized in three Isabelle locales. notify = [Launchbury] title = The Correctness of Launchbury's Natural Semantics for Lazy Evaluation author = Joachim Breitner date = 2013-01-31 topic = Computer science/Programming languages/Lambda calculi, Computer science/Semantics abstract = In his seminal paper "Natural Semantics for Lazy Evaluation", John Launchbury proves his semantics correct with respect to a denotational semantics, and outlines an adequacy proof. We have formalized both semantics and machine-checked the correctness proof, clarifying some details. Furthermore, we provide a new and more direct adequacy proof that does not require intermediate operational semantics. extra-history = Change history: [2014-05-24]: Added the proof of adequacy, as well as simplified and improved the existing proofs. Adjusted abstract accordingly. [2015-03-16]: Booleans and if-then-else added to syntax and semantics, making this entry suitable to be used by the entry "Call_Arity". notify = [Call_Arity] title = The Safety of Call Arity author = Joachim Breitner date = 2015-02-20 topic = Computer science/Programming languages/Transformations abstract = We formalize the Call Arity analysis, as implemented in GHC, and prove both functional correctness and, more interestingly, safety (i.e. the transformation does not increase allocation).

We use syntax and the denotational semantics from the entry "Launchbury", where we formalized Launchbury's natural semantics for lazy evaluation.

The functional correctness of Call Arity is proved with regard to that denotational semantics. The operational properties are shown with regard to a small-step semantics akin to Sestoft's mark 1 machine, which we prove to be equivalent to Launchbury's semantics.

We use Christian Urban's Nominal2 package to define our terms and make use of Brian Huffman's HOLCF package for the domain-theoretical aspects of the development. extra-history = Change history: [2015-03-16]: This entry now builds on top of the Launchbury entry, and the equivalency proof of the natural and the small-step semantics was added. notify = [CCS] title = CCS in nominal logic author = Jesper Bengtson date = 2012-05-29 topic = Computer science/Concurrency/Process calculi abstract = We formalise a large portion of CCS as described in Milner's book 'Communication and Concurrency' using the nominal datatype package in Isabelle. Our results include many of the standard theorems of bisimulation equivalence and congruence, for both weak and strong versions. One main goal of this formalisation is to keep the machine-checked proofs as close to their pen-and-paper counterpart as possible.

This entry is described in detail in Bengtson's thesis. notify = [Pi_Calculus] title = The pi-calculus in nominal logic author = Jesper Bengtson date = 2012-05-29 topic = Computer science/Concurrency/Process calculi abstract = We formalise the pi-calculus using the nominal datatype package, based on ideas from the nominal logic by Pitts et al., and demonstrate an implementation in Isabelle/HOL. The purpose is to derive powerful induction rules for the semantics in order to conduct machine-checkable proofs, closely following the intuitive arguments found in manual proofs. In this way we have covered many of the standard theorems of bisimulation equivalence and congruence, both late and early, and both strong and weak in a uniform manner. We thus provide one of the most extensive formalisations of the pi-calculus ever done inside a theorem prover.

A significant gain in our formulation is that agents are identified up to alpha-equivalence, thereby greatly reducing the arguments about bound names. This is a normal strategy for manual proofs about the pi-calculus, but that kind of hand waving has previously been difficult to incorporate smoothly in an interactive theorem prover. We show how the nominal logic formalism and its support in Isabelle accomplishes this and thus significantly reduces the tedium of conducting completely formal proofs. This improves on previous work using weak higher order abstract syntax since we do not need extra assumptions to filter out exotic terms and can keep all arguments within a familiar first-order logic.

This entry is described in detail in Bengtson's thesis. notify = [Psi_Calculi] title = Psi-calculi in Isabelle author = Jesper Bengtson date = 2012-05-29 topic = Computer science/Concurrency/Process calculi abstract = Psi-calculi are extensions of the pi-calculus, accommodating arbitrary nominal datatypes to represent not only data but also communication channels, assertions and conditions, giving it an expressive power beyond the applied pi-calculus and the concurrent constraint pi-calculus.

We have formalised psi-calculi in the interactive theorem prover Isabelle using its nominal datatype package. One distinctive feature is that the framework needs to treat binding sequences, as opposed to single binders, in an efficient way. While different methods for formalising single binder calculi have been proposed over the last decades, representations for such binding sequences are not very well explored.

The main effort in the formalisation is to keep the machine-checked proofs as close to their pen-and-paper counterparts as possible. This includes treating all binding sequences as atomic elements, and creating custom induction and inversion rules that remove the bulk of manual alpha-conversions.

This entry is described in detail in Bengtson's thesis. notify = [Encodability_Process_Calculi] title = Analysing and Comparing Encodability Criteria for Process Calculi author = Kirstin Peters , Rob van Glabbeek date = 2015-08-10 topic = Computer science/Concurrency/Process calculi abstract = Encodings or the proof of their absence are the main way to compare process calculi. To analyse the quality of encodings and to rule out trivial or meaningless encodings, they are augmented with quality criteria. There exists a bunch of different criteria and different variants of criteria in order to reason in different settings. This leads to incomparable results. Moreover it is not always clear whether the criteria used to obtain a result in a particular setting do indeed fit to this setting. We show how to formally reason about and compare encodability criteria by mapping them on requirements on a relation between source and target terms that is induced by the encoding function. In particular we analyse the common criteria full abstraction, operational correspondence, divergence reflection, success sensitiveness, and respect of barbs; e.g. we analyse the exact nature of the simulation relation (coupled simulation versus bisimulation) that is induced by different variants of operational correspondence. This way we reduce the problem of analysing or comparing encodability criteria to the better understood problem of comparing relations on processes. notify = kirstin.peters@tu-berlin.de [Circus] title = Isabelle/Circus author = Abderrahmane Feliachi , Burkhart Wolff , Marie-Claude Gaudel contributors = Makarius Wenzel date = 2012-05-27 topic = Computer science/Concurrency/Process calculi, Computer science/System description languages abstract = The Circus specification language combines elements for complex data and behavior specifications, using an integration of Z and CSP with a refinement calculus. Its semantics is based on Hoare and He's Unifying Theories of Programming (UTP). Isabelle/Circus is a formalization of the UTP and the Circus language in Isabelle/HOL. It contains proof rules and tactic support that allows for proofs of refinement for Circus processes (involving both data and behavioral aspects).

The Isabelle/Circus environment supports a syntax for the semantic definitions which is close to textbook presentations of Circus. This article contains an extended version of corresponding VSTTE Paper together with the complete formal development of its underlying commented theories. extra-history = Change history: [2014-06-05]: More polishing, shorter proofs, added Circus syntax, added Makarius Wenzel as contributor. notify = [Dijkstra_Shortest_Path] title = Dijkstra's Shortest Path Algorithm author = Benedikt Nordhoff , Peter Lammich topic = Computer science/Algorithms/Graph date = 2012-01-30 abstract = We implement and prove correct Dijkstra's algorithm for the single source shortest path problem, conceived in 1956 by E. Dijkstra. The algorithm is implemented using the data refinement framework for monadic, nondeterministic programs. An efficient implementation is derived using data structures from the Isabelle Collection Framework. notify = lammich@in.tum.de [Refine_Monadic] title = Refinement for Monadic Programs author = Peter Lammich topic = Computer science/Programming languages/Logics date = 2012-01-30 abstract = We provide a framework for program and data refinement in Isabelle/HOL. The framework is based on a nondeterminism-monad with assertions, i.e., the monad carries a set of results or an assertion failure. Recursion is expressed by fixed points. For convenience, we also provide while and foreach combinators.
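The shape of such a monad can be approximated in a few lines of Haskell (a hedged sketch only: lists stand in for the result sets of the Isabelle formalization, and the names Nres, bindN and assert are illustrative).

  -- A result is either an assertion failure or a collection of possible values.
  data Nres a = Fail | Res [a]
    deriving Show

  -- bind applies f to every possible value and fails if any branch fails.
  bindN :: Nres a -> (a -> Nres b) -> Nres b
  bindN Fail     _ = Fail
  bindN (Res xs) f = foldr merge (Res []) (map f xs)
    where merge Fail    _        = Fail
          merge _       Fail     = Fail
          merge (Res a) (Res b)  = Res (a ++ b)

  assert :: Bool -> Nres ()
  assert True  = Res [()]
  assert False = Fail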

The framework provides tools to automate canonical tasks such as verification condition generation, finding appropriate data refinement relations, and refining a program to an executable form that is accepted by the Isabelle/HOL code generator.

This submission comes with a collection of examples and a user-guide, illustrating the usage of the framework. extra-history = Change history: [2012-04-23] Introduced ordered FOREACH loops
[2012-06] New features: REC_rule_arb and RECT_rule_arb allow for generalizing over variables. prepare_code_thms - command extracts code equations for recursion combinators.
[2012-07] New example: Nested DFS for emptiness check of Buchi-automata with witness.
New feature: fo_rule method to apply resolution using first-order matching. Useful for arg_cong, fun_cong.
[2012-08] Adaptation to ICF v2.
[2012-10-05] Adaptations to include support for Automatic Refinement Framework.
[2013-09] This entry now depends on Automatic Refinement
[2014-06] New feature: vc_solve method to solve verification conditions. Maintenance changes: VCG-rules for nfoldli, improved setup for FOREACH-loops.
[2014-07] Now defining recursion via flat domain. Dropped many single-valued prerequisites. Changed notion of data refinement. In single-valued case, this matches the old notion. In non-single valued case, the new notion allows for more convenient rules. In particular, the new definitions allow for projecting away ghost variables as a refinement step.
[2014-11] New features: le-or-fail relation (leof), modular reasoning about loop invariants. notify = lammich@in.tum.de [Refine_Imperative_HOL] title = The Imperative Refinement Framework author = Peter Lammich notify = lammich@in.tum.de date = 2016-08-08 topic = Computer science/Programming languages/Transformations,Computer science/Data structures abstract = We present the Imperative Refinement Framework (IRF), a tool that supports a stepwise refinement based approach to imperative programs. This entry is based on the material we presented in [ITP-2015, CPP-2016]. It uses the Monadic Refinement Framework as a frontend for the specification of the abstract programs, and Imperative/HOL as a backend to generate executable imperative programs. The IRF comes with tool support to synthesize imperative programs from more abstract, functional ones, using efficient imperative implementations for the abstract data structures. This entry also includes the Imperative Isabelle Collection Framework (IICF), which provides a library of re-usable imperative collection data structures. Moreover, this entry contains a quickstart guide and a reference manual, which provide an introduction to using the IRF for Isabelle/HOL experts. It also provids a collection of (partly commented) practical examples, some highlights being Dijkstra's Algorithm, Nested-DFS, and a generic worklist algorithm with subsumption. Finally, this entry contains benchmark scripts that compare the runtime of some examples against reference implementations of the algorithms in Java and C++. [ITP-2015] Peter Lammich: Refinement to Imperative/HOL. ITP 2015: 253--269 [CPP-2016] Peter Lammich: Refinement based verification of imperative data structures. CPP 2016: 27--36 [Automatic_Refinement] title = Automatic Data Refinement author = Peter Lammich topic = Computer science/Programming languages/Logics date = 2013-10-02 abstract = We present the Autoref tool for Isabelle/HOL, which automatically refines algorithms specified over abstract concepts like maps and sets to algorithms over concrete implementations like red-black-trees, and produces a refinement theorem. It is based on ideas borrowed from relational parametricity due to Reynolds and Wadler. The tool allows for rapid prototyping of verified, executable algorithms. Moreover, it can be configured to fine-tune the result to the user~s needs. Our tool is able to automatically instantiate generic algorithms, which greatly simplifies the implementation of executable data structures.

This AFP-entry provides the basic tool, which is then used by the Refinement and Collection Framework to provide automatic data refinement for the nondeterminism monad and various collection datastructures. notify = lammich@in.tum.de [EdmondsKarp_Maxflow] title = Formalizing the Edmonds-Karp Algorithm author = Peter Lammich , S. Reza Sefidgar<> notify = lammich@in.tum.de date = 2016-08-12 topic = Computer science/Algorithms/Graph abstract = We present a formalization of the Ford-Fulkerson method for computing the maximum flow in a network. Our formal proof closely follows a standard textbook proof, and is accessible even without being an expert in Isabelle/HOL--- the interactive theorem prover used for the formalization. We then use stepwise refinement to obtain the Edmonds-Karp algorithm, and formally prove a bound on its complexity. Further refinement yields a verified implementation, whose execution time compares well to an unverified reference implementation in Java. This entry is based on our ITP-2016 paper with the same title. [VerifyThis2018] title = VerifyThis 2018 - Polished Isabelle Solutions author = Peter Lammich , Simon Wimmer topic = Computer science/Algorithms date = 2018-04-27 notify = lammich@in.tum.de abstract = VerifyThis 2018 was a program verification competition associated with ETAPS 2018. It was the 7th event in the VerifyThis competition series. In this entry, we present polished and completed versions of our solutions that we created during the competition. [PseudoHoops] title = Pseudo Hoops author = George Georgescu <>, Laurentiu Leustean <>, Viorel Preoteasa topic = Mathematics/Algebra date = 2011-09-22 abstract = Pseudo-hoops are algebraic structures introduced by B. Bosbach under the name of complementary semigroups. In this formalization we prove some properties of pseudo-hoops and we define the basic concepts of filter and normal filter. The lattice of normal filters is isomorphic with the lattice of congruences of a pseudo-hoop. We also study some important classes of pseudo-hoops. Bounded Wajsberg pseudo-hoops are equivalent to pseudo-Wajsberg algebras and bounded basic pseudo-hoops are equivalent to pseudo-BL algebras. Some examples of pseudo-hoops are given in the last section of the formalization. notify = viorel.preoteasa@aalto.fi [MonoBoolTranAlgebra] title = Algebra of Monotonic Boolean Transformers author = Viorel Preoteasa topic = Computer science/Programming languages/Logics date = 2011-09-22 abstract = Algebras of imperative programming languages have been successful in reasoning about programs. In general an algebra of programs is an algebraic structure with programs as elements and with program compositions (sequential composition, choice, skip) as algebra operations. Various versions of these algebras were introduced to model partial correctness, total correctness, refinement, demonic choice, and other aspects. We formalize here an algebra which can be used to model total correctness, refinement, demonic and angelic choice. The basic model of this algebra are monotonic Boolean transformers (monotonic functions from a Boolean algebra to itself). notify = viorel.preoteasa@aalto.fi [LatticeProperties] title = Lattice Properties author = Viorel Preoteasa topic = Mathematics/Order date = 2011-09-22 abstract = This formalization introduces and collects some algebraic structures based on lattices and complete lattices for use in other developments. The structures introduced are modular, and lattice ordered groups. 
In addition to the results proved for the new lattices, this formalization also introduces theorems about latices and complete lattices in general. extra-history = Change history: [2012-01-05]: Removed the theory about distributive complete lattices which is in the standard library now. Added a theory about well founded and transitive relations and a result about fixpoints in complete lattices and well founded relations. Moved the results about conjunctive and disjunctive functions to a new theory. Removed the syntactic classes for inf and sup which are in the standard library now. notify = viorel.preoteasa@aalto.fi [Impossible_Geometry] title = Proving the Impossibility of Trisecting an Angle and Doubling the Cube author = Ralph Romanos , Lawrence C. Paulson topic = Mathematics/Algebra, Mathematics/Geometry date = 2012-08-05 abstract = Squaring the circle, doubling the cube and trisecting an angle, using a compass and straightedge alone, are classic unsolved problems first posed by the ancient Greeks. All three problems were proved to be impossible in the 19th century. The following document presents the proof of the impossibility of solving the latter two problems using Isabelle/HOL, following a proof by Carrega. The proof uses elementary methods: no Galois theory or field extensions. The set of points constructible using a compass and straightedge is defined inductively. Radical expressions, which involve only square roots and arithmetic of rational numbers, are defined, and we find that all constructive points have radical coordinates. Finally, doubling the cube and trisecting certain angles requires solving certain cubic equations that can be proved to have no rational roots. The Isabelle proofs require a great many detailed calculations. notify = ralph.romanos@student.ecp.fr, lp15@cam.ac.uk [IP_Addresses] title = IP Addresses author = Cornelius Diekmann , Julius Michaelis , Lars Hupel notify = diekmann@net.in.tum.de date = 2016-06-28 topic = Computer science/Networks abstract = This entry contains a definition of IP addresses and a library to work with them. Generic IP addresses are modeled as machine words of arbitrary length. Derived from this generic definition, IPv4 addresses are 32bit machine words, IPv6 addresses are 128bit words. Additionally, IPv4 addresses can be represented in dot-decimal notation and IPv6 addresses in (compressed) colon-separated notation. We support toString functions and parsers for both notations. Sets of IP addresses can be represented with a netmask (e.g. 192.168.0.0/255.255.0.0) or in CIDR notation (e.g. 192.168.0.0/16). To provide executable code for set operations on IP address ranges, the library includes a datatype to work on arbitrary intervals of machine words. [Simple_Firewall] title = Simple Firewall author = Cornelius Diekmann , Julius Michaelis , Maximilian Haslbeck notify = diekmann@net.in.tum.de, max.haslbeck@gmx.de date = 2016-08-24 topic = Computer science/Networks abstract = We present a simple model of a firewall. The firewall can accept or drop a packet and can match on interfaces, IP addresses, protocol, and ports. It was designed to feature nice mathematical properties: The type of match expressions was carefully crafted such that the conjunction of two match expressions is only one match expression. This model is too simplistic to mirror all aspects of the real world. In the upcoming entry "Iptables Semantics", we will translate the Linux firewall iptables to this model. For a fixed service (e.g. 
ssh, http), we provide an algorithm to compute an overview of the firewall's filtering behavior. The algorithm computes minimal service matrices, i.e. graphs which partition the complete IPv4 and IPv6 address space and visualize the allowed accesses between partitions. For a detailed description, see Verified iptables Firewall Analysis, IFIP Networking 2016. [Iptables_Semantics] title = Iptables Semantics author = Cornelius Diekmann , Lars Hupel notify = diekmann@net.in.tum.de, hupel@in.tum.de date = 2016-09-09 topic = Computer science/Networks abstract = We present a big step semantics of the filtering behavior of the Linux/netfilter iptables firewall. We provide algorithms to simplify complex iptables rulests to a simple firewall model (c.f. AFP entry Simple_Firewall) and to verify spoofing protection of a ruleset. Internally, we embed our semantics into ternary logic, ultimately supporting every iptables match condition by abstracting over unknowns. Using this AFP entry and all entries it depends on, we created an easy-to-use, stand-alone haskell tool called fffuu. The tool does not require any input —except for the iptables-save dump of the analyzed firewall— and presents interesting results about the user's ruleset. Real-Word firewall errors have been uncovered, and the correctness of rulesets has been proved, with the help of our tool. [Routing] title = Routing author = Julius Michaelis , Cornelius Diekmann notify = afp@liftm.de date = 2016-08-31 topic = Computer science/Networks abstract = This entry contains definitions for routing with routing tables/longest prefix matching. A routing table entry is modelled as a record of a prefix match, a metric, an output port, and an optional next hop. A routing table is a list of entries, sorted by prefix length and metric. Additionally, a parser and serializer for the output of the ip-route command, a function to create a relation from output port to corresponding destination IP space, and a model of a Linux-style router are included. [KBPs] title = Knowledge-based programs author = Peter Gammie topic = Computer science/Automata and formal languages date = 2011-05-17 abstract = Knowledge-based programs (KBPs) are a formalism for directly relating agents' knowledge and behaviour. Here we present a general scheme for compiling KBPs to executable automata with a proof of correctness in Isabelle/HOL. We develop the algorithm top-down, using Isabelle's locale mechanism to structure these proofs, and show that two classic examples can be synthesised using Isabelle's code generator. extra-history = Change history: [2012-03-06]: Add some more views and revive the code generation. notify = kleing@cse.unsw.edu.au [Tarskis_Geometry] title = The independence of Tarski's Euclidean axiom author = T. J. M. Makarios topic = Mathematics/Geometry date = 2012-10-30 abstract = Tarski's axioms of plane geometry are formalized and, using the standard real Cartesian model, shown to be consistent. A substantial theory of the projective plane is developed. Building on this theory, the Klein-Beltrami model of the hyperbolic plane is defined and shown to satisfy all of Tarski's axioms except his Euclidean axiom; thus Tarski's Euclidean axiom is shown to be independent of his other axioms of plane geometry.

An earlier version of this work was the subject of the author's MSc thesis, which contains natural-language explanations of some of the more interesting proofs. notify = tjm1983@gmail.com [IsaGeoCoq] title = Tarski's Parallel Postulate implies the 5th Postulate of Euclid, the Postulate of Playfair and the original Parallel Postulate of Euclid author = Roland Coghetto topic = Mathematics/Geometry license = LGPL date = 2021-01-31 notify = roland_coghetto@hotmail.com abstract =

The GeoCoq library contains a formalization of geometry using the Coq proof assistant. It contains both proofs about the foundations of geometry and high-level proofs in the same style as in high school. We port a part of the GeoCoq 2.4.0 library to Isabelle/HOL: more precisely, the files Chap02.v to Chap13_3.v and suma.v, as well as the associated definitions and some files useful for the proofs of certain parallel postulates. The synthetic approach of the proofs is directly inspired by the one used in GeoCoq. The names of the lemmas and theorems, as well as the definitions, are kept wherever possible.

It should be noted that T.J.M. Makarios has carried out some proofs in Tarski's geometry; his development uses a definition that does not quite coincide with the one used in GeoCoq and here. Furthermore, the corresponding definitions in the Poincaré Disc Model development are not identical to those defined in GeoCoq.

In the last part, it is formalized that, in neutral/absolute space, the parallel postulate of Tarski's system implies the Playfair axiom, the 5th postulate of Euclid, and Euclid's original parallel postulate. These proofs, which are not constructive, are directly inspired by the work of Pierre Boutry, Charly Gries, Julien Narboux and Pascal Schreck.

[General-Triangle] title = The General Triangle Is Unique author = Joachim Breitner topic = Mathematics/Geometry date = 2011-04-01 abstract = Some acute-angled triangles are special, e.g. right-angled or isoscele triangles. Some are not of this kind, but, without measuring angles, look as if they were. In that sense, there is exactly one general triangle. This well-known fact is proven here formally. notify = mail@joachim-breitner.de [LightweightJava] title = Lightweight Java author = Rok Strniša , Matthew Parkinson topic = Computer science/Programming languages/Language definitions date = 2011-02-07 abstract = A fully-formalized and extensible minimal imperative fragment of Java. notify = rok@strnisa.com [Lower_Semicontinuous] title = Lower Semicontinuous Functions author = Bogdan Grechuk topic = Mathematics/Analysis date = 2011-01-08 abstract = We define the notions of lower and upper semicontinuity for functions from a metric space to the extended real line. We prove that a function is both lower and upper semicontinuous if and only if it is continuous. We also give several equivalent characterizations of lower semicontinuity. In particular, we prove that a function is lower semicontinuous if and only if its epigraph is a closed set. Also, we introduce the notion of the lower semicontinuous hull of an arbitrary function and prove its basic properties. notify = hoelzl@in.tum.de [RIPEMD-160-SPARK] title = RIPEMD-160 author = Fabian Immler topic = Computer science/Programming languages/Static analysis date = 2011-01-10 abstract = This work presents a verification of an implementation in SPARK/ADA of the cryptographic hash-function RIPEMD-160. A functional specification of RIPEMD-160 is given in Isabelle/HOL. Proofs for the verification conditions generated by the static-analysis toolset of SPARK certify the functional correctness of the implementation. extra-history = Change history: [2015-11-09]: Entry is now obsolete, moved to Isabelle distribution. notify = immler@in.tum.de [Regular-Sets] title = Regular Sets and Expressions author = Alexander Krauss , Tobias Nipkow contributors = Manuel Eberl topic = Computer science/Automata and formal languages date = 2010-05-12 abstract = This is a library of constructions on regular expressions and languages. It provides the operations of concatenation, Kleene star and derivative on languages. Regular expressions and their meaning are defined. An executable equivalence checker for regular expressions is verified; it does not need automata but works directly on regular expressions. By mapping regular expressions to binary relations, an automatic and complete proof method for (in)equalities of binary relations over union, concatenation and (reflexive) transitive closure is obtained.
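To give a concrete picture of the derivative-based approach, here is a hedged Haskell sketch of the classical nullability test and Brzozowski derivative for regular expressions; it illustrates the general technique and need not coincide with the AFP theories in detail.

  data Rexp a = Zero | One | Atom a
              | Plus (Rexp a) (Rexp a) | Times (Rexp a) (Rexp a) | Star (Rexp a)

  -- Does the language of the expression contain the empty word?
  nullable :: Rexp a -> Bool
  nullable Zero        = False
  nullable One         = True
  nullable (Atom _)    = False
  nullable (Plus r s)  = nullable r || nullable s
  nullable (Times r s) = nullable r && nullable s
  nullable (Star _)    = True

  -- Brzozowski derivative: the language of (deriv c r) consists of all words w
  -- such that c:w is in the language of r.
  deriv :: Eq a => a -> Rexp a -> Rexp a
  deriv _ Zero        = Zero
  deriv _ One         = Zero
  deriv c (Atom a)    = if a == c then One else Zero
  deriv c (Plus r s)  = Plus (deriv c r) (deriv c s)
  deriv c (Times r s) = let d = Times (deriv c r) s
                        in if nullable r then Plus d (deriv c s) else d
  deriv c (Star r)    = Times (deriv c r) (Star r)

  -- A word is matched iff the iterated derivative is nullable.
  matches :: Eq a => Rexp a -> [a] -> Bool
  matches r = nullable . foldl (flip deriv) r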

Extended regular expressions with complement and intersection are also defined and an equivalence checker is provided. extra-history = Change history: [2011-08-26]: Christian Urban added a theory about derivatives and partial derivatives of regular expressions
[2012-05-10]: Tobias Nipkow added extended regular expressions
[2012-05-10]: Tobias Nipkow added equivalence checking with partial derivatives notify = nipkow@in.tum.de, krauss@in.tum.de, christian.urban@kcl.ac.uk [Regex_Equivalence] title = Unified Decision Procedures for Regular Expression Equivalence author = Tobias Nipkow , Dmitriy Traytel topic = Computer science/Automata and formal languages date = 2014-01-30 abstract = We formalize a unified framework for verified decision procedures for regular expression equivalence. Five recently published formalizations of such decision procedures (three based on derivatives, two on marked regular expressions) can be obtained as instances of the framework. We discover that the two approaches based on marked regular expressions, which were previously thought to be the same, are different, and one seems to produce uniformly smaller automata. The common framework makes it possible to compare the performance of the different decision procedures in a meaningful way. The formalization is described in a paper of the same name presented at Interactive Theorem Proving 2014. notify = nipkow@in.tum.de, traytel@in.tum.de [MSO_Regex_Equivalence] title = Decision Procedures for MSO on Words Based on Derivatives of Regular Expressions author = Dmitriy Traytel , Tobias Nipkow topic = Computer science/Automata and formal languages, Logic/General logic/Decidability of theories date = 2014-06-12 abstract = Monadic second-order logic on finite words (MSO) is a decidable yet expressive logic into which many decision problems can be encoded. Since MSO formulas correspond to regular languages, equivalence of MSO formulas can be reduced to the equivalence of some regular structures (e.g. automata). We verify an executable decision procedure for MSO formulas that is not based on automata but on regular expressions.

Decision procedures for regular expression equivalence have been formalized before, usually based on Brzozowski derivatives. Yet, for a straightforward embedding of MSO formulas into regular expressions an extension of regular expressions with a projection operation is required. We prove total correctness and completeness of an equivalence checker for regular expressions extended in that way. We also define a language-preserving translation of formulas into regular expressions with respect to two different semantics of MSO.

The formalization is described in this ICFP 2013 functional pearl. notify = traytel@in.tum.de, nipkow@in.tum.de [Formula_Derivatives] title = Derivatives of Logical Formulas author = Dmitriy Traytel topic = Computer science/Automata and formal languages, Logic/General logic/Decidability of theories date = 2015-05-28 abstract = We formalize new decision procedures for WS1S, M2L(Str), and Presburger Arithmetics. Formulas of these logics denote regular languages. Unlike traditional decision procedures, we do not translate formulas into automata (nor into regular expressions), at least not explicitly. Instead we devise notions of derivatives (inspired by Brzozowski derivatives for regular expressions) that operate on formulas directly and compute a syntactic bisimulation using these derivatives. The treatment of Boolean connectives and quantifiers is uniform for all mentioned logics and is abstracted into a locale. This locale is then instantiated by different atomic formulas and their derivatives (which may differ even for the same logic under different encodings of interpretations as formal words).

The WS1S instance is described in the draft paper A Coalgebraic Decision Procedure for WS1S by the author. notify = traytel@in.tum.de [Myhill-Nerode] title = The Myhill-Nerode Theorem Based on Regular Expressions author = Chunhan Wu <>, Xingyuan Zhang <>, Christian Urban contributors = Manuel Eberl topic = Computer science/Automata and formal languages date = 2011-08-26 abstract = There are many proofs of the Myhill-Nerode theorem using automata. In this library we give a proof entirely based on regular expressions, since regularity of languages can be conveniently defined using regular expressions (it is more painful in HOL to define regularity in terms of automata). We prove the first direction of the Myhill-Nerode theorem by solving equational systems that involve regular expressions. For the second direction we give two proofs: one using tagging-functions and another using partial derivatives. We also establish various closure properties of regular languages. Most details of the theories are described in our ITP 2011 paper. notify = christian.urban@kcl.ac.uk [Universal_Turing_Machine] title = Universal Turing Machine author = Jian Xu<>, Xingyuan Zhang<>, Christian Urban , Sebastiaan J. C. Joosten topic = Logic/Computability, Computer science/Automata and formal languages date = 2019-02-08 notify = sjcjoosten@gmail.com, christian.urban@kcl.ac.uk abstract = We formalise results from computability theory: recursive functions, undecidability of the halting problem, and the existence of a universal Turing machine. This formalisation is the AFP entry corresponding to the paper Mechanising Turing Machines and Computability Theory in Isabelle/HOL, ITP 2013. [CYK] title = A formalisation of the Cocke-Younger-Kasami algorithm author = Maksym Bortin date = 2016-04-27 topic = Computer science/Algorithms, Computer science/Automata and formal languages abstract = The theory provides a formalisation of the Cocke-Younger-Kasami algorithm (CYK for short), an approach to solving the word problem for context-free languages. CYK decides if a word is in the languages generated by a context-free grammar in Chomsky normal form. The formalized algorithm is executable. notify = maksym.bortin@nicta.com.au [Boolean_Expression_Checkers] title = Boolean Expression Checkers author = Tobias Nipkow date = 2014-06-08 topic = Computer science/Algorithms, Logic/General logic/Mechanization of proofs abstract = This entry provides executable checkers for the following properties of boolean expressions: satisfiability, tautology and equivalence. Internally, the checkers operate on binary decision trees and are reasonably efficient (for purely functional algorithms). extra-history = Change history: [2015-09-23]: Salomon Sickert added an interface that does not require the usage of the Boolean formula datatype. Furthermore the general Mapping type is used instead of an association list. notify = nipkow@in.tum.de [Presburger-Automata] title = Formalizing the Logic-Automaton Connection author = Stefan Berghofer , Markus Reiter <> date = 2009-12-03 topic = Computer science/Automata and formal languages, Logic/General logic/Decidability of theories abstract = This work presents a formalization of a library for automata on bit strings. It forms the basis of a reflection-based decision procedure for Presburger arithmetic, which is efficiently executable thanks to Isabelle's code generator. With this work, we therefore provide a mechanized proof of a well-known connection between logic and automata theory. 
The formalization is also described in a publication [TPHOLs 2009]. notify = berghofe@in.tum.de [Functional-Automata] title = Functional Automata author = Tobias Nipkow date = 2004-03-30 topic = Computer science/Automata and formal languages abstract = This theory defines deterministic and nondeterministic automata in a functional representation: the transition function/relation and the finality predicate are just functions. Hence the state space may be infinite. It is shown how to convert regular expressions into such automata. A scanner (generator) is implemented with the help of functional automata: the scanner chops the input up into longest recognized substrings. Finally we also show how to convert a certain subclass of functional automata (essentially the finite deterministic ones) into regular sets. notify = nipkow@in.tum.de [Statecharts] title = Formalizing Statecharts using Hierarchical Automata author = Steffen Helke , Florian Kammüller topic = Computer science/Automata and formal languages date = 2010-08-08 abstract = We formalize in Isabelle/HOL the abtract syntax and a synchronous step semantics for the specification language Statecharts. The formalization is based on Hierarchical Automata which allow a structural decomposition of Statecharts into Sequential Automata. To support the composition of Statecharts, we introduce calculating operators to construct a Hierarchical Automaton in a stepwise manner. Furthermore, we present a complete semantics of Statecharts including a theory of data spaces, which enables the modelling of racing effects. We also adapt CTL for Statecharts to build a bridge for future combinations with model checking. However the main motivation of this work is to provide a sound and complete basis for reasoning on Statecharts. As a central meta theorem we prove that the well-formedness of a Statechart is preserved by the semantics. notify = nipkow@in.tum.de [Stuttering_Equivalence] title = Stuttering Equivalence author = Stephan Merz topic = Computer science/Automata and formal languages date = 2012-05-07 abstract =

Two omega-sequences are stuttering equivalent if they differ only by finite repetitions of elements. Stuttering equivalence is a fundamental concept in the theory of concurrent and distributed systems. Notably, Lamport argues that refinement notions for such systems should be insensitive to finite stuttering. Peled and Wilke showed that all PLTL (propositional linear-time temporal logic) properties that are insensitive to stuttering equivalence can be expressed without the next-time operator. Stuttering equivalence is also important for certain verification techniques such as partial-order reduction for model checking.

We formalize stuttering equivalence in Isabelle/HOL. Our development relies on the notion of stuttering sampling functions that may skip blocks of identical sequence elements. We also encode PLTL and prove the theorem due to Peled and Wilke.
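For finite words the underlying idea can be illustrated by a hedged Haskell sketch (the entry itself works with omega-sequences and stuttering sampling functions, not with this simplified finite version).

  import Data.List (group)

  -- Collapse every maximal block of equal adjacent elements to a single element.
  destutter :: Eq a => [a] -> [a]
  destutter = map head . group

  -- Two finite words are stuttering equivalent, in this simplified sense,
  -- iff their collapsed forms coincide.
  stutterEq :: Eq a => [a] -> [a] -> Bool
  stutterEq xs ys = destutter xs == destutter ys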

extra-history = Change history: [2013-01-31]: Added encoding of PLTL and proved Peled and Wilke's theorem. Adjusted abstract accordingly. notify = Stephan.Merz@loria.fr [Coinductive_Languages] title = A Codatatype of Formal Languages author = Dmitriy Traytel topic = Computer science/Automata and formal languages date = 2013-11-15 abstract =

We define formal languages as a codatatype of infinite trees branching over the alphabet. Each node in such a tree indicates whether the path to this node constitutes a word inside or outside of the language. This codatatype is isomorphic to the set-of-lists representation of languages, but caters for definitions by corecursion and proofs by coinduction.

Regular operations on languages are then defined by primitive corecursion. A difficulty arises here, since the standard definitions of concatenation and iteration from the coalgebraic literature are not primitively corecursive: they require guardedness up-to union/concatenation. Without support for up-to corecursion, these operations must be defined as a composition of primitive ones (and proved equal to the standard definitions). As an exercise in coinduction we also prove the axioms of Kleene algebra for the defined regular operations.

Furthermore, a language for context-free grammars given by productions in Greibach normal form and an initial nonterminal is constructed by primitive corecursion, yielding an executable decision procedure for the word problem without further ado.
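
(Illustrative sketch only, in Haskell rather than Isabelle/HOL and with invented names: a language can be rendered as a codata record branching over the alphabet, word membership follows the unique path determined by the word, and union is definable by primitive corecursion.)

  -- A language as an infinite tree: each node records whether the word
  -- leading to it belongs to the language.
  data Lang a = Lang { out :: Bool, delta :: a -> Lang a }

  -- Word membership follows the path spelled out by the word.
  inLang :: Lang a -> [a] -> Bool
  inLang l []     = out l
  inLang l (x:xs) = inLang (delta l x) xs

  -- Union, defined node-wise by corecursion.
  union :: Lang a -> Lang a -> Lang a
  union k l = Lang (out k || out l) (\x -> union (delta k x) (delta l x))

  -- Example: words over Bool containing an even number of True entries.
  evens, odds :: Lang Bool
  evens = Lang True  (\x -> if x then odds  else evens)
  odds  = Lang False (\x -> if x then evens else odds)

  main :: IO ()
  main = print (inLang evens [True, False, True])   -- True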

notify = traytel@in.tum.de [Tree-Automata] title = Tree Automata author = Peter Lammich date = 2009-11-25 topic = Computer science/Automata and formal languages abstract = This work presents a machine-checked tree automata library for Standard-ML, OCaml and Haskell. The algorithms are efficient by using appropriate data structures like RB-trees. The available algorithms for non-deterministic automata include membership query, reduction, intersection, union, and emptiness check with computation of a witness for non-emptiness. The executable algorithms are derived from less-concrete, non-executable algorithms using data-refinement techniques. The concrete data structures are from the Isabelle Collections Framework. Moreover, this work contains a formalization of the class of tree-regular languages and its closure properties under set operations. notify = peter.lammich@uni-muenster.de, nipkow@in.tum.de [Depth-First-Search] title = Depth First Search author = Toshiaki Nishihara <>, Yasuhiko Minamide <> date = 2004-06-24 topic = Computer science/Algorithms/Graph abstract = Depth-first search of a graph is formalized with recdef. It is shown that it visits all nodes reachable from a given list of nodes. Executable ML code of depth-first search is obtained using the code generation feature of Isabelle/HOL. notify = lp15@cam.ac.uk, krauss@in.tum.de [FFT] title = Fast Fourier Transform author = Clemens Ballarin date = 2005-10-12 topic = Computer science/Algorithms/Mathematical abstract = We formalise a functional implementation of the FFT algorithm over the complex numbers, and its inverse. Both are shown equivalent to the usual definitions of these operations through Vandermonde matrices. They are also shown to be inverse to each other, more precisely, that composition of the inverse and the transformation yields the identity up to a scalar. notify = ballarin@in.tum.de [Gauss-Jordan-Elim-Fun] title = Gauss-Jordan Elimination for Matrices Represented as Functions author = Tobias Nipkow date = 2011-08-19 topic = Computer science/Algorithms/Mathematical, Mathematics/Algebra abstract = This theory provides a compact formulation of Gauss-Jordan elimination for matrices represented as functions. Its distinctive feature is succinctness. It is not meant for large computations. notify = nipkow@in.tum.de [UpDown_Scheme] title = Verification of the UpDown Scheme author = Johannes Hölzl date = 2015-01-28 topic = Computer science/Algorithms/Mathematical abstract = The UpDown scheme is a recursive scheme used to compute the stiffness matrix on a special form of sparse grids. Usually, when discretizing a Euclidean space of dimension d we need O(n^d) points, for n points along each dimension. Sparse grids are a hierarchical representation where the number of points is reduced to O(n * log(n)^d). One disadvantage of such sparse grids is that the algorithm now operates recursively in the dimensions and levels of the sparse grid.

The UpDown scheme allows us to compute the stiffness matrix on such a sparse grid. The stiffness matrix represents the influence of each representation function on the L^2 scalar product. For a detailed description see Dirk Pflüger's PhD thesis. This formalization was developed as an interdisciplinary project (IDP) at the Technische Universität München. notify = hoelzl@in.tum.de [GraphMarkingIBP] title = Verification of the Deutsch-Schorr-Waite Graph Marking Algorithm using Data Refinement author = Viorel Preoteasa , Ralph-Johan Back date = 2010-05-28 topic = Computer science/Algorithms/Graph abstract = The verification of the Deutsch-Schorr-Waite graph marking algorithm is used as a benchmark in many formalizations of pointer programs. The main purpose of this mechanization is to show how data refinement of invariant based programs can be used in verifying practical algorithms. The verification starts with an abstract algorithm working on a graph given by a relation next on nodes. Gradually the abstract program is refined into Deutsch-Schorr-Waite graph marking algorithm where only one bit per graph node of additional memory is used for marking. extra-history = Change history: [2012-01-05]: Updated for the new definition of data refinement and the new syntax for demonic and angelic update statements notify = viorel.preoteasa@aalto.fi [Efficient-Mergesort] title = Efficient Mergesort topic = Computer science/Algorithms date = 2011-11-09 author = Christian Sternagel abstract = We provide a formalization of the mergesort algorithm as used in GHC's Data.List module, proving correctness and stability. Furthermore, experimental data suggests that generated (Haskell-)code for this algorithm is much faster than for previous algorithms available in the Isabelle distribution. extra-history = Change history: [2012-10-24]: Added reference to journal article.
[2018-09-17]: Added theory Efficient_Mergesort that works exclusively with the mutual induction schemas generated by the function package.
[2018-09-19]: Added theory Mergesort_Complexity that proves an upper bound on the number of comparisons that are required by mergesort.
[2018-09-19]: Theory Efficient_Mergesort replaces theory Efficient_Sort while keeping the old name Efficient_Sort. [2020-11-20]: Additional theory Natural_Mergesort that develops an efficient mergesort algorithm without key-functions for educational purposes. notify = c.sternagel@gmail.com [SATSolverVerification] title = Formal Verification of Modern SAT Solvers author = Filip Marić date = 2008-07-23 topic = Computer science/Algorithms abstract = This document contains formal correctness proofs of modern SAT solvers. Following (Krstic et al., 2007) and (Nieuwenhuis et al., 2006), solvers are described using state-transition systems. Several different SAT solver descriptions are given and their partial correctness and termination are proved. These include:

  • a solver based on classical DPLL procedure (using only a backtrack-search with unit propagation),
  • a very general solver with backjumping and learning (similar to the description given in (Nieuwenhuis et al., 2006)), and
  • a solver with a specific conflict analysis algorithm (similar to the description given in (Krstic et al., 2007)).
Within the SAT solver correctness proofs, a large number of lemmas about propositional logic and CNF formulae are proved. This theory is self-contained and could be used for further exploration of properties of CNF-based SAT algorithms. notify = [Transitive-Closure] title = Executable Transitive Closures of Finite Relations topic = Computer science/Algorithms/Graph date = 2011-03-14 author = Christian Sternagel , René Thiemann license = LGPL abstract = We provide a generic work-list algorithm to compute the transitive closure of finite relations where only successors of newly detected states are generated. This algorithm is then instantiated for lists over arbitrary carriers and red-black trees (which are faster but require a linear order on the carrier), respectively. Our formalization was performed as part of the IsaFoR/CeTA project where reflexive transitive closures of large tree automata have to be computed. extra-history = Change history: [2014-09-04] added example simprocs in Finite_Transitive_Closure_Simprocs notify = c.sternagel@gmail.com, rene.thiemann@uibk.ac.at [Transitive-Closure-II] title = Executable Transitive Closures topic = Computer science/Algorithms/Graph date = 2012-02-29 author = René Thiemann license = LGPL abstract =

We provide a generic work-list algorithm to compute the (reflexive-)transitive closure of relations where only successors of newly detected states are generated. In contrast to our previous work, the relations do not have to be finite, but each element must only have finitely many (indirect) successors. Moreover, a subsumption relation can be used instead of pure equality. An executable variant of the algorithm is available where the generic operations are instantiated with list operations.
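
(As an informal illustration only, in Haskell with invented names, not the formalized algorithm: a work-list computation over a successor function adds only the successors of newly detected states; the sketch computes the set of elements reachable from a start list, assuming every element has finitely many (indirect) successors.)

  import qualified Data.Set as Set

  -- Work-list reachability: only successors of newly detected states are explored.
  reachable :: Ord a => (a -> [a]) -> [a] -> Set.Set a
  reachable succs start = go (Set.fromList start) start
    where
      go seen []       = seen
      go seen (x:todo) =
        let new = filter (`Set.notMember` seen) (succs x)
        in go (foldr Set.insert seen new) (new ++ todo)

  main :: IO ()
  main = print (reachable (\n -> [n `div` 2 | n > 0]) [20 :: Int])
  -- fromList [0,1,2,5,10,20]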

This formalization was performed as part of the IsaFoR/CeTA project, and it has been used to certify size-change termination proofs where large transitive closures have to be computed.

notify = rene.thiemann@uibk.ac.at [MuchAdoAboutTwo] title = Much Ado About Two author = Sascha Böhme date = 2007-11-06 topic = Computer science/Algorithms abstract = This article is an Isabelle formalisation of a paper with the same title. In a similar way as Knuth's 0-1-principle for sorting algorithms, that paper develops a 0-1-2-principle for parallel prefix computations. notify = boehmes@in.tum.de [DiskPaxos] title = Proving the Correctness of Disk Paxos date = 2005-06-22 author = Mauro Jaskelioff , Stephan Merz topic = Computer science/Algorithms/Distributed abstract = Disk Paxos is an algorithm for building arbitrary fault-tolerant distributed systems. The specification of Disk Paxos has been proved correct informally and tested using the TLC model checker, but up to now, it has never been fully formally verified. In this work we have formally verified its correctness using the Isabelle theorem prover and the HOL logic system, showing that Isabelle is a practical tool for verifying properties of TLA+ specifications. notify = kleing@cse.unsw.edu.au [GenClock] title = Formalization of a Generalized Protocol for Clock Synchronization author = Alwen Tiu date = 2005-06-24 topic = Computer science/Algorithms/Distributed abstract = We formalize the generalized Byzantine fault-tolerant clock synchronization protocol of Schneider. This protocol abstracts from particular algorithms or implementations for clock synchronization. This abstraction includes several assumptions on the behaviors of physical clocks and on general properties of concrete algorithms/implementations. Based on these assumptions the correctness of the protocol is proved by Schneider. His proof was later verified by Shankar using the theorem prover EHDM (precursor to PVS). Our formalization in Isabelle/HOL is based on Shankar's formalization. notify = kleing@cse.unsw.edu.au [ClockSynchInst] title = Instances of Schneider's generalized protocol of clock synchronization author = Damián Barsotti date = 2006-03-15 topic = Computer science/Algorithms/Distributed abstract = F. B. Schneider ("Understanding protocols for Byzantine clock synchronization") generalizes a number of protocols for Byzantine fault-tolerant clock synchronization and presents a uniform proof for their correctness. In Schneider's schema, each processor maintains a local clock by periodically adjusting each value to one computed by a convergence function applied to the readings of all the clocks. Then, correctness of an algorithm, i.e. that the readings of two clocks at any time are within a fixed bound of each other, is based upon some conditions on the convergence function. To prove that a particular clock synchronization algorithm is correct it suffices to show that the convergence function used by the algorithm meets Schneider's conditions. Using the theorem prover Isabelle, we formalize the proofs that the convergence functions of two algorithms, namely, the Interactive Convergence Algorithm (ICA) of Lamport and Melliar-Smith and the Fault-tolerant Midpoint algorithm of Lundelius-Lynch, meet Schneider's conditions. Furthermore, we experiment on handling some parts of the proofs with fully automatic tools like ICS and CVC-lite. These theories are part of a joint work with Alwen Tiu and Leonor P. Nieto "Verification of Clock Synchronization Algorithms: Experiments on a combination of deductive tools" in proceedings of AVOCS 2005. In this work the correctness of Schneider schema was also verified using Isabelle (entry GenClock in AFP). 
notify = kleing@cse.unsw.edu.au [Heard_Of] title = Verifying Fault-Tolerant Distributed Algorithms in the Heard-Of Model date = 2012-07-27 author = Henri Debrat , Stephan Merz topic = Computer science/Algorithms/Distributed abstract = Distributed computing is inherently based on replication, promising increased tolerance to failures of individual computing nodes or communication channels. Realizing this promise, however, involves quite subtle algorithmic mechanisms, and requires precise statements about the kinds and numbers of faults that an algorithm tolerates (such as process crashes, communication faults or corrupted values). The landmark theorem due to Fischer, Lynch, and Paterson shows that it is impossible to achieve Consensus among N asynchronously communicating nodes in the presence of even a single permanent failure. Existing solutions must rely on assumptions of "partial synchrony".

Indeed, there have been numerous misunderstandings on what exactly a given algorithm is supposed to realize in what kinds of environments. Moreover, the abundance of subtly different computational models complicates comparisons between different algorithms. Charron-Bost and Schiper introduced the Heard-Of model for representing algorithms and failure assumptions in a uniform framework, simplifying comparisons between algorithms.

In this contribution, we represent the Heard-Of model in Isabelle/HOL. We define two semantics of runs of algorithms with different unit of atomicity and relate these through a reduction theorem that allows us to verify algorithms in the coarse-grained semantics (where proofs are easier) and infer their correctness for the fine-grained one (which corresponds to actual executions). We instantiate the framework by verifying six Consensus algorithms that differ in the underlying algorithmic mechanisms and the kinds of faults they tolerate. notify = Stephan.Merz@loria.fr [Consensus_Refined] title = Consensus Refined date = 2015-03-18 author = Ognjen Maric <>, Christoph Sprenger topic = Computer science/Algorithms/Distributed abstract = Algorithms for solving the consensus problem are fundamental to distributed computing. Despite their brevity, their ability to operate in concurrent, asynchronous and failure-prone environments comes at the cost of complex and subtle behaviors. Accordingly, understanding how they work and proving their correctness is a non-trivial endeavor where abstraction is immensely helpful. Moreover, research on consensus has yielded a large number of algorithms, many of which appear to share common algorithmic ideas. A natural question is whether and how these similarities can be distilled and described in a precise, unified way. In this work, we combine stepwise refinement and lockstep models to provide an abstract and unified view of a sizeable family of consensus algorithms. Our models provide insights into the design choices underlying the different algorithms, and classify them based on those choices. notify = sprenger@inf.ethz.ch [Key_Agreement_Strong_Adversaries] title = Refining Authenticated Key Agreement with Strong Adversaries author = Joseph Lallemand , Christoph Sprenger topic = Computer science/Security license = LGPL date = 2017-01-31 notify = joseph.lallemand@loria.fr, sprenger@inf.ethz.ch abstract = We develop a family of key agreement protocols that are correct by construction. Our work substantially extends prior work on developing security protocols by refinement. First, we strengthen the adversary by allowing him to compromise different resources of protocol participants, such as their long-term keys or their session keys. This enables the systematic development of protocols that ensure strong properties such as perfect forward secrecy. Second, we broaden the class of protocols supported to include those with non-atomic keys and equationally defined cryptographic operators. We use these extensions to develop key agreement protocols including signed Diffie-Hellman and the core of IKEv1 and SKEME. [Security_Protocol_Refinement] title = Developing Security Protocols by Refinement author = Christoph Sprenger , Ivano Somaini<> topic = Computer science/Security license = LGPL date = 2017-05-24 notify = sprenger@inf.ethz.ch abstract = We propose a development method for security protocols based on stepwise refinement. Our refinement strategy transforms abstract security goals into protocols that are secure when operating over an insecure channel controlled by a Dolev-Yao-style intruder. As intermediate levels of abstraction, we employ messageless guard protocols and channel protocols communicating over channels with security properties. These abstractions provide insights on why protocols are secure and foster the development of families of protocols sharing common structure and properties. 
We have implemented our method in Isabelle/HOL and used it to develop different entity authentication and key establishment protocols, including realistic features such as key confirmation, replay caches, and encrypted tickets. Our development highlights that guard protocols and channel protocols provide fundamental abstractions for bridging the gap between security properties and standard protocol descriptions based on cryptographic messages. It also shows that our refinement approach scales to protocols of nontrivial size and complexity. [Abortable_Linearizable_Modules] title = Abortable Linearizable Modules author = Rachid Guerraoui , Viktor Kuncak , Giuliano Losa date = 2012-03-01 topic = Computer science/Algorithms/Distributed abstract = We define the Abortable Linearizable Module automaton (ALM for short) and prove its key composition property using the IOA theory of HOLCF. The ALM is at the heart of the Speculative Linearizability framework. This framework simplifies devising correct speculative algorithms by enabling their decomposition into independent modules that can be analyzed and proved correct in isolation. It is particularly useful when working in a distributed environment, where the need to tolerate faults and asynchrony has made current monolithic protocols so intricate that it is no longer tractable to check their correctness. Our theory contains a typical example of a refinement proof in the I/O-automata framework of Lynch and Tuttle. notify = giuliano@losa.fr, nipkow@in.tum.de [Amortized_Complexity] title = Amortized Complexity Verified author = Tobias Nipkow date = 2014-07-07 topic = Computer science/Data structures abstract = A framework for the analysis of the amortized complexity of functional data structures is formalized in Isabelle/HOL and applied to a number of standard examples and to the following non-trivial ones: skew heaps, splay trees, splay heaps and pairing heaps.

A preliminary version of this work (without pairing heaps) is described in a paper published in the proceedings of the conference on Interactive Theorem Proving ITP 2015. An extended version of this publication is available here. extra-history = Change history: [2015-03-17]: Added pairing heaps by Hauke Brinkop.
[2016-07-12]: Moved splay heaps from here to Splay_Tree
[2016-07-14]: Moved pairing heaps from here to the new Pairing_Heap notify = nipkow@in.tum.de [Dynamic_Tables] title = Parameterized Dynamic Tables author = Tobias Nipkow date = 2015-06-07 topic = Computer science/Data structures abstract = This article formalizes the amortized analysis of dynamic tables parameterized with their minimal and maximal load factors and the expansion and contraction factors.
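
(Illustrative sketch with made-up constants, not the verified development: in the simplest instance the table doubles its capacity, i.e. uses expansion factor 2, as soon as an insertion would exceed the capacity, which is what makes single insertions amortized constant time; the entry treats general minimal/maximal load factors and expansion/contraction factors.)

  -- A dynamic table abstracted to its size and capacity.
  data Table = Table { used :: Int, capacity :: Int } deriving Show

  insert :: Table -> Table
  insert (Table n c)
    | n + 1 > c = Table (n + 1) (max 1 (2 * c))   -- expand before overflowing
    | otherwise = Table (n + 1) c

  main :: IO ()
  main = print (foldl (\t _ -> insert t) (Table 0 0) [1 .. 10 :: Int])
  -- Table {used = 10, capacity = 16}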

A full description is found in a companion paper. notify = nipkow@in.tum.de [AVL-Trees] title = AVL Trees author = Tobias Nipkow , Cornelia Pusch <> date = 2004-03-19 topic = Computer science/Data structures abstract = Two formalizations of AVL trees with room for extensions. The first formalization is monolithic and shorter, the second one in two stages, longer and a bit simpler. The final implementation is the same. If you are interested in developing this further, please contact gerwin.klein@nicta.com.au. extra-history = Change history: [2011-04-11]: Ondrej Kuncar added delete function notify = kleing@cse.unsw.edu.au [BDD] title = BDD Normalisation author = Veronika Ortner <>, Norbert Schirmer <> date = 2008-02-29 topic = Computer science/Data structures abstract = We present the verification of the normalisation of a binary decision diagram (BDD). The normalisation follows the original algorithm presented by Bryant in 1986 and transforms an ordered BDD in a reduced, ordered and shared BDD. The verification is based on Hoare logics. notify = kleing@cse.unsw.edu.au, norbert.schirmer@web.de [BinarySearchTree] title = Binary Search Trees author = Viktor Kuncak date = 2004-04-05 topic = Computer science/Data structures abstract = The correctness is shown of binary search tree operations (lookup, insert and remove) implementing a set. Two versions are given, for both structured and linear (tactic-style) proofs. An implementation of integer-indexed maps is also verified. notify = lp15@cam.ac.uk [Splay_Tree] title = Splay Tree author = Tobias Nipkow notify = nipkow@in.tum.de date = 2014-08-12 topic = Computer science/Data structures abstract = Splay trees are self-adjusting binary search trees which were invented by Sleator and Tarjan [JACM 1985]. This entry provides executable and verified functional splay trees as well as the related splay heaps (due to Okasaki).

The amortized complexity of splay trees and heaps is analyzed in the AFP entry Amortized Complexity. extra-history = Change history: [2016-07-12]: Moved splay heaps here from Amortized_Complexity [Root_Balanced_Tree] title = Root-Balanced Tree author = Tobias Nipkow notify = nipkow@in.tum.de date = 2017-08-20 topic = Computer science/Data structures abstract =

Andersson introduced general balanced trees, search trees based on the design principle of partial rebuilding: perform update operations naively until the tree becomes too unbalanced, at which point a whole subtree is rebalanced. This article defines and analyzes a functional version of general balanced trees, which we call root-balanced trees. Using a lightweight model of execution time, amortized logarithmic complexity is verified in the theorem prover Isabelle.
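
(A miniature illustration of partial rebuilding, in Haskell with ad hoc names and constants, not the verified definition: insert naively into an unbalanced search tree and rebuild a perfectly balanced tree from the inorder traversal once the height exceeds a constant multiple of the logarithm of the size.)

  data Tree a = Leaf | Node (Tree a) a (Tree a)

  height, size :: Tree a -> Int
  height Leaf = 0
  height (Node l _ r) = 1 + max (height l) (height r)
  size Leaf = 0
  size (Node l _ r) = 1 + size l + size r

  inorder :: Tree a -> [a]
  inorder Leaf = []
  inorder (Node l x r) = inorder l ++ [x] ++ inorder r

  -- Rebuild a perfectly balanced tree from a sorted list.
  rebuild :: [a] -> Tree a
  rebuild [] = Leaf
  rebuild xs = Node (rebuild l) x (rebuild r)
    where (l, x : r) = splitAt (length xs `div` 2) xs

  insertNaive :: Ord a => a -> Tree a -> Tree a
  insertNaive x Leaf = Node Leaf x Leaf
  insertNaive x t@(Node l y r)
    | x < y     = Node (insertNaive x l) y r
    | x > y     = Node l y (insertNaive x r)
    | otherwise = t

  -- Insert naively, then rebuild if the tree has become too unbalanced.
  insert :: Ord a => a -> Tree a -> Tree a
  insert x t
    | height t' > 2 * ceiling (logBase 2 (fromIntegral (size t') + 1)) = rebuild (inorder t')
    | otherwise = t'
    where t' = insertNaive x t

  main :: IO ()
  main = print (height (foldr insert Leaf [1 .. 100 :: Int]))   -- stays logarithmic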

This is the Isabelle formalization of the material described in the APLAS 2017 article Verified Root-Balanced Trees by the same author, which also presents experimental results that show the competitiveness of root-balanced trees with AVL and red-black trees.

[Skew_Heap] title = Skew Heap author = Tobias Nipkow date = 2014-08-13 topic = Computer science/Data structures abstract = Skew heaps are an amazingly simple and lightweight implementation of priority queues. They were invented by Sleator and Tarjan [SIAM 1986] and have logarithmic amortized complexity. This entry provides executable and verified functional skew heaps.
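
(Illustrative sketch in Haskell, not the verified Isabelle definitions, all names ad hoc: the whole data structure is a binary tree whose meld operation unconditionally swaps children, and every other operation is a one-liner on top of meld.)

  data Heap a = Empty | Node (Heap a) a (Heap a)

  -- The heap with the smaller root keeps that root; the other heap is melded
  -- into its right child, and the two children are then swapped.
  meld :: Ord a => Heap a -> Heap a -> Heap a
  meld Empty h = h
  meld h Empty = h
  meld h1@(Node l1 x1 r1) h2@(Node l2 x2 r2)
    | x1 <= x2  = Node (meld h2 r1) x1 l1
    | otherwise = Node (meld h1 r2) x2 l2

  insert :: Ord a => a -> Heap a -> Heap a
  insert x = meld (Node Empty x Empty)

  findMin :: Heap a -> Maybe a
  findMin Empty        = Nothing
  findMin (Node _ x _) = Just x

  deleteMin :: Ord a => Heap a -> Heap a
  deleteMin Empty        = Empty
  deleteMin (Node l _ r) = meld l r

  main :: IO ()
  main = print (findMin (foldr insert Empty [5, 1, 4, 2 :: Int]))   -- Just 1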

The amortized complexity of skew heaps is analyzed in the AFP entry Amortized Complexity. notify = nipkow@in.tum.de [Pairing_Heap] title = Pairing Heap author = Hauke Brinkop , Tobias Nipkow date = 2016-07-14 topic = Computer science/Data structures abstract = This library defines three different versions of pairing heaps: a functional version of the original design based on binary trees [Fredman et al. 1986], the version by Okasaki [1998] and a modified version of the latter that is free of structural invariants.

The amortized complexity of pairing heaps is analyzed in the AFP article Amortized Complexity. extra-0 = Origin: This library was extracted from Amortized Complexity and extended. notify = nipkow@in.tum.de [Priority_Queue_Braun] title = Priority Queues Based on Braun Trees author = Tobias Nipkow date = 2014-09-04 topic = Computer science/Data structures abstract = This entry verifies priority queues based on Braun trees. Insertion and deletion take logarithmic time and preserve the balanced nature of Braun trees. Two implementations of deletion are provided. notify = nipkow@in.tum.de extra-history = Change history: [2019-12-16]: Added theory Priority_Queue_Braun2 with second version of del_min [Binomial-Queues] title = Functional Binomial Queues author = René Neumann date = 2010-10-28 topic = Computer science/Data structures abstract = Priority queues are an important data structure and efficient implementations of them are crucial. We implement a functional variant of binomial queues in Isabelle/HOL and show its functional correctness. A verification against an abstract reference specification of priority queues has also been attempted, but could not be achieved to the full extent. notify = florian.haftmann@informatik.tu-muenchen.de [Binomial-Heaps] title = Binomial Heaps and Skew Binomial Heaps author = Rene Meis , Finn Nielsen , Peter Lammich date = 2010-10-28 topic = Computer science/Data structures abstract = We implement and prove correct binomial heaps and skew binomial heaps. Both are data-structures for priority queues. While binomial heaps have logarithmic findMin, deleteMin, insert, and meld operations, skew binomial heaps have constant time findMin, insert, and meld operations, and only the deleteMin-operation is logarithmic. This is achieved by using skew links to avoid cascading linking on insert-operations, and data-structural bootstrapping to get constant-time findMin and meld operations. Our implementation follows the paper by Brodal and Okasaki. notify = peter.lammich@uni-muenster.de [Finger-Trees] title = Finger Trees author = Benedikt Nordhoff , Stefan Körner , Peter Lammich date = 2010-10-28 topic = Computer science/Data structures abstract = We implement and prove correct 2-3 finger trees. Finger trees are a general purpose data structure, that can be used to efficiently implement other data structures, such as priority queues. Intuitively, a finger tree is an annotated sequence, where the annotations are elements of a monoid. Apart from operations to access the ends of the sequence, the main operation is to split the sequence at the point where a monotone predicate over the sum of the left part of the sequence becomes true for the first time. The implementation follows the paper of Hinze and Paterson. The code generator can be used to get efficient, verified code. notify = peter.lammich@uni-muenster.de [Trie] title = Trie author = Andreas Lochbihler , Tobias Nipkow date = 2015-03-30 topic = Computer science/Data structures abstract = This article formalizes the ``trie'' data structure invented by Fredkin [CACM 1960]. It also provides a specialization where the entries in the trie are lists. extra-0 = Origin: This article was extracted from existing articles by the authors. notify = nipkow@in.tum.de [FinFun] title = Code Generation for Functions as Data author = Andreas Lochbihler date = 2009-05-06 topic = Computer science/Data structures abstract = FinFuns are total functions that are constant except for a finite set of points, i.e. a generalisation of finite maps. 
They are formalised as a new type in Isabelle/HOL such that the code generator can handle equality tests and quantification on FinFuns. On the code output level, FinFuns are explicitly represented by constant functions and pointwise updates, similarly to associative lists. Inside the logic, they behave like ordinary functions with extensionality. Via the update/constant pattern, a recursion combinator and an induction rule for FinFuns allow for defining and reasoning about operators on FinFun that are also executable. extra-history = Change history: [2010-08-13]: new concept domain of a FinFun as a FinFun (revision 34b3517cbc09)
[2010-11-04]: new conversion function from FinFun to list of elements in the domain (revision 0c167102e6ed)
[2012-03-07]: replace sets as FinFuns by predicates as FinFuns because the set type constructor has been reintroduced (revision b7aa87989f3a) notify = nipkow@in.tum.de [Collections] title = Collections Framework author = Peter Lammich contributors = Andreas Lochbihler , Thomas Tuerk <> date = 2009-11-25 topic = Computer science/Data structures abstract = This development provides an efficient, extensible, machine checked collections framework. The library adopts the concepts of interface, implementation and generic algorithm from object-oriented programming and implements them in Isabelle/HOL. The framework features the use of data refinement techniques to refine an abstract specification (using high-level concepts like sets) to a more concrete implementation (using collection datastructures, like red-black-trees). The code-generator of Isabelle/HOL can be used to generate efficient code. extra-history = Change history: [2010-10-08]: New Interfaces: OrderedSet, OrderedMap, List. Fifo now implements list-interface: Function names changed: put/get --> enqueue/dequeue. New Implementations: ArrayList, ArrayHashMap, ArrayHashSet, TrieMap, TrieSet. Invariant-free datastructures: Invariant implicitely hidden in typedef. Record-interfaces: All operations of an interface encapsulated as record. Examples moved to examples subdirectory.
[2010-10-08]: New Interfaces: OrderedSet, OrderedMap, List. Fifo now implements list-interface: Function names changed: put/get --> enqueue/dequeue. New Implementations: ArrayList, ArrayHashMap, ArrayHashSet, TrieMap, TrieSet. Invariant-free datastructures: Invariant implicitly hidden in typedef. Record-interfaces: All operations of an interface encapsulated as record. Examples moved to examples subdirectory.
[2011-10-10]: SetSpec: Added operations: sng, isSng, bexists, size_abort, diff, filter, iterate_rule_insertP MapSpec: Added operations: sng, isSng, iterate_rule_insertP, bexists, size, size_abort, restrict, map_image_filter, map_value_image_filter Some maintenance changes
[2012-04-25]: New iterator foundation by Tuerk. Various maintenance changes.
[2012-08]: Collections V2. New features: Polymorphic iterators. Generic algorithm instantiation where required. Naming scheme changed from xx_opname to xx.opname. A compatibility file CollectionsV1 tries to simplify porting of existing theories, by providing old naming scheme and the old monomorphic iterator locales.
[2013-09]: Added Generic Collection Framework based on Autoref. The GenCF provides: Arbitrary nesting, full integration with Autoref.
[2014-06]: Maintenance changes to GenCF: Optimized inj_image on list_set. op_set_cart (Cartesian product). big-Union operation. atLeastLessThan - operation ({a..<b})
notify = lammich@in.tum.de [Containers] title = Light-weight Containers author = Andreas Lochbihler contributors = René Thiemann date = 2013-04-15 topic = Computer science/Data structures abstract = This development provides a framework for container types like sets and maps such that generated code implements these containers with different (efficient) data structures. Thanks to type classes and refinement during code generation, this light-weight approach can seamlessly replace Isabelle's default setup for code generation. Heuristics automatically pick one of the available data structures depending on the type of elements to be stored, but users can also choose on their own. The extensible design permits adding more implementations at any time.

To support arbitrary nesting of sets, we define a linear order on sets based on a linear order of the elements and provide efficient implementations. It even allows comparing complements with non-complements. extra-history = Change history: [2013-07-11]: add pretty printing for sets (revision 7f3f52c5f5fa)
[2013-09-20]: provide generators for canonical type class instantiations (revision 159f4401f4a8 by René Thiemann)
[2014-07-08]: add support for going from partial functions to mappings (revision 7a6fc957e8ed)
[2018-03-05]: add two application examples: depth-first search and 2SAT (revision e5e1a1da2411) notify = mail@andreas-lochbihler.de [FileRefinement] title = File Refinement author = Karen Zee , Viktor Kuncak date = 2004-12-09 topic = Computer science/Data structures abstract = These theories illustrate the verification of basic file operations (file creation, file read and file write) in the Isabelle theorem prover. We describe a file at two levels of abstraction: an abstract file represented as a resizable array, and a concrete file represented using data blocks. notify = kkz@mit.edu [Datatype_Order_Generator] title = Generating linear orders for datatypes author = René Thiemann date = 2012-08-07 topic = Computer science/Data structures abstract = We provide a framework for registering automatic methods to derive class instances of datatypes, as it is possible using Haskell's ``deriving Ord, Show, ...'' feature.

We further implemented such automatic methods to derive (linear) orders or hash-functions which are required in the Isabelle Collection Framework. Moreover, for the tactic of Huffman and Krauss to show that a datatype is countable, we implemented a wrapper so that this tactic becomes accessible in our framework.

Our formalization was performed as part of the IsaFoR/CeTA project. With our new tactic we could completely remove tedious proofs for linear orders of two datatypes.

This development is aimed at datatypes generated by the "old_datatype" command. notify = rene.thiemann@uibk.ac.at [Deriving] title = Deriving class instances for datatypes author = Christian Sternagel , René Thiemann date = 2015-03-11 topic = Computer science/Data structures abstract =

We provide a framework for registering automatic methods to derive class instances of datatypes, as it is possible using Haskell's ``deriving Ord, Show, ...'' feature.
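
(For readers unfamiliar with the Haskell feature alluded to, this is what the compiler provides there; the snippet is illustrative only and unrelated to the Isabelle implementation.)

  -- GHC generates equality, ordering and printing functions for the new datatype.
  data Expr = Var String | App Expr Expr
    deriving (Eq, Ord, Show)

  main :: IO ()
  main = print (Var "x" < App (Var "x") (Var "y"))   -- True: Var precedes App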

We further implemented such automatic methods to derive comparators, linear orders, parametrizable equality functions, and hash-functions which are required in the Isabelle Collection Framework and the Container Framework. Moreover, for the tactic of Blanchette to show that a datatype is countable, we implemented a wrapper so that this tactic becomes accessible in our framework. All of the generators are based on the infrastructure that is provided by the BNF-based datatype package.

Our formalization was performed as part of the IsaFoR/CeTA project. With our new tactics we could remove several tedious proofs for (conditional) linear orders, and conditional equality operators within IsaFoR and the Container Framework.

notify = rene.thiemann@uibk.ac.at [List-Index] title = List Index date = 2010-02-20 author = Tobias Nipkow topic = Computer science/Data structures abstract = This theory provides functions for finding the index of an element in a list, by predicate and by value. notify = nipkow@in.tum.de [List-Infinite] title = Infinite Lists date = 2011-02-23 author = David Trachtenherz <> topic = Computer science/Data structures abstract = We introduce a theory of infinite lists in HOL formalized as functions over naturals (folder ListInf, theories ListInf and ListInf_Prefix). It also provides additional results for finite lists (theory ListInf/List2), natural numbers (folder CommonArith, esp. division/modulo, naturals with infinity), sets (folder CommonSet, esp. cutting/truncating sets, traversing sets of naturals). notify = nipkow@in.tum.de [Matrix] title = Executable Matrix Operations on Matrices of Arbitrary Dimensions topic = Computer science/Data structures date = 2010-06-17 author = Christian Sternagel , René Thiemann license = LGPL abstract = We provide the operations of matrix addition, multiplication, transposition, and matrix comparisons as executable functions over ordered semirings. Moreover, it is proven that strongly normalizing (monotone) orders can be lifted to strongly normalizing (monotone) orders over matrices. We further show that the standard semirings over the naturals, integers, and rationals, as well as the arctic semirings satisfy the axioms that are required by our matrix theory. Our formalization is part of the CeTA system which contains several termination techniques. The provided theories have been essential to formalize matrix-interpretations and arctic interpretations. extra-history = Change history: [2010-09-17]: Moved theory on arbitrary (ordered) semirings to Abstract Rewriting. notify = rene.thiemann@uibk.ac.at, christian.sternagel@uibk.ac.at [Matrix_Tensor] title = Tensor Product of Matrices topic = Computer science/Data structures, Mathematics/Algebra date = 2016-01-18 author = T.V.H. Prathamesh abstract = In this work, the Kronecker tensor product of matrices and the proofs of some of its properties are formalized. Properties which have been formalized include associativity of the tensor product and the mixed-product property. notify = prathamesh@imsc.res.in [Huffman] title = The Textbook Proof of Huffman's Algorithm author = Jasmin Christian Blanchette date = 2008-10-15 topic = Computer science/Data structures abstract = Huffman's algorithm is a procedure for constructing a binary tree with minimum weighted path length. This report presents a formal proof of the correctness of Huffman's algorithm written using Isabelle/HOL. Our proof closely follows the sketches found in standard algorithms textbooks, uncovering a few snags in the process. Another distinguishing feature of our formalization is the use of custom induction rules to help Isabelle's automatic tactics, leading to very short proofs for most of the lemmas. notify = jasmin.blanchette@gmail.com [Partial_Function_MR] title = Mutually Recursive Partial Functions author = René Thiemann topic = Computer science/Functional programming date = 2014-02-18 license = LGPL abstract = We provide a wrapper around the partial-function command that supports mutual recursion. 
notify = rene.thiemann@uibk.ac.at [Lifting_Definition_Option] title = Lifting Definition Option author = René Thiemann topic = Computer science/Functional programming date = 2014-10-13 license = LGPL abstract = We implemented a command that can be used to easily generate elements of a restricted type {x :: 'a. P x}, provided the definition is of the form f ys = (if check ys then Some(generate ys :: 'a) else None) where ys is a list of variables y1 ... yn and check ys ==> P(generate ys) can be proved.
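
(A rough Haskell analogue of this check/generate pattern, purely for illustration and with invented names; the actual command works on Isabelle subtypes of the form {x :: 'a. P x} and discharges the proof obligation check ys ==> P(generate ys).)

  -- Intended invariant: the wrapped Int is strictly positive.
  newtype Pos = Pos Int deriving Show

  -- check x    ~  x > 0
  -- generate x ~  Pos x
  mkPos :: Int -> Maybe Pos
  mkPos x = if x > 0 then Just (Pos x) else Nothing

  main :: IO ()
  main = mapM_ (print . mkPos) [3, -1]   -- Just (Pos 3), then Nothing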

In principle, such a definition is also directly possible using the lift_definition command. However, then this definition will not be suitable for code-generation. To this end, we automated a more complex construction of Joachim Breitner which is amenable for code-generation, and where the test check ys will only be performed once. In the automation, one auxiliary type is created, and Isabelle's lifting- and transfer-package is invoked several times. notify = rene.thiemann@uibk.ac.at [Coinductive] title = Coinductive topic = Computer science/Functional programming author = Andreas Lochbihler contributors = Johannes Hölzl date = 2010-02-12 abstract = This article collects formalisations of general-purpose coinductive data types and sets. Currently, it contains coinductive natural numbers, coinductive lists, i.e. lazy lists or streams, infinite streams, coinductive terminated lists, coinductive resumptions, a library of operations on coinductive lists, and a version of König's lemma as an application for coinductive lists.
The initial theory was contributed by Paulson and Wenzel. Extensions and other coinductive formalisations of general interest are welcome. extra-history = Change history: [2010-06-10]: coinductive lists: setup for quotient package (revision 015574f3bf3c)
[2010-06-28]: new codatatype terminated lazy lists (revision e12de475c558)
[2010-08-04]: terminated lazy lists: setup for quotient package; more lemmas (revision 6ead626f1d01)
[2010-08-17]: Koenig's lemma as an example application for coinductive lists (revision f81ce373fa96)
[2011-02-01]: lazy implementation of coinductive (terminated) lists for the code generator (revision 6034973dce83)
[2011-07-20]: new codatatype resumption (revision 811364c776c7)
[2012-06-27]: new codatatype stream with operations (with contributions by Peter Gammie) (revision dd789a56473c)
[2013-03-13]: construct codatatypes with the BNF package and adjust the definitions and proofs, setup for lifting and transfer packages (revision f593eda5b2c0)
[2013-09-20]: stream theory uses type and operations from HOL/BNF/Examples/Stream (revision 692809b2b262)
[2014-04-03]: ccpo structure on codatatypes used to define ldrop, ldropWhile, lfilter, lconcat as least fixpoint; ccpo topology on coinductive lists contributed by Johannes Hölzl; added examples (revision 23cd8156bd42)
notify = mail@andreas-lochbihler.de [Stream-Fusion] title = Stream Fusion author = Brian Huffman topic = Computer science/Functional programming date = 2009-04-29 abstract = Stream Fusion is a system for removing intermediate list structures from Haskell programs; it consists of a Haskell library along with several compiler rewrite rules. (The library is available online.)
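
(The core idea can be sketched roughly as follows; this is a simplified Haskell rendering of the library's design, not the HOLCF formalization, and the names follow the usual presentation rather than the entry.)

  {-# LANGUAGE ExistentialQuantification #-}

  -- A stream is a seed together with a step function.
  data Step s a = Done | Skip s | Yield a s
  data Stream a = forall s. Stream (s -> Step s a) s

  stream :: [a] -> Stream a
  stream xs0 = Stream next xs0
    where next []     = Done
          next (x:xs) = Yield x xs

  unstream :: Stream a -> [a]
  unstream (Stream next s0) = go s0
    where go s = case next s of
                   Done       -> []
                   Skip s'    -> go s'
                   Yield x s' -> x : go s'

  -- List functions are re-expressed on streams, so that the rewrite rule
  -- stream (unstream s) = s removes intermediate lists.
  mapS :: (a -> b) -> Stream a -> Stream b
  mapS f (Stream next s0) = Stream next' s0
    where next' s = case next s of
                      Done       -> Done
                      Skip s'    -> Skip s'
                      Yield x s' -> Yield (f x) s'

  main :: IO ()
  main = print (unstream (mapS (+ 1) (stream [1, 2, 3 :: Int])))   -- [2,3,4]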

These theories contain a formalization of much of the Stream Fusion library in HOLCF. Lazy list and stream types are defined, along with coercions between the two types, as well as an equivalence relation for streams that generate the same list. List and stream versions of map, filter, foldr, enumFromTo, append, zipWith, and concatMap are defined, and the stream versions are shown to respect stream equivalence. notify = brianh@cs.pdx.edu [Tycon] title = Type Constructor Classes and Monad Transformers author = Brian Huffman date = 2012-06-26 topic = Computer science/Functional programming abstract = These theories contain a formalization of first class type constructors and axiomatic constructor classes for HOLCF. This work is described in detail in the ICFP 2012 paper Formal Verification of Monad Transformers by the author. The formalization is a revised and updated version of earlier joint work with Matthews and White.

Based on the hierarchy of type classes in Haskell, we define classes for functors, monads, monad-plus, etc. Each one includes all the standard laws as axioms. We also provide a new user command, tycondef, for defining new type constructors in HOLCF. Using tycondef, we instantiate the type class hierarchy with various monads and monad transformers. notify = huffman@in.tum.de [CoreC++] title = CoreC++ author = Daniel Wasserrab date = 2006-05-15 topic = Computer science/Programming languages/Language definitions abstract = We present an operational semantics and type safety proof for multiple inheritance in C++. The semantics models the behavior of method calls, field accesses, and two forms of casts in C++ class hierarchies. For explanations see the OOPSLA 2006 paper by Wasserrab, Nipkow, Snelting and Tip. notify = nipkow@in.tum.de [FeatherweightJava] title = A Theory of Featherweight Java in Isabelle/HOL author = J. Nathan Foster , Dimitrios Vytiniotis date = 2006-03-31 topic = Computer science/Programming languages/Language definitions abstract = We formalize the type system, small-step operational semantics, and type soundness proof for Featherweight Java, a simple object calculus, in Isabelle/HOL. notify = kleing@cse.unsw.edu.au [Jinja] title = Jinja is not Java author = Gerwin Klein , Tobias Nipkow date = 2005-06-01 topic = Computer science/Programming languages/Language definitions abstract = We introduce Jinja, a Java-like programming language with a formal semantics designed to exhibit core features of the Java language architecture. Jinja is a compromise between realism of the language and tractability and clarity of the formal semantics. The following aspects are formalised: a big and a small step operational semantics for Jinja and a proof of their equivalence; a type system and a definite initialisation analysis; a type safety proof of the small step semantics; a virtual machine (JVM), its operational semantics and its type system; a type safety proof for the JVM; a bytecode verifier, i.e. data flow analyser for the JVM; a correctness proof of the bytecode verifier w.r.t. the type system; a compiler and a proof that it preserves semantics and well-typedness. The emphasis of this work is not on particular language features but on providing a unified model of the source language, the virtual machine and the compiler. The whole development has been carried out in the theorem prover Isabelle/HOL. notify = kleing@cse.unsw.edu.au, nipkow@in.tum.de [JinjaThreads] title = Jinja with Threads author = Andreas Lochbihler date = 2007-12-03 topic = Computer science/Programming languages/Language definitions abstract = We extend the Jinja source code semantics by Klein and Nipkow with Java-style arrays and threads. Concurrency is captured in a generic framework semantics for adding concurrency through interleaving to a sequential semantics, which features dynamic thread creation, inter-thread communication via shared memory, lock synchronisation and joins. Also, threads can suspend themselves and be notified by others. We instantiate the framework with the adapted versions of both Jinja source and byte code and show type safety for the multithreaded case. Equally, the compiler from source to byte code is extended, for which we prove weak bisimilarity between the source code small step semantics and the defensive Jinja virtual machine. On top of this, we formalise the JMM and show the DRF guarantee and consistency. 
For description of the different parts, see Lochbihler's papers at FOOL 2008, ESOP 2010, ITP 2011, and ESOP 2012. extra-history = Change history: [2008-04-23]: added bytecode formalisation with arrays and threads, added thread joins (revision f74a8be156a7)
[2009-04-27]: added verified compiler from source code to bytecode; encapsulate native methods in separate semantics (revision e4f26541e58a)
[2009-11-30]: extended compiler correctness proof to infinite and deadlocking computations (revision e50282397435)
[2010-06-08]: added thread interruption; new abstract memory model with sequential consistency as implementation (revision 0cb9e8dbd78d)
[2010-06-28]: new thread interruption model (revision c0440d0a1177)
[2010-10-15]: preliminary version of the Java memory model for source code (revision 02fee0ef3ca2)
[2010-12-16]: improved version of the Java memory model, also for bytecode executable scheduler for source code semantics (revision 1f41c1842f5a)
[2011-02-02]: simplified code generator setup new random scheduler (revision 3059dafd013f)
[2011-07-21]: new interruption model, generalized JMM proof of DRF guarantee, allow class Object to declare methods and fields, simplified subtyping relation, corrected division and modulo implementation (revision 46e4181ed142)
[2012-02-16]: added example programs (revision bf0b06c8913d)
[2012-11-21]: type safety proof for the Java memory model, allow spurious wake-ups (revision 76063d860ae0)
[2013-05-16]: support for non-deterministic memory allocators (revision cc3344a49ced)
[2017-10-20]: add an atomic compare-and-swap operation for volatile fields (revision a6189b1d6b30)
notify = mail@andreas-lochbihler.de [Locally-Nameless-Sigma] title = Locally Nameless Sigma Calculus author = Ludovic Henrio , Florian Kammüller , Bianca Lutz , Henry Sudhof date = 2010-04-30 topic = Computer science/Programming languages/Language definitions abstract = We present a Theory of Objects based on the original functional sigma-calculus by Abadi and Cardelli but with an additional parameter to methods. We prove confluence of the operational semantics following the outline of Nipkow's proof of confluence for the lambda-calculus reusing his theory Commutation, a generic diamond lemma reduction. We furthermore formalize a simple type system for our sigma-calculus including a proof of type safety. The entire development uses the concept of Locally Nameless representation for binders. We reuse an earlier proof of confluence for a simpler sigma-calculus based on de Bruijn indices and lists to represent objects. notify = nipkow@in.tum.de [Attack_Trees] title = Attack Trees in Isabelle for GDPR compliance of IoT healthcare systems author = Florian Kammueller topic = Computer science/Security date = 2020-04-27 notify = florian.kammuller@gmail.com abstract = In this article, we present a proof theory for Attack Trees. Attack Trees are a well established and useful model for the construction of attacks on systems since they allow a stepwise exploration of high level attacks in application scenarios. Using the expressiveness of Higher Order Logic in Isabelle, we develop a generic theory of Attack Trees with a state-based semantics based on Kripke structures and CTL. The resulting framework allows mechanically supported logic analysis of the meta-theory of the proof calculus of Attack Trees and at the same time the developed proof theory enables application to case studies. A central correctness and completeness result proved in Isabelle establishes a connection between the notion of Attack Tree validity and CTL. The application is illustrated on the example of a healthcare IoT system and GDPR compliance verification. [AutoFocus-Stream] title = AutoFocus Stream Processing for Single-Clocking and Multi-Clocking Semantics author = David Trachtenherz <> date = 2011-02-23 topic = Computer science/Programming languages/Language definitions abstract = We formalize the AutoFocus Semantics (a time-synchronous subset of the Focus formalism) as stream processing functions on finite and infinite message streams represented as finite/infinite lists. The formalization comprises both the conventional single-clocking semantics (uniform global clock for all components and communications channels) and its extension to multi-clocking semantics (internal execution clocking of a component may be a multiple of the external communication clocking). The semantics is defined by generic stream processing functions making it suitable for simulation/code generation in Isabelle/HOL. Furthermore, a number of AutoFocus semantics properties are formalized using definitions from the IntervalLogic theories. notify = nipkow@in.tum.de [FocusStreamsCaseStudies] title = Stream Processing Components: Isabelle/HOL Formalisation and Case Studies author = Maria Spichkova date = 2013-11-14 topic = Computer science/Programming languages/Language definitions abstract = This set of theories presents an Isabelle/HOL formalisation of stream processing components introduced in Focus, a framework for formal specification and development of interactive systems. 
This is an extended and updated version of the formalisation, which was elaborated within the methodology "Focus on Isabelle". In addition, we also applied the formalisation on three case studies that cover different application areas: process control (Steam Boiler System), data transmission (FlexRay communication protocol), memory and processing components (Automotive-Gateway System). notify = lp15@cam.ac.uk, maria.spichkova@rmit.edu.au [Isabelle_Meta_Model] title = A Meta-Model for the Isabelle API author = Frédéric Tuong , Burkhart Wolff date = 2015-09-16 topic = Computer science/Programming languages/Language definitions abstract = We represent a theory of (a fragment of) Isabelle/HOL in Isabelle/HOL. The purpose of this exercise is to write packages for domain-specific specifications such as class models, B-machines, ..., and generally speaking, any domain-specific languages whose abstract syntax can be defined by a HOL "datatype". On this basis, the Isabelle code-generator can then be used to generate code for global context transformations as well as tactic code.

Consequently, the package is geared towards parsing, printing and code-generation to the Isabelle API. It is at the moment not sufficiently rich for doing meta theory on Isabelle itself. Extensions in this direction are possible, though.

Moreover, the chosen fragment is fairly rudimentary. However, it should be easily adaptable to one's needs if a package is written on top of it. The supported API contains types, terms, transformations of the global context, such as definitions and data-type declarations, as well as infrastructure for Isar setups.

This theory is drawn from the Featherweight OCL project where it is used to construct a package for object-oriented data-type theories generated from UML class diagrams. The Featherweight OCL, for example, allows for both the direct execution of compiled tactic code by the Isabelle API as well as the generation of ".thy"-files for debugging purposes.

Gained experience from this project shows that the compiled code is sufficiently efficient for practical purposes while being based on a formal model on which properties of the package can be proven such as termination of certain transformations, correctness, etc. notify = tuong@users.gforge.inria.fr, wolff@lri.fr [Clean] title = Clean - An Abstract Imperative Programming Language and its Theory author = Frédéric Tuong , Burkhart Wolff topic = Computer science/Programming languages, Computer science/Semantics date = 2019-10-04 notify = wolff@lri.fr, ftuong@lri.fr abstract = Clean is based on a simple, abstract execution model for an imperative target language. “Abstract” is understood in contrast to “Concrete Semantics”; alternatively, the term “shallow-style embedding” could be used. It strives for a type-safe notion of program-variables, an incremental construction of the typed state-space, support of incremental verification, and open-world extensibility of new type definitions being intertwined with the program definitions. Clean is based on a “no-frills” state-exception monad with the usual definitions of bind and unit for the compositional glue of state-based computations. Clean offers conditionals and loops supporting C-like control-flow operators such as break and return. The state-space construction is based on the extensible record package. Direct recursion of procedures is supported. Clean’s design strives for extreme simplicity. It is geared towards symbolic execution and proven correct verification tools. The underlying libraries of this package, however, deliberately restrict themselves to the most elementary infrastructure for these tasks. The package is intended to serve as demonstrator semantic backend for Isabelle/C, or for the test-generation techniques. [PCF] title = Logical Relations for PCF author = Peter Gammie date = 2012-07-01 topic = Computer science/Programming languages/Lambda calculi abstract = We apply Andy Pitts's methods of defining relations over domains to several classical results in the literature. We show that the Y combinator coincides with the domain-theoretic fixpoint operator, that parallel-or and the Plotkin existential are not definable in PCF, that the continuation semantics for PCF coincides with the direct semantics, and that our domain-theoretic semantics for PCF is adequate for reasoning about contextual equivalence in an operational semantics. Our version of PCF is untyped and has both strict and non-strict function abstractions. The development is carried out in HOLCF. notify = peteg42@gmail.com [POPLmark-deBruijn] title = POPLmark Challenge Via de Bruijn Indices author = Stefan Berghofer date = 2007-08-02 topic = Computer science/Programming languages/Lambda calculi abstract = We present a solution to the POPLmark challenge designed by Aydemir et al., which has as a goal the formalization of the meta-theory of System F<:. The formalization is carried out in the theorem prover Isabelle/HOL using an encoding based on de Bruijn indices. We start with a relatively simple formalization covering only the basic features of System F<:, and explain how it can be extended to also cover records and more advanced binding constructs. notify = berghofe@in.tum.de [Lam-ml-Normalization] title = Strong Normalization of Moggis's Computational Metalanguage author = Christian Doczkal date = 2010-08-29 topic = Computer science/Programming languages/Lambda calculi abstract = Handling variable binding is one of the main difficulties in formal proofs. 
In this context, Moggi's computational metalanguage serves as an interesting case study. It features monadic types and a commuting conversion rule that rearranges the binding structure. Lindley and Stark have given an elegant proof of strong normalization for this calculus. The key construction in their proof is a notion of relational TT-lifting, using stacks of elimination contexts to obtain a Girard-Tait style logical relation. I give a formalization of their proof in Isabelle/HOL-Nominal with a particular emphasis on the treatment of bound variables. notify = doczkal@ps.uni-saarland.de, nipkow@in.tum.de [MiniML] title = Mini ML author = Wolfgang Naraschewski <>, Tobias Nipkow date = 2004-03-19 topic = Computer science/Programming languages/Type systems abstract = This theory defines the type inference rules and the type inference algorithm W for MiniML (simply-typed lambda terms with let) due to Milner. It proves the soundness and completeness of W w.r.t. the rules. notify = kleing@cse.unsw.edu.au [Simpl] title = A Sequential Imperative Programming Language Syntax, Semantics, Hoare Logics and Verification Environment author = Norbert Schirmer <> date = 2008-02-29 topic = Computer science/Programming languages/Language definitions, Computer science/Programming languages/Logics license = LGPL abstract = We present the theory of Simpl, a sequential imperative programming language. We introduce its syntax, its semantics (big and small-step operational semantics) and Hoare logics for both partial as well as total correctness. We prove soundness and completeness of the Hoare logic. We integrate and automate the Hoare logic in Isabelle/HOL to obtain a practically usable verification environment for imperative programs. Simpl is independent of a concrete programming language but expressive enough to cover all common language features: mutually recursive procedures, abrupt termination and exceptions, runtime faults, local and global variables, pointers and heap, expressions with side effects, pointers to procedures, partial application and closures, dynamic method invocation and also unbounded nondeterminism. notify = kleing@cse.unsw.edu.au, norbert.schirmer@web.de [Separation_Algebra] title = Separation Algebra author = Gerwin Klein , Rafal Kolanski , Andrew Boyton date = 2012-05-11 topic = Computer science/Programming languages/Logics license = BSD abstract = We present a generic type class implementation of separation algebra for Isabelle/HOL as well as lemmas and generic tactics which can be used directly for any instantiation of the type class.

The ex directory contains example instantiations that include structures such as a heap or virtual memory.

The abstract separation algebra is based upon "Abstract Separation Logic" by Calcagno et al. These theories are also the basis of the ITP 2012 rough diamond "Mechanised Separation Algebra" by the authors.
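
For orientation, the algebraic core can be summarised as a commutative partial monoid (this is the standard presentation in the style of Calcagno et al.; the exact axioms of the Isabelle type class may differ in detail): a carrier with a unit 0, a disjointness relation ##, and a partial addition +, subject to laws such as
\[
x \mathbin{\#\#} 0, \qquad x + 0 = x, \qquad
x \mathbin{\#\#} y \Longrightarrow \bigl(y \mathbin{\#\#} x \,\wedge\, x + y = y + x\bigr), \qquad
x \mathbin{\#\#} y \,\wedge\, (x + y) \mathbin{\#\#} z \Longrightarrow x + (y + z) = (x + y) + z .
\]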

The aim of this work is to support and significantly reduce the effort for future separation logic developments in Isabelle/HOL by factoring out the part of separation logic that can be treated abstractly once and for all. This includes developing typical default rule sets for reasoning as well as automated tactic support for separation logic. notify = kleing@cse.unsw.edu.au, rafal.kolanski@nicta.com.au [Separation_Logic_Imperative_HOL] title = A Separation Logic Framework for Imperative HOL author = Peter Lammich , Rene Meis date = 2012-11-14 topic = Computer science/Programming languages/Logics license = BSD abstract = We provide a framework for separation-logic-based correctness proofs of Imperative HOL programs. Our framework comes with a set of proof methods to automate canonical tasks such as verification condition generation and frame inference. Moreover, we provide a set of examples that show the applicability of our framework. The examples include algorithms on lists, hash-tables, and union-find trees. We also provide abstract interfaces for lists, maps, and sets that allow one to develop generic imperative algorithms and to use data-refinement techniques.
As we target Imperative HOL, our programs can be translated to efficiently executable code in various target languages, including ML, OCaml, Haskell, and Scala. notify = lammich@in.tum.de [Inductive_Confidentiality] title = Inductive Study of Confidentiality author = Giampaolo Bella date = 2012-05-02 topic = Computer science/Security abstract = This document contains the full theory files accompanying article Inductive Study of Confidentiality --- for Everyone in Formal Aspects of Computing. They aim at an illustrative and didactic presentation of the Inductive Method of protocol analysis, focusing on the treatment of one of the main goals of security protocols: confidentiality against a threat model. The treatment of confidentiality, which in fact forms a key aspect of all protocol analysis tools, has been found cryptic by many learners of the Inductive Method, hence the motivation for this work. The theory files in this document guide the reader step by step towards design and proof of significant confidentiality theorems. These are developed against two threat models, the standard Dolev-Yao and a more audacious one, the General Attacker, which turns out to be particularly useful also for teaching purposes. notify = giamp@dmi.unict.it [Possibilistic_Noninterference] title = Possibilistic Noninterference author = Andrei Popescu , Johannes Hölzl date = 2012-09-10 topic = Computer science/Security, Computer science/Programming languages/Type systems abstract = We formalize a wide variety of Volpano/Smith-style noninterference notions for a while language with parallel composition. We systematize and classify these notions according to compositionality w.r.t. the language constructs. Compositionality yields sound syntactic criteria (a.k.a. type systems) in a uniform way.

An article about these proofs is published in the proceedings of the conference Certified Programs and Proofs 2012. notify = hoelzl@in.tum.de [SIFUM_Type_Systems] title = A Formalization of Assumptions and Guarantees for Compositional Noninterference author = Sylvia Grewe , Heiko Mantel , Daniel Schoepe date = 2014-04-23 topic = Computer science/Security, Computer science/Programming languages/Type systems abstract = Research in information-flow security aims at developing methods to identify undesired information leaks within programs from private (high) sources to public (low) sinks. For a concurrent system, it is desirable to have compositional analysis methods that allow for analyzing each thread independently and that nevertheless guarantee that the parallel composition of successfully analyzed threads satisfies a global security guarantee. However, such a compositional analysis should not be overly pessimistic about what an environment might do with shared resources. Otherwise, the analysis will reject many intuitively secure programs.

The paper "Assumptions and Guarantees for Compositional Noninterference" by Mantel et. al. presents one solution for this problem: an approach for compositionally reasoning about non-interference in concurrent programs via rely-guarantee-style reasoning. We present an Isabelle/HOL formalization of the concepts and proofs of this approach. notify = [Dependent_SIFUM_Type_Systems] title = A Dependent Security Type System for Concurrent Imperative Programs author = Toby Murray , Robert Sison<>, Edward Pierzchalski<>, Christine Rizkallah notify = toby.murray@unimelb.edu.au date = 2016-06-25 topic = Computer science/Security, Computer science/Programming languages/Type systems abstract = The paper "Compositional Verification and Refinement of Concurrent Value-Dependent Noninterference" by Murray et. al. (CSF 2016) presents a dependent security type system for compositionally verifying a value-dependent noninterference property, defined in (Murray, PLAS 2015), for concurrent programs. This development formalises that security definition, the type system and its soundness proof, and demonstrates its application on some small examples. It was derived from the SIFUM_Type_Systems AFP entry, by Sylvia Grewe, Heiko Mantel and Daniel Schoepe, and whose structure it inherits. extra-history = Change history: [2016-08-19]: Removed unused "stop" parameter and "stop_no_eval" assumption from the sifum_security locale. (revision dbc482d36372) [2016-09-27]: Added security locale support for the imposition of requirements on the initial memory. (revision cce4ceb74ddb) [Dependent_SIFUM_Refinement] title = Compositional Security-Preserving Refinement for Concurrent Imperative Programs author = Toby Murray , Robert Sison<>, Edward Pierzchalski<>, Christine Rizkallah notify = toby.murray@unimelb.edu.au date = 2016-06-28 topic = Computer science/Security abstract = The paper "Compositional Verification and Refinement of Concurrent Value-Dependent Noninterference" by Murray et. al. (CSF 2016) presents a compositional theory of refinement for a value-dependent noninterference property, defined in (Murray, PLAS 2015), for concurrent programs. This development formalises that refinement theory, and demonstrates its application on some small examples. extra-history = Change history: [2016-08-19]: Removed unused "stop" parameters from the sifum_refinement locale. (revision dbc482d36372) [2016-09-02]: TobyM extended "simple" refinement theory to be usable for all bisimulations. (revision 547f31c25f60) [Relational-Incorrectness-Logic] title = An Under-Approximate Relational Logic author = Toby Murray topic = Computer science/Programming languages/Logics, Computer science/Security date = 2020-03-12 notify = toby.murray@unimelb.edu.au abstract = Recently, authors have proposed under-approximate logics for reasoning about programs. So far, all such logics have been confined to reasoning about individual program behaviours. Yet there exist many over-approximate relational logics for reasoning about pairs of programs and relating their behaviours. We present the first under-approximate relational logic, for the simple imperative language IMP. We prove our logic is both sound and complete. Additionally, we show how reasoning in this logic can be decomposed into non-relational reasoning in an under-approximate Hoare logic, mirroring Beringer’s result for over-approximate relational logics. We illustrate the application of our logic on some small examples in which we provably demonstrate the presence of insecurity. 
[Strong_Security] title = A Formalization of Strong Security author = Sylvia Grewe , Alexander Lux , Heiko Mantel , Jens Sauer date = 2014-04-23 topic = Computer science/Security, Computer science/Programming languages/Type systems abstract = Research in information-flow security aims at developing methods to identify undesired information leaks within programs from private sources to public sinks. Noninterference captures this intuition. Strong security from Sabelfeld and Sands formalizes noninterference for concurrent systems.

We present an Isabelle/HOL formalization of strong security for arbitrary security lattices (Sabelfeld and Sands use a two-element security lattice in the original publication). The formalization includes compositionality proofs for strong security and a soundness proof for a security type system that checks strong security for programs in a simple while language with dynamic thread creation.

Our formalization of the security type system is abstract in the language for expressions and in the semantic side conditions for expressions. It can easily be instantiated with different syntactic approximations for these side conditions. The soundness proof of such an instantiation boils down to showing that these syntactic approximations imply the semantic side conditions. notify = [WHATandWHERE_Security] title = A Formalization of Declassification with WHAT-and-WHERE-Security author = Sylvia Grewe , Alexander Lux , Heiko Mantel , Jens Sauer date = 2014-04-23 topic = Computer science/Security, Computer science/Programming languages/Type systems abstract = Research in information-flow security aims at developing methods to identify undesired information leaks within programs from private sources to public sinks. Noninterference captures this intuition by requiring that no information whatsoever flows from private sources to public sinks. However, in practice this definition is often too strict: Depending on the intuitive desired security policy, the controlled declassification of certain private information (WHAT) at certain points in the program (WHERE) might not result in an undesired information leak.

We present an Isabelle/HOL formalization of such a security property for controlled declassification, namely WHAT&WHERE-security from "Scheduler-Independent Declassification" by Lux, Mantel, and Perner. The formalization includes compositionality proofs for WHAT&WHERE-security and a soundness proof for a security type system that checks WHAT&WHERE-security for programs in a simple while language with dynamic thread creation.

Our formalization of the security type system is abstract in the language for expressions and in the semantic side conditions for expressions. It can easily be instantiated with different syntactic approximations for these side conditions. The soundness proof of such an instantiation boils down to showing that these syntactic approximations imply the semantic side conditions.

This Isabelle/HOL formalization uses theories from the entry Strong Security. notify = [VolpanoSmith] title = A Correctness Proof for the Volpano/Smith Security Typing System author = Gregor Snelting , Daniel Wasserrab date = 2008-09-02 topic = Computer science/Programming languages/Type systems, Computer science/Security abstract = The Volpano/Smith/Irvine security type system requires that variables are annotated as high (secret) or low (public), and provides typing rules which guarantee that secret values cannot leak to public output ports. This property of a program is called confidentiality. For a simple while-language without threads, our proof shows that typeability in the Volpano/Smith system guarantees noninterference. Noninterference means that if two initial states for program execution are low-equivalent, then the final states are low-equivalent as well. This indeed implies that secret values cannot leak to public ports. The proof defines an abstract syntax and operational semantics for programs, formalizes noninterference, and then proceeds by rule induction on the operational semantics. The mathematically most intricate part is the treatment of implicit flows. Note that the Volpano/Smith system is not flow-sensitive and thus quite imprecise, resulting in false alarms. However, due to the correctness property, all potential breaks of confidentiality are discovered. notify = [Abstract-Hoare-Logics] title = Abstract Hoare Logics author = Tobias Nipkow date = 2006-08-08 topic = Computer science/Programming languages/Logics abstract = These theories describe Hoare logics for a number of imperative language constructs, from while-loops to mutually recursive procedures. Both partial and total correctness are treated. In particular, a proof system for total correctness of recursive procedures in the presence of unbounded nondeterminism is presented. notify = nipkow@in.tum.de [Stone_Algebras] title = Stone Algebras author = Walter Guttmann notify = walter.guttmann@canterbury.ac.nz date = 2016-09-06 topic = Mathematics/Order abstract = A range of algebras between lattices and Boolean algebras generalise the notion of a complement. We develop a hierarchy of these pseudo-complemented algebras that includes Stone algebras. Independently of this theory we study filters based on partial orders. Both theories are combined to prove Chen and Grätzer's construction theorem for Stone algebras. The latter involves extensive reasoning about algebraic structures in addition to reasoning in algebraic structures. [Kleene_Algebra] title = Kleene Algebra author = Alasdair Armstrong <>, Georg Struth , Tjark Weber date = 2013-01-15 topic = Computer science/Programming languages/Logics, Computer science/Automata and formal languages, Mathematics/Algebra abstract = These files contain a formalisation of variants of Kleene algebras and their most important models as axiomatic type classes in Isabelle/HOL. Kleene algebras are foundational structures in computing with applications ranging from automata and language theory to computational modeling, program construction and verification.

We start with formalising dioids, which are additively idempotent semirings, and expand them by axiomatisations of the Kleene star for finite iteration and an omega operation for infinite iteration. We show that powersets over a given monoid, (regular) languages, sets of paths in a graph, sets of computation traces, binary relations and formal power series form Kleene algebras, and consider further models based on lattices, max-plus semirings and min-plus semirings. We also demonstrate that dioids are closed under the formation of matrices (proofs for Kleene algebras remain to be completed).
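
For reference, the Kleene star in such algebras is typically governed by unfold and induction laws of the following shape (one standard axiomatisation; the entry factors these laws over the weaker structures in its hierarchy):
\[
1 + x \cdot x^{*} \le x^{*}, \qquad
1 + x^{*} \cdot x \le x^{*}, \qquad
z + x \cdot y \le y \;\Longrightarrow\; x^{*} \cdot z \le y, \qquad
z + y \cdot x \le y \;\Longrightarrow\; z \cdot x^{*} \le y .
\]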

On the one hand, we have aimed at a reference formalisation of variants of Kleene algebras that covers a wide range of variants and the core theorems in a structured and modular way and provides readable proofs at textbook level. On the other hand, we intend to use this algebraic hierarchy and its models as a generic algebraic middle-layer from which programming applications can quickly be explored, implemented and verified. notify = g.struth@sheffield.ac.uk, tjark.weber@it.uu.se [KAT_and_DRA] title = Kleene Algebra with Tests and Demonic Refinement Algebras author = Alasdair Armstrong <>, Victor B. F. Gomes , Georg Struth date = 2014-01-23 topic = Computer science/Programming languages/Logics, Computer science/Automata and formal languages, Mathematics/Algebra abstract = We formalise Kleene algebra with tests (KAT) and demonic refinement algebra (DRA) in Isabelle/HOL. KAT is relevant for program verification and correctness proofs in the partial correctness setting, while DRA targets similar applications in the context of total correctness. Our formalisation contains the two most important models of these algebras: binary relations in the case of KAT and predicate transformers in the case of DRA. In addition, we derive the inference rules for Hoare logic in KAT and its relational model and present a simple formally verified program verification tool prototype based on the algebraic approach. notify = g.struth@dcs.shef.ac.uk [KAD] title = Kleene Algebras with Domain author = Victor B. F. Gomes , Walter Guttmann , Peter Höfner , Georg Struth , Tjark Weber date = 2016-04-12 topic = Computer science/Programming languages/Logics, Computer science/Automata and formal languages, Mathematics/Algebra abstract = Kleene algebras with domain are Kleene algebras endowed with an operation that maps each element of the algebra to its domain of definition (or its complement) in abstract fashion. They form a simple algebraic basis for Hoare logics, dynamic logics or predicate transformer semantics. We formalise a modular hierarchy of algebras with domain and antidomain (domain complement) operations in Isabelle/HOL that ranges from domain and antidomain semigroups to modal Kleene algebras and divergence Kleene algebras. We link these algebras with models of binary relations and program traces. We include some examples from modal logics, termination and program analysis. notify = walter.guttman@canterbury.ac.nz, g.struth@sheffield.ac.uk, tjark.weber@it.uu.se [Regular_Algebras] title = Regular Algebras author = Simon Foster , Georg Struth date = 2014-05-21 topic = Computer science/Automata and formal languages, Mathematics/Algebra abstract = Regular algebras axiomatise the equational theory of regular expressions as induced by regular language identity. We use Isabelle/HOL for a detailed systematic study of regular algebras given by Boffa, Conway, Kozen and Salomaa. We investigate the relationships between these classes, formalise a soundness proof for the smallest class (Salomaa's) and obtain completeness of the largest one (Boffa's) relative to a deep result by Krob. In addition, we provide a large collection of regular identities in the general setting of Boffa's axiom. Our regular algebra hierarchy is orthogonal to the Kleene algebra hierarchy in the Archive of Formal Proofs; we have not aimed at an integration for pragmatic reasons. 
notify = simon.foster@york.ac.uk, g.struth@sheffield.ac.uk [BytecodeLogicJmlTypes] title = A Bytecode Logic for JML and Types author = Lennart Beringer <>, Martin Hofmann date = 2008-12-12 topic = Computer science/Programming languages/Logics abstract = This document contains the Isabelle/HOL sources underlying the paper A bytecode logic for JML and types by Beringer and Hofmann, updated to Isabelle 2008. We present a program logic for a subset of sequential Java bytecode that is suitable for representing both features found in the high-level specification language JML and interpretations of high-level type systems. To this end, we introduce a fine-grained collection of assertions, including strong invariants, local annotations and VDM-reminiscent partial-correctness specifications. Thanks to a goal-oriented structure and interpretation of judgements, verification may proceed without recourse to an additional control flow analysis. The suitability for interpreting intensional type systems is illustrated by the proof-carrying-code style encoding of a type system for a first-order functional language which guarantees a constant upper bound on the number of objects allocated throughout an execution, be the execution terminating or non-terminating. Like the published paper, the formal development is restricted to a comparatively small subset of the JVML, lacking (among other features) exceptions, arrays, virtual methods, and static fields. This shortcoming has meanwhile been overcome, as our paper has formed the basis of the Mobius base logic, a program logic for the full sequential fragment of the JVML. Indeed, the present formalisation formed the basis of a subsequent formalisation of the Mobius base logic in the proof assistant Coq, which includes a proof of soundness with respect to the Bicolano operational semantics by Pichardie. notify = [DataRefinementIBP] title = Semantics and Data Refinement of Invariant Based Programs author = Viorel Preoteasa , Ralph-Johan Back date = 2010-05-28 topic = Computer science/Programming languages/Logics abstract = Invariant based programming is a technique for constructing correct programs by first identifying the basic situations (pre- and post-conditions and invariants) that can occur during the execution of the program, and then defining the transitions and proving that they preserve the invariants. Data refinement is a technique of building correct programs working on concrete datatypes as refinements of more abstract programs. In the theories presented here, we formalize the predicate transformer semantics for invariant based programs and their data refinement. extra-history = Change history: [2012-01-05]: Moved some general complete lattice properties to the AFP entry Lattice Properties. Changed the definition of the data refinement relation to be more general and updated all corresponding theorems. Added new syntax for demonic and angelic update statements. notify = viorel.preoteasa@aalto.fi [RefinementReactive] title = Formalization of Refinement Calculus for Reactive Systems author = Viorel Preoteasa date = 2014-10-08 topic = Computer science/Programming languages/Logics abstract = We present a formalization of refinement calculus for reactive systems. Refinement calculus is based on monotonic predicate transformers (monotonic functions from sets of post-states to sets of pre-states), and it is a powerful formalism for reasoning about imperative programs. 
We model reactive systems as monotonic property transformers that transform sets of infinite output sequences into sets of infinite input sequences. Within this semantics, we can model refinement of reactive systems, (unbounded) angelic and demonic nondeterminism, sequential composition, and other semantic properties. We can model systems that may fail for some inputs, and we can model compatibility of systems. We can specify systems that have liveness properties using linear temporal logic, and we can refine system specifications into systems based on symbolic transition systems, suitable for implementations. notify = viorel.preoteasa@aalto.fi [SIFPL] title = Secure information flow and program logics author = Lennart Beringer <>, Martin Hofmann date = 2008-11-10 topic = Computer science/Programming languages/Logics, Computer science/Security abstract = We present interpretations of type systems for secure information flow in Hoare logic, complementing previous encodings in relational program logics. We first treat the imperative language IMP, extended by a simple procedure call mechanism. For this language we consider base-line non-interference in the style of Volpano et al. and the flow-sensitive type system by Hunt and Sands. In both cases, we show how typing derivations may be used to automatically generate proofs in the program logic that certify the absence of illicit flows. We then add instructions for object creation and manipulation, and derive appropriate proof rules for base-line non-interference. As a consequence of our work, standard verification technology may be used for verifying that a concrete program satisfies the non-interference property.

The present proof development represents an update of the formalisation underlying our paper [CSF 2007] and is intended to resolve any ambiguities that may be present in the paper. notify = lennart.beringer@ifi.lmu.de [TLA] title = A Definitional Encoding of TLA* in Isabelle/HOL author = Gudmund Grov , Stephan Merz date = 2011-11-19 topic = Computer science/Programming languages/Logics abstract = We mechanise the logic TLA* [Merz 1999], an extension of Lamport's Temporal Logic of Actions (TLA) [Lamport 1994] for specifying and reasoning about concurrent and reactive systems. Aiming at a framework for mechanising the verification of TLA (or TLA*) specifications, this contribution reuses some elements from a previous axiomatic encoding of TLA in Isabelle/HOL by the second author [Merz 1998], which has been part of the Isabelle distribution. In contrast to that previous work, we give here a shallow, definitional embedding, with the following highlights:

  • a theory of infinite sequences, including a formalisation of the concepts of stuttering invariance central to TLA and TLA*;
  • a definition of the semantics of TLA*, which extends TLA by a mutually-recursive definition of formulas and pre-formulas, generalising TLA action formulas;
  • a substantial set of derived proof rules, including the TLA* axioms and Lamport's proof rules for system verification;
  • a set of examples illustrating the usage of Isabelle/TLA* for reasoning about systems.
Note that this work is unrelated to the ongoing development of a proof system for the specification language TLA+, which includes an encoding of TLA+ as a new Isabelle object logic [Chaudhuri et al 2010]. notify = ggrov@inf.ed.ac.uk [Compiling-Exceptions-Correctly] title = Compiling Exceptions Correctly author = Tobias Nipkow date = 2004-07-09 topic = Computer science/Programming languages/Compiling abstract = An exception compilation scheme that dynamically creates and removes exception handler entries on the stack. A formalization of an article of the same name by Hutton and Wright. notify = nipkow@in.tum.de [NormByEval] title = Normalization by Evaluation author = Klaus Aehlig , Tobias Nipkow date = 2008-02-18 topic = Computer science/Programming languages/Compiling abstract = This article formalizes normalization by evaluation as implemented in Isabelle. Lambda calculus plus term rewriting is compiled into a functional program with pattern matching. It is proved that the result of a successful evaluation is a) correct, i.e. equivalent to the input, and b) in normal form. notify = nipkow@in.tum.de [Program-Conflict-Analysis] title = Formalization of Conflict Analysis of Programs with Procedures, Thread Creation, and Monitors topic = Computer science/Programming languages/Static analysis author = Peter Lammich , Markus Müller-Olm date = 2007-12-14 abstract = In this work we formally verify the soundness and precision of a static program analysis that detects conflicts (e. g. data races) in programs with procedures, thread creation and monitors with the Isabelle theorem prover. As common in static program analysis, our program model abstracts guarded branching by nondeterministic branching, but completely interprets the call-/return behavior of procedures, synchronization by monitors, and thread creation. The analysis is based on the observation that all conflicts already occur in a class of particularly restricted schedules. These restricted schedules are suited to constraint-system-based program analysis. The formalization is based upon a flowgraph-based program model with an operational semantics as reference point. notify = peter.lammich@uni-muenster.de [Shivers-CFA] title = Shivers' Control Flow Analysis topic = Computer science/Programming languages/Static analysis author = Joachim Breitner date = 2010-11-16 abstract = In his dissertation, Olin Shivers introduces a concept of control flow graphs for functional languages, provides an algorithm to statically derive a safe approximation of the control flow graph and proves this algorithm correct. In this research project, Shivers' algorithms and proofs are formalized in the HOLCF extension of HOL. notify = mail@joachim-breitner.de, nipkow@in.tum.de [Slicing] title = Towards Certified Slicing author = Daniel Wasserrab date = 2008-09-16 topic = Computer science/Programming languages/Static analysis abstract = Slicing is a widely-used technique with applications in e.g. compiler technology and software security. Thus verification of algorithms in these areas is often based on the correctness of slicing, which should ideally be proven independent of concrete programming languages and with the help of well-known verifying techniques such as proof assistants. As a first step in this direction, this contribution presents a framework for dynamic and static intraprocedural slicing based on control flow and program dependence graphs. 
Abstracting from concrete syntax we base the framework on a graph representation of the program fulfilling certain structural and well-formedness properties.

The formalization consists of the basic framework (in subdirectory Basic/), the correctness proof for dynamic slicing (in subdirectory Dynamic/), the correctness proof for static intraprocedural slicing (in subdirectory StaticIntra/) and instantiations of the framework with a simple While language (in subdirectory While/) and the sophisticated object-oriented bytecode language of Jinja (in subdirectory JinjaVM/). For more information on the framework, see the TPHOLs 2008 paper by Wasserrab and Lochbihler and the PLAS 2009 paper by Wasserrab et al. notify = [HRB-Slicing] title = Backing up Slicing: Verifying the Interprocedural Two-Phase Horwitz-Reps-Binkley Slicer author = Daniel Wasserrab date = 2009-11-13 topic = Computer science/Programming languages/Static analysis abstract = After verifying dynamic and static interprocedural slicing, we present a modular framework for static interprocedural slicing. To this end, we formalized the standard two-phase slicer from Horwitz, Reps and Binkley (see their TOPLAS 12(1) 1990 paper) together with summary edges as presented by Reps et al. (see FSE 1994). The framework is again modular in the programming language by using an abstract CFG, defined via structural and well-formedness properties. Using a weak simulation between the original and sliced graph, we were able to prove the correctness of static interprocedural slicing. We also instantiate our framework with a simple While language with procedures. This shows that the chosen abstractions are indeed valid. notify = nipkow@in.tum.de [WorkerWrapper] title = The Worker/Wrapper Transformation author = Peter Gammie date = 2009-10-30 topic = Computer science/Programming languages/Transformations abstract = Gill and Hutton formalise the worker/wrapper transformation, building on the work of Launchbury and Peyton-Jones who developed it as a way of changing the type at which a recursive function operates. This development establishes the soundness of the technique and several examples of its use. notify = peteg42@gmail.com, nipkow@in.tum.de [JiveDataStoreModel] title = Jive Data and Store Model author = Nicole Rauch , Norbert Schirmer <> date = 2005-06-20 license = LGPL topic = Computer science/Programming languages/Misc abstract = This document presents the formalization of an object-oriented data and store model in Isabelle/HOL. This model is being used in the Java Interactive Verification Environment, Jive. notify = kleing@cse.unsw.edu.au, schirmer@in.tum.de [HotelKeyCards] title = Hotel Key Card System author = Tobias Nipkow date = 2006-09-09 topic = Computer science/Security abstract = Two models of an electronic hotel key card system are contrasted: a state-based and a trace-based one. Both are defined, verified, and proved equivalent in the theorem prover Isabelle/HOL. It is shown that if a guest follows a certain safety policy regarding her key cards, she can be sure that nobody but her can enter her room. notify = nipkow@in.tum.de [RSAPSS] title = SHA1, RSA, PSS and more author = Christina Lindenberg <>, Kai Wirt <> date = 2005-05-02 topic = Computer science/Security/Cryptography abstract = Formal verification is getting more and more important in computer science. However, the state-of-the-art formal verification methods in cryptography are very rudimentary. These theories are one step towards providing a tool box that allows the use of formal methods in every aspect of cryptography. Moreover, we present a proof of concept demonstrating the feasibility of applying verification techniques to a standard signature algorithm. 
notify = nipkow@in.tum.de [InformationFlowSlicing] title = Information Flow Noninterference via Slicing author = Daniel Wasserrab date = 2010-03-23 topic = Computer science/Security abstract =

In this contribution, we show how correctness proofs for intra- and interprocedural slicing can be used to prove that slicing is able to guarantee information flow noninterference. Moreover, we also illustrate how to lift the control flow graphs of the respective frameworks such that they fulfil the additional assumptions needed in the noninterference proofs. A detailed description of the intraprocedural proof and its interplay with the slicing framework can be found in the PLAS'09 paper by Wasserrab et al.

This entry contains the part for intra-procedural slicing. See entry InformationFlowSlicing_Inter for the inter-procedural part.

extra-history = Change history: [2016-06-10]: The original entry InformationFlowSlicing, which contained both the inter- and intra-procedural case, was split into two entries for easier maintenance. notify = [InformationFlowSlicing_Inter] title = Inter-Procedural Information Flow Noninterference via Slicing author = Daniel Wasserrab date = 2010-03-23 topic = Computer science/Security abstract =

In this contribution, we show how correctness proofs for intra- and interprocedural slicing can be used to prove that slicing is able to guarantee information flow noninterference. Moreover, we also illustrate how to lift the control flow graphs of the respective frameworks such that they fulfil the additional assumptions needed in the noninterference proofs. A detailed description of the intraprocedural proof and its interplay with the slicing framework can be found in the PLAS'09 paper by Wasserrab et al.

This entry contains the part for inter-procedural slicing. See entry InformationFlowSlicing for the intra-procedural part.

extra-history = Change history: [2016-06-10]: The original entry InformationFlowSlicing, which contained both the inter- and intra-procedural case, was split into two entries for easier maintenance. notify = [ComponentDependencies] title = Formalisation and Analysis of Component Dependencies author = Maria Spichkova date = 2014-04-28 topic = Computer science/System description languages abstract = This set of theories presents a formalisation in Isabelle/HOL of data dependencies between components. The approach allows one to analyse the system structure with a view towards efficient checking of the system: it aims at elaborating, for a concrete system, which parts of the system are necessary to check a given property. notify = maria.spichkova@rmit.edu.au [Verified-Prover] title = A Mechanically Verified, Efficient, Sound and Complete Theorem Prover For First Order Logic author = Tom Ridge <> date = 2004-09-28 topic = Logic/General logic/Mechanization of proofs abstract = Soundness and completeness for a system of first order logic are formally proved, building on James Margetson's formalization of work by Wainer and Wallen. The completeness proofs naturally suggest an algorithm to derive proofs. This algorithm, which can be implemented tail recursively, is formalized in Isabelle/HOL. The algorithm can be executed via the rewriting tactics of Isabelle. Alternatively, the definitions can be exported to OCaml, yielding a directly executable program. notify = lp15@cam.ac.uk [Completeness] title = Completeness theorem author = James Margetson <>, Tom Ridge <> date = 2004-09-20 topic = Logic/Proof theory abstract = The completeness of first-order logic is proved, following the first five pages of Wainer and Wallen's chapter of the book Proof Theory by Aczel et al., CUP, 1992. Their presentation of formulas allows the proofs to use symmetry arguments. Margetson formalized this theorem by early 2000. The Isar conversion is thanks to Tom Ridge. A paper describing the formalization is available [pdf]. notify = lp15@cam.ac.uk [Ordinal] title = Countable Ordinals author = Brian Huffman date = 2005-11-11 topic = Logic/Set theory abstract = This development defines a well-ordered type of countable ordinals. It includes notions of continuous and normal functions, recursively defined functions over ordinals, least fixed-points, and derivatives. Much of ordinal arithmetic is formalized, including exponentials and logarithms. The development concludes with formalizations of Cantor Normal Form and Veblen hierarchies over normal functions. notify = lcp@cl.cam.ac.uk [Ordinals_and_Cardinals] title = Ordinals and Cardinals author = Andrei Popescu date = 2009-09-01 topic = Logic/Set theory abstract = We develop a basic theory of ordinals and cardinals in Isabelle/HOL, up to the point where some cardinality facts relevant for the "working mathematician" become available. Unlike in set theory, here we do not have at hand canonical notions of ordinal and cardinal. Therefore, here an ordinal is merely a well-order relation and a cardinal is an ordinal that is minimal w.r.t. order embedding on its field. extra-history = Change history: [2012-09-25]: This entry has been discontinued because it is now part of the Isabelle distribution. 
notify = uuomul@yahoo.com, nipkow@in.tum.de [FOL-Fitting] title = First-Order Logic According to Fitting author = Stefan Berghofer contributors = Asta Halkjær From date = 2007-08-02 topic = Logic/General logic/Classical first-order logic abstract = We present a formalization of parts of Melvin Fitting's book "First-Order Logic and Automated Theorem Proving". The formalization covers the syntax of first-order logic, its semantics, the model existence theorem, a natural deduction proof calculus together with a proof of correctness and completeness, as well as the Löwenheim-Skolem theorem. extra-history = Change history: [2018-07-21]: Proved completeness theorem for open formulas. Proofs are now written in the declarative style. Enumeration of pairs and datatypes is automated using the Countable theory. notify = berghofe@in.tum.de [Epistemic_Logic] title = Epistemic Logic: Completeness of Modal Logics author = Asta Halkjær From topic = Logic/General logic/Logics of knowledge and belief date = 2018-10-29 notify = ahfrom@dtu.dk abstract = This work is a formalization of epistemic logic with countably many agents. It includes proofs of soundness and completeness for the axiom system K. The completeness proof is based on the textbook "Reasoning About Knowledge" by Fagin, Halpern, Moses and Vardi (MIT Press 1995). The extensions of system K (T, KB, K4, S4, S5) and their completeness proofs are based on the textbook "Modal Logic" by Blackburn, de Rijke and Venema (Cambridge University Press 2001). extra-history = Change history: [2020-04-15]: Added completeness of modal logics T, KB, K4, S4 and S5. [SequentInvertibility] title = Invertibility in Sequent Calculi author = Peter Chapman <> date = 2009-08-28 topic = Logic/Proof theory license = LGPL abstract = The invertibility of the rules of a sequent calculus is important for guiding proof search and can be used in some formalised proofs of Cut admissibility. We present sufficient conditions for when a rule is invertible with respect to a calculus. We illustrate the conditions with examples. It must be noted that we give purely syntactic criteria; no guarantees are given as to the suitability of the rules. notify = pc@cs.st-andrews.ac.uk, nipkow@in.tum.de [LinearQuantifierElim] title = Quantifier Elimination for Linear Arithmetic author = Tobias Nipkow date = 2008-01-11 topic = Logic/General logic/Decidability of theories abstract = This article formalizes quantifier elimination procedures for dense linear orders, linear real arithmetic and Presburger arithmetic. In each case both a DNF-based non-elementary algorithm and one or more (doubly) exponential NNF-based algorithms are formalized, including the well-known algorithms by Ferrante and Rackoff and by Cooper. The NNF-based algorithms for dense linear orders are new but based on Ferrante and Rackoff and on an algorithm by Loos and Weisspfenning which simulates infinitesimals. All algorithms are directly executable. In particular, they yield reflective quantifier elimination procedures for HOL itself. The formalization makes heavy use of locales and is therefore highly modular. notify = nipkow@in.tum.de [Nat-Interval-Logic] title = Interval Temporal Logic on Natural Numbers author = David Trachtenherz <> date = 2011-02-23 topic = Logic/General logic/Temporal logic abstract = We introduce a theory of temporal logic operators using sets of natural numbers as time domain, formalized in a shallow-embedding manner. 
The theory comprises special natural intervals (theory IL_Interval: open and closed intervals, continuous and modulo intervals, interval traversing results), operators for shifting intervals to the left/right on the number axis as well as expanding/contracting intervals by constant factors (theory IL_IntervalOperators.thy), and ultimately definitions and results for unary and binary temporal operators on arbitrary natural sets (theory IL_TemporalOperators). notify = nipkow@in.tum.de [Recursion-Theory-I] title = Recursion Theory I author = Michael Nedzelsky <> date = 2008-04-05 topic = Logic/Computability abstract = This document presents the formalization of introductory material from recursion theory --- definitions and basic properties of primitive recursive functions, the Cantor pairing function and computably enumerable sets (including a proof of the existence of a one-complete computably enumerable set and a proof of Rice's theorem). notify = MichaelNedzelsky@yandex.ru [Free-Boolean-Algebra] topic = Logic/General logic/Classical propositional logic title = Free Boolean Algebra author = Brian Huffman date = 2010-03-29 abstract = This theory defines a type constructor representing the free Boolean algebra over a set of generators. Values of type (α)formula represent propositional formulas with uninterpreted variables from type α, ordered by implication. In addition to all the standard Boolean algebra operations, the library also provides a function for building homomorphisms to any other Boolean algebra type. notify = brianh@cs.pdx.edu [Sort_Encodings] title = Sound and Complete Sort Encodings for First-Order Logic author = Jasmin Christian Blanchette , Andrei Popescu date = 2013-06-27 topic = Logic/General logic/Mechanization of proofs abstract = This is a formalization of the soundness and completeness properties for various efficient encodings of sorts in unsorted first-order logic used by Isabelle's Sledgehammer tool.

Essentially, the encodings proceed as follows: a many-sorted problem is decorated with (as few as possible) tags or guards that make the problem monotonic; then sorts can be soundly erased.
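
As a schematic illustration of the guard-based variant (illustrative only; the precise encodings and their monotonicity-based optimisations are as described in the formalization), a sorted clause is relativised by a sort predicate before the sorts are erased:
\[
\forall x{:}\sigma.\; C[x] \quad\rightsquigarrow\quad \forall x.\; \neg\, g_{\sigma}(x) \,\vee\, C[x],
\]
together with axioms stating that every function symbol with result sort σ yields values satisfying g_σ; the tag-based variant instead wraps terms in sort tags.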

The development employs a formalization of many-sorted first-order logic in clausal form (clauses, structures and the basic properties of the satisfaction relation), which could be of interest as the starting point for other formalizations of first-order logic metatheory. notify = uuomul@yahoo.com [Lambda_Free_RPOs] title = Formalization of Recursive Path Orders for Lambda-Free Higher-Order Terms author = Jasmin Christian Blanchette , Uwe Waldmann , Daniel Wand date = 2016-09-23 topic = Logic/Rewriting abstract = This Isabelle/HOL formalization defines recursive path orders (RPOs) for higher-order terms without lambda-abstraction and proves many useful properties about them. The main order fully coincides with the standard RPO on first-order terms also in the presence of currying, distinguishing it from previous work. An optimized variant is formalized as well. It appears promising as the basis of a higher-order superposition calculus. notify = jasmin.blanchette@gmail.com [Lambda_Free_KBOs] title = Formalization of Knuth–Bendix Orders for Lambda-Free Higher-Order Terms author = Heiko Becker , Jasmin Christian Blanchette , Uwe Waldmann , Daniel Wand date = 2016-11-12 topic = Logic/Rewriting abstract = This Isabelle/HOL formalization defines Knuth–Bendix orders for higher-order terms without lambda-abstraction and proves many useful properties about them. The main order fully coincides with the standard transfinite KBO with subterm coefficients on first-order terms. It appears promising as the basis of a higher-order superposition calculus. notify = jasmin.blanchette@gmail.com [Lambda_Free_EPO] title = Formalization of the Embedding Path Order for Lambda-Free Higher-Order Terms author = Alexander Bentkamp topic = Logic/Rewriting date = 2018-10-19 notify = a.bentkamp@vu.nl abstract = This Isabelle/HOL formalization defines the Embedding Path Order (EPO) for higher-order terms without lambda-abstraction and proves many useful properties about it. In contrast to the lambda-free recursive path orders, it does not fully coincide with RPO on first-order terms, but it is compatible with arbitrary higher-order contexts. [Nested_Multisets_Ordinals] title = Formalization of Nested Multisets, Hereditary Multisets, and Syntactic Ordinals author = Jasmin Christian Blanchette , Mathias Fleury , Dmitriy Traytel date = 2016-11-12 topic = Logic/Rewriting abstract = This Isabelle/HOL formalization introduces a nested multiset datatype and defines Dershowitz and Manna's nested multiset order. The order is proved well founded and linear. By removing one constructor, we transform the nested multisets into hereditary multisets. These are isomorphic to the syntactic ordinals—the ordinals can be recursively expressed in Cantor normal form. Addition, subtraction, multiplication, and linear orders are provided on this type. notify = jasmin.blanchette@gmail.com [Abstract-Rewriting] title = Abstract Rewriting topic = Logic/Rewriting date = 2010-06-14 author = Christian Sternagel , René Thiemann license = LGPL abstract = We present an Isabelle formalization of abstract rewriting (see, e.g., the book by Baader and Nipkow). First, we define standard relations like joinability, meetability, conversion, etc. Then, we formalize important properties of abstract rewrite systems, e.g., confluence and strong normalization. Our main concern is on strong normalization, since this formalization is the basis of CeTA (which is mainly about strong normalization of term rewrite systems). 
Hence lemmas involving strong normalization constitute by far the biggest part of this theory. One of those is Newman's lemma. extra-history = Change history: [2010-09-17]: Added theories defining several (ordered) semirings related to strong normalization and giving some standard instances.
[2013-10-16]: Generalized delta-orders from rationals to Archimedean fields. notify = christian.sternagel@uibk.ac.at, rene.thiemann@uibk.ac.at [First_Order_Terms] title = First-Order Terms author = Christian Sternagel , René Thiemann topic = Logic/Rewriting, Computer science/Algorithms license = LGPL date = 2018-02-06 notify = c.sternagel@gmail.com, rene.thiemann@uibk.ac.at abstract = We formalize basic results on first-order terms, including matching and a first-order unification algorithm, as well as well-foundedness of the subsumption order. This entry is part of the Isabelle Formalization of Rewriting IsaFoR, where first-order terms are omnipresent: the unification algorithm is used to certify several confluence and termination techniques, like critical-pair computation and dependency graph approximations; and the subsumption order is a crucial ingredient for completion. [Free-Groups] title = Free Groups author = Joachim Breitner date = 2010-06-24 topic = Mathematics/Algebra abstract = Free Groups are, in a sense, the most generic kind of group. They are defined over a set of generators with no additional relations between them. They play an important role in the definition of group presentations and in other fields. This theory provides the definition of Free Group as the set of fully canceled words in the generators. The universal property is proven, as well as some isomorphism results about Free Groups. extra-history = Change history: [2011-12-11]: Added the Ping Pong Lemma. notify = [CofGroups] title = An Example of a Cofinitary Group in Isabelle/HOL author = Bart Kastermans date = 2009-08-04 topic = Mathematics/Algebra abstract = We formalize the usual proof that the group generated by the function k -> k + 1 on the integers gives rise to a cofinitary group. notify = nipkow@in.tum.de [Group-Ring-Module] title = Groups, Rings and Modules author = Hidetsune Kobayashi <>, L. Chen <>, H. Murao <> date = 2004-05-18 topic = Mathematics/Algebra abstract = The theory of groups, rings and modules is developed to a great depth. Group theory results include Zassenhaus's theorem and the Jordan-Hoelder theorem. The ring theory development includes ideals, quotient rings and the Chinese remainder theorem. The module development includes the Nakayama lemma, exact sequences and tensor products. notify = lp15@cam.ac.uk [Robbins-Conjecture] title = A Complete Proof of the Robbins Conjecture author = Matthew Wampler-Doty <> date = 2010-05-22 topic = Mathematics/Algebra abstract = This document gives a formalization of the proof of the Robbins conjecture, following A. Mann, A Complete Proof of the Robbins Conjecture, 2003. notify = nipkow@in.tum.de [Valuation] title = Fundamental Properties of Valuation Theory and Hensel's Lemma author = Hidetsune Kobayashi <> date = 2007-08-08 topic = Mathematics/Algebra abstract = Convergence with respect to a valuation is discussed as convergence of a Cauchy sequence. Cauchy sequences of polynomials are defined. They are used to formalize Hensel's lemma. notify = lp15@cam.ac.uk [Rank_Nullity_Theorem] title = Rank-Nullity Theorem in Linear Algebra author = Jose Divasón , Jesús Aransay topic = Mathematics/Algebra date = 2013-01-16 abstract = In this contribution, we present some formalizations based on the HOL-Multivariate-Analysis session of Isabelle. Firstly, a generalization of several theorems of that library is presented. Secondly, some definitions and proofs involving Linear Algebra and the four fundamental subspaces of a matrix are shown. 
Finally, we present a proof of the result known in Linear Algebra as the "Rank-Nullity Theorem", which states that, given any linear map f from a finite-dimensional vector space V to a vector space W, the dimension of V is equal to the sum of the dimension of the kernel of f (which is a subspace of V) and the dimension of the range of f (which is a subspace of W). The proof presented here is based on the one given by Sheldon Axler in his book Linear Algebra Done Right. As a corollary of the previous theorem, and taking advantage of the relationship between linear maps and matrices, we prove that, for every matrix A (which has associated a linear map between finite dimensional vector spaces), the sum of the dimensions of its null space and its column space (the latter being equal to the range of the linear map) is equal to the number of columns of A. extra-history = Change history: [2014-07-14]: Added some generalizations that allow us to formalize the Rank-Nullity Theorem over finite dimensional vector spaces, instead of over the more particular Euclidean spaces. Updated abstract. notify = jose.divasonm@unirioja.es, jesus-maria.aransay@unirioja.es [Affine_Arithmetic] title = Affine Arithmetic author = Fabian Immler date = 2014-02-07 topic = Mathematics/Analysis abstract = We give a formalization of affine forms as abstract representations of zonotopes. We provide affine operations as well as overapproximations of some non-affine operations like multiplication and division. Expressions involving those operations can automatically be turned into (executable) functions approximating the original expression in affine arithmetic. extra-history = Change history: [2015-01-31]: added algorithm for zonotope/hyperplane intersection
[2017-09-20]: linear approximations for all symbols from the floatarith data type notify = immler@in.tum.de [Laplace_Transform] title = Laplace Transform author = Fabian Immler topic = Mathematics/Analysis date = 2019-08-14 notify = fimmler@cs.cmu.edu abstract = This entry formalizes the Laplace transform and concrete Laplace transforms for arithmetic functions, frequency shift, integration and (higher) differentiation in the time domain. It proves Lerch's lemma and uniqueness of the Laplace transform for continuous functions. In order to formalize the foundational assumptions, this entry contains a formalization of piecewise continuous functions and functions of exponential order. [Cauchy] title = Cauchy's Mean Theorem and the Cauchy-Schwarz Inequality author = Benjamin Porter <> date = 2006-03-14 topic = Mathematics/Analysis abstract = This document presents the mechanised proofs of two popular theorems attributed to Augustin Louis Cauchy - Cauchy's Mean Theorem and the Cauchy-Schwarz Inequality. notify = kleing@cse.unsw.edu.au [Integration] title = Integration theory and random variables author = Stefan Richter date = 2004-11-19 topic = Mathematics/Analysis abstract = Lebesgue-style integration plays a major role in advanced probability. We formalize concepts of elementary measure theory, real-valued random variables as Borel-measurable functions, and a stepwise inductive definition of the integral itself. All proofs are carried out in human readable style using the Isar language. extra-note = Note: This article is of historical interest only. Lebesgue-style integration and probability theory are now available as part of the Isabelle/HOL distribution (directory Probability). notify = richter@informatik.rwth-aachen.de, nipkow@in.tum.de, hoelzl@in.tum.de [Ordinary_Differential_Equations] title = Ordinary Differential Equations author = Fabian Immler , Johannes Hölzl topic = Mathematics/Analysis date = 2012-04-26 abstract =

Session Ordinary-Differential-Equations formalizes ordinary differential equations (ODEs) and initial value problems. This work comprises proofs for local and global existence of unique solutions (Picard-Lindelöf theorem). Moreover, it contains a formalization of the (continuous or even differentiable) dependency of the flow on initial conditions as the flow of ODEs.
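
In classical notation, the initial value problems considered are of the form (standard statement; the formalization phrases existence, uniqueness and the flow in more general terms)
\[
\dot{x}(t) = f(t, x(t)), \qquad x(t_0) = x_0,
\]
and the Picard-Lindelöf theorem yields a unique local solution when f is continuous and locally Lipschitz in its second argument.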

Not in the generated document are the following sessions:

  • HOL-ODE-Numerics: Rigorous numerical algorithms for computing enclosures of solutions based on Runge-Kutta methods and affine arithmetic. Reachability analysis with splitting and reduction at hyperplanes.
  • HOL-ODE-Examples: Applications of the numerical algorithms to concrete systems of ODEs.
  • Lorenz_C0, Lorenz_C1: Verified algorithms for checking C1-information according to Tucker's proof, computation of C0-information.

extra-history = Change history: [2014-02-13]: added an implementation of the Euler method based on affine arithmetic
[2016-04-14]: added flow and variational equation
[2016-08-03]: numerical algorithms for reachability analysis (using second-order Runge-Kutta methods, splitting, and reduction) implemented using Lammich's framework for automatic refinement
[2017-09-20]: added Poincare map and propagation of variational equation in reachability analysis, verified algorithms for C1-information and computations for C0-information of the Lorenz attractor. notify = immler@in.tum.de, hoelzl@in.tum.de [Polynomials] title = Executable Multivariate Polynomials author = Christian Sternagel , René Thiemann , Alexander Maletzky , Fabian Immler , Florian Haftmann , Andreas Lochbihler , Alexander Bentkamp date = 2010-08-10 topic = Mathematics/Analysis, Mathematics/Algebra, Computer science/Algorithms/Mathematical license = LGPL abstract = We define multivariate polynomials over arbitrary (ordered) semirings in combination with (executable) operations like addition, multiplication, and substitution. We also define (weak) monotonicity of polynomials and comparison of polynomials where we provide standard estimations like absolute positiveness or the more recent approach of Neurauter, Zankl, and Middeldorp. Moreover, it is proven that strongly normalizing (monotone) orders can be lifted to strongly normalizing (monotone) orders over polynomials. Our formalization was performed as part of the IsaFoR/CeTA-system which contains several termination techniques. The provided theories have been essential to formalize polynomial interpretations.
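
As a small illustration of how such polynomials are used in termination proofs (an illustrative example, not one taken from the entry): interpreting a binary symbol f over the naturals by the monotone polynomial
\[
[\![ f ]\!](x, y) = x + y + 1
\]
makes a rewrite rule such as f(x, y) -> x strictly decreasing, since x + y + 1 > x holds for all natural numbers x and y.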

This formalization also contains an abstract representation as coefficient functions with finite support and a type of power-products. If this type is ordered by a linear (term) ordering, various additional notions, such as leading power-product, leading coefficient etc., are introduced as well. Furthermore, a lot of generic properties of, and functions on, multivariate polynomials are formalized, including the substitution and evaluation homomorphisms, embeddings of polynomial rings into larger rings (i.e. with one additional indeterminate), homogenization and dehomogenization of polynomials, and the canonical isomorphism between R[X,Y] and R[X][Y]. extra-history = Change history: [2010-09-17]: Moved theories on arbitrary (ordered) semirings to Abstract Rewriting.
[2016-10-28]: Added abstract representation of polynomials and authors Maletzky/Immler.
[2018-01-23]: Added authors Haftmann, Lochbihler after incorporating their formalization of multivariate polynomials based on Polynomial mappings. Moved material from Bentkamp's entry "Deep Learning".
[2019-04-18]: Added material about polynomials whose power-products are represented themselves by polynomial mappings. notify = rene.thiemann@uibk.ac.at, christian.sternagel@uibk.ac.at, alexander.maletzky@risc.jku.at, immler@in.tum.de [Sqrt_Babylonian] title = Computing N-th Roots using the Babylonian Method author = René Thiemann date = 2013-01-03 topic = Mathematics/Analysis license = LGPL abstract = We implement the Babylonian method to compute n-th roots of numbers. We provide precise algorithms for naturals, integers and rationals, and offer an approximation algorithm for square roots over linearly ordered fields. Moreover, there are precise algorithms to compute the floor and the ceiling of n-th roots. extra-history = Change history: [2013-10-16]: Added algorithms to compute floor and ceiling of sqrt of integers. [2014-07-11]: Moved NthRoot_Impl from Real-Impl to this entry. notify = rene.thiemann@uibk.ac.at [Sturm_Sequences] title = Sturm's Theorem author = Manuel Eberl date = 2014-01-11 topic = Mathematics/Analysis abstract = Sturm's Theorem states that polynomial sequences with certain properties, so-called Sturm sequences, can be used to count the number of real roots of a real polynomial. This work contains a proof of Sturm's Theorem and code for constructing Sturm sequences efficiently. It also provides the “sturm” proof method, which can decide certain statements about the roots of real polynomials, such as “the polynomial P has exactly n roots in the interval I” or “P(x) > Q(x) for all x ∈ ℝ”. notify = eberlm@in.tum.de [Sturm_Tarski] title = The Sturm-Tarski Theorem author = Wenda Li date = 2014-09-19 topic = Mathematics/Analysis abstract = We have formalized the Sturm-Tarski theorem (also referred to as the Tarski theorem), which generalizes Sturm's theorem. Sturm's theorem is usually used as a way to count distinct real roots, while the Sturm-Tarski theorem forms the basis for Tarski's classic quantifier elimination for real closed fields. notify = wl302@cam.ac.uk [Markov_Models] title = Markov Models author = Johannes Hölzl , Tobias Nipkow date = 2012-01-03 topic = Mathematics/Probability theory, Computer science/Automata and formal languages abstract = This is a formalization of Markov models in Isabelle/HOL. It builds on Isabelle's probability theory. The available models are currently Discrete-Time Markov Chains and extensions of them with rewards.

As application of these models we formalize probabilistic model checking of pCTL formulas, analysis of IPv4 address allocation in ZeroConf and an analysis of the anonymity of the Crowds protocol. See here for the corresponding paper. notify = hoelzl@in.tum.de [Probabilistic_System_Zoo] title = A Zoo of Probabilistic Systems author = Johannes Hölzl , Andreas Lochbihler , Dmitriy Traytel date = 2015-05-27 topic = Computer science/Automata and formal languages abstract = Numerous models of probabilistic systems are studied in the literature. Coalgebra has been used to classify them into system types and compare their expressiveness. We formalize the resulting hierarchy of probabilistic system types by modeling the semantics of the different systems as codatatypes. This approach yields simple and concise proofs, as bisimilarity coincides with equality for codatatypes.

This work is described in detail in the ITP 2015 publication by the authors. notify = traytel@in.tum.de [Density_Compiler] title = A Verified Compiler for Probability Density Functions author = Manuel Eberl , Johannes Hölzl , Tobias Nipkow date = 2014-10-09 topic = Mathematics/Probability theory, Computer science/Programming languages/Compiling abstract = Bhat et al. [TACAS 2013] developed an inductive compiler that computes density functions for probability spaces described by programs in a probabilistic functional language. In this work, we implement such a compiler for a modified version of this language within the theorem prover Isabelle and give a formal proof of its soundness w.r.t. the semantics of the source and target language. Together with Isabelle's code generation for inductive predicates, this yields a fully verified, executable density compiler. The proof is done in two steps: First, an abstract compiler working with abstract functions modelled directly in the theorem prover's logic is defined and proved sound. Then, this compiler is refined to a concrete version that returns a target-language expression.

An article with the same title and authors is published in the proceedings of ESOP 2015. A detailed presentation of this work can be found in the first author's master's thesis. notify = hoelzl@in.tum.de [CAVA_Automata] title = The CAVA Automata Library author = Peter Lammich date = 2014-05-28 topic = Computer science/Automata and formal languages abstract = We report on the graph and automata library that is used in the fully verified LTL model checker CAVA. As most components of CAVA use some type of graphs or automata, a common automata library simplifies assembly of the components and reduces redundancy.

The CAVA Automata Library provides a hierarchy of graph and automata classes, together with some standard algorithms. Its object-oriented design allows for sharing of algorithms, theorems, and implementations between its classes, and also simplifies extensions of the library. Moreover, it is integrated into the Automatic Refinement Framework, supporting automatic refinement of the abstract automata types to efficient data structures.

Note that the CAVA Automata Library is work in progress. Currently, it is very specifically tailored towards the requirements of the CAVA model checker. Nevertheless, the formalization techniques presented here allow an extension of the library to a wider scope. Moreover, they are not limited to graph libraries, but apply to class hierarchies in general.

The CAVA Automata Library is described in the paper: Peter Lammich, The CAVA Automata Library, Isabelle Workshop 2014. notify = lammich@in.tum.de [LTL] title = Linear Temporal Logic author = Salomon Sickert contributors = Benedikt Seidl date = 2016-03-01 topic = Logic/General logic/Temporal logic, Computer science/Automata and formal languages abstract = This theory provides a formalisation of linear temporal logic (LTL) and unifies previous formalisations within the AFP. This entry establishes syntax and semantics for this logic and decouples it from existing entries, yielding a common environment for theories reasoning about LTL. Furthermore a parser written in SML and an executable simplifier are provided. extra-history = Change history: [2019-03-12]: Support for additional operators, implementation of common equivalence relations, definition of syntactic fragments of LTL and the minimal disjunctive normal form.
notify = sickert@in.tum.de [LTL_to_GBA] title = Converting Linear-Time Temporal Logic to Generalized Büchi Automata author = Alexander Schimpf , Peter Lammich date = 2014-05-28 topic = Computer science/Automata and formal languages abstract = We formalize linear-time temporal logic (LTL) and the algorithm by Gerth et al. to convert LTL formulas to generalized Büchi automata. We also formalize some syntactic rewrite rules that can be applied to optimize the LTL formula before conversion. Moreover, we integrate the Stuttering Equivalence AFP-Entry by Stefan Merz, adapting the lemma that next-free LTL formulas cannot distinguish between stuttering-equivalent runs to our setting.

We use the Isabelle Refinement and Collection framework, as well as the Autoref tool, to obtain a refined version of our algorithm, from which efficiently executable code can be extracted. notify = lammich@in.tum.de [Gabow_SCC] title = Verified Efficient Implementation of Gabow's Strongly Connected Components Algorithm author = Peter Lammich date = 2014-05-28 topic = Computer science/Algorithms/Graph, Mathematics/Graph theory abstract = We present an Isabelle/HOL formalization of Gabow's algorithm for finding the strongly connected components of a directed graph. Using data refinement techniques, we extract efficient code that performs comparable to a reference implementation in Java. Our style of formalization allows for re-using large parts of the proofs when defining variants of the algorithm. We demonstrate this by verifying an algorithm for the emptiness check of generalized Büchi automata, re-using most of the existing proofs. notify = lammich@in.tum.de [Promela] title = Promela Formalization author = René Neumann date = 2014-05-28 topic = Computer science/System description languages abstract = We present an executable formalization of the language Promela, the description language for models of the model checker SPIN. This formalization is part of the work for a completely verified model checker (CAVA), but also serves as a useful (and executable!) description of the semantics of the language itself, something that is currently missing. The formalization uses three steps: It takes an abstract syntax tree generated from an SML parser, removes syntactic sugar and enriches it with type information. This further gets translated into a transition system, on which the semantic engine (read: successor function) operates. notify = [CAVA_LTL_Modelchecker] title = A Fully Verified Executable LTL Model Checker author = Javier Esparza , Peter Lammich , René Neumann , Tobias Nipkow , Alexander Schimpf , Jan-Georg Smaus date = 2014-05-28 topic = Computer science/Automata and formal languages abstract = We present an LTL model checker whose code has been completely verified using the Isabelle theorem prover. The checker consists of over 4000 lines of ML code. The code is produced using the Isabelle Refinement Framework, which allows us to split its correctness proof into (1) the proof of an abstract version of the checker, consisting of a few hundred lines of ``formalized pseudocode'', and (2) a verified refinement step in which mathematical sets and other abstract structures are replaced by implementations of efficient structures like red-black trees and functional arrays. This leads to a checker that, while still slower than unverified checkers, can already be used as a trusted reference implementation against which advanced implementations can be tested.

An early version of this model checker is described in the CAV 2013 paper with the same title. notify = lammich@in.tum.de [Fermat3_4] title = Fermat's Last Theorem for Exponents 3 and 4 and the Parametrisation of Pythagorean Triples author = Roelof Oosterhuis <> date = 2007-08-12 topic = Mathematics/Number theory abstract = This document presents the mechanised proofs of

  • Fermat's Last Theorem for exponents 3 and 4 and
  • the parametrisation of Pythagorean Triples.
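
As a purely illustrative aside (not quoted from the entry), the parametrisation in question generates Pythagorean triples as (m^2 - n^2, 2mn, m^2 + n^2); a few lines of Python confirm the defining identity:

    # Classical parametrisation of Pythagorean triples (illustrative check).
    for m in range(2, 20):
        for n in range(1, m):
            a, b, c = m * m - n * n, 2 * m * n, m * m + n * n
            assert a * a + b * b == c * c   # a^2 + b^2 = c^2 holds for every m > n > 0
    print("parametrisation identity checked for all m < 20")
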
notify = nipkow@in.tum.de, roelofoosterhuis@gmail.com [Perfect-Number-Thm] title = Perfect Number Theorem author = Mark Ijbema date = 2009-11-22 topic = Mathematics/Number theory abstract = These theories present the mechanised proof of the Perfect Number Theorem. notify = nipkow@in.tum.de [SumSquares] title = Sums of Two and Four Squares author = Roelof Oosterhuis <> date = 2007-08-12 topic = Mathematics/Number theory abstract = This document presents the mechanised proofs of the following results:
  • any prime number of the form 4m+1 can be written as the sum of two squares;
  • any natural number can be written as the sum of four squares
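For illustration (a brute-force Python check, independent of the mechanised proofs), both statements can be tested on small numbers:

    # Brute-force sanity check of the two statements above (illustrative only).
    from math import isqrt

    def sum_of_two_squares(n):
        r = isqrt(n)
        return any(a * a + b * b == n for a in range(r + 1) for b in range(r + 1))

    def sum_of_four_squares(n):
        r = isqrt(n)
        return any(a * a + b * b + c * c + d * d == n
                   for a in range(r + 1) for b in range(r + 1)
                   for c in range(r + 1) for d in range(r + 1))

    def is_prime(n):
        return n > 1 and all(n % k for k in range(2, isqrt(n) + 1))

    assert all(sum_of_two_squares(p) for p in range(2, 200) if is_prime(p) and p % 4 == 1)
    assert all(sum_of_four_squares(n) for n in range(100))
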
notify = nipkow@in.tum.de, roelofoosterhuis@gmail.com [Lehmer] title = Lehmer's Theorem author = Simon Wimmer , Lars Noschinski date = 2013-07-22 topic = Mathematics/Number theory abstract = In 1927, Lehmer presented criteria for primality, based on the converse of Fermat's little theorem. This work formalizes the second criterion from Lehmer's paper, a necessary and sufficient condition for primality.
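
As an illustration of such a criterion (a naive Python sketch, not derived from the formalization): n > 1 is prime iff some witness a satisfies a^(n-1) ≡ 1 (mod n) while a^((n-1)/q) ≢ 1 (mod n) for every prime factor q of n-1.

    # Naive sketch of Lehmer's primality criterion (illustrative only).
    def prime_factors(m):
        fs, d = set(), 2
        while d * d <= m:
            while m % d == 0:
                fs.add(d)
                m //= d
            d += 1
        if m > 1:
            fs.add(m)
        return fs

    def lehmer_prime(n):
        if n < 2:
            return False
        qs = prime_factors(n - 1)
        return any(pow(a, n - 1, n) == 1 and
                   all(pow(a, (n - 1) // q, n) != 1 for q in qs)
                   for a in range(1, n))

    primes_by_lehmer = [p for p in range(2, 60) if lehmer_prime(p)]
    primes_by_trial = [p for p in range(2, 60) if all(p % k for k in range(2, p))]
    assert primes_by_lehmer == primes_by_trial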

As a side product we formalize some properties of Euler's phi-function, the notion of the order of an element of a group, and the cyclicity of the multiplicative group of a finite field. notify = noschinl@gmail.com, simon.wimmer@tum.de [Pratt_Certificate] title = Pratt's Primality Certificates author = Simon Wimmer , Lars Noschinski date = 2013-07-22 topic = Mathematics/Number theory abstract = In 1975, Pratt introduced a proof system for certifying primes. He showed that a number p is prime iff a primality certificate for p exists. By showing a logarithmic upper bound on the length of the certificates in size of the prime number, he concluded that the decision problem for prime numbers is in NP. This work formalizes soundness and completeness of Pratt's proof system as well as an upper bound for the size of the certificate. notify = noschinl@gmail.com, simon.wimmer@tum.de [Monad_Memo_DP] title = Monadification, Memoization and Dynamic Programming author = Simon Wimmer , Shuwei Hu , Tobias Nipkow topic = Computer science/Programming languages/Transformations, Computer science/Algorithms, Computer science/Functional programming date = 2018-05-22 notify = wimmers@in.tum.de abstract = We present a lightweight framework for the automatic verified (functional or imperative) memoization of recursive functions. Our tool can turn a pure Isabelle/HOL function definition into a monadified version in a state monad or the Imperative HOL heap monad, and prove a correspondence theorem. We provide a variety of memory implementations for the two types of monads. A number of simple techniques allow us to achieve bottom-up computation and space-efficient memoization. The framework’s utility is demonstrated on a number of representative dynamic programming problems. A detailed description of our work can be found in the accompanying paper [2]. [Probabilistic_Timed_Automata] title = Probabilistic Timed Automata author = Simon Wimmer , Johannes Hölzl topic = Mathematics/Probability theory, Computer science/Automata and formal languages date = 2018-05-24 notify = wimmers@in.tum.de, hoelzl@in.tum.de abstract = We present a formalization of probabilistic timed automata (PTA) for which we try to follow the formula MDP + TA = PTA as far as possible: our work starts from our existing formalizations of Markov decision processes (MDP) and timed automata (TA) and combines them modularly. We prove the fundamental result for probabilistic timed automata: the region construction that is known from timed automata carries over to the probabilistic setting. In particular, this allows us to prove that minimum and maximum reachability probabilities can be computed via a reduction to MDP model checking, including the case where one wants to disregard unrealizable behavior. Further information can be found in our ITP paper [2]. [Hidden_Markov_Models] title = Hidden Markov Models author = Simon Wimmer topic = Mathematics/Probability theory, Computer science/Algorithms date = 2018-05-25 notify = wimmers@in.tum.de abstract = This entry contains a formalization of hidden Markov models [3] based on Johannes Hölzl's formalization of discrete time Markov chains [1]. The basic definitions are provided and the correctness of two main (dynamic programming) algorithms for hidden Markov models is proved: the forward algorithm for computing the likelihood of an observed sequence, and the Viterbi algorithm for decoding the most probable hidden state sequence. The Viterbi algorithm is made executable including memoization. 
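For illustration, here is a compact and unverified Python sketch of the textbook Viterbi recursion (the toy weather model below is made up and has no connection to the entry):

    # Textbook Viterbi decoding on a hypothetical two-state toy model (illustrative).
    def viterbi(obs, states, start_p, trans_p, emit_p):
        # best[t][s]: probability of the most likely state sequence ending in s at time t
        best = [{s: start_p[s] * emit_p[s][obs[0]] for s in states}]
        back = [{}]
        for t in range(1, len(obs)):
            best.append({})
            back.append({})
            for s in states:
                prob, prev = max(((best[t - 1][r] * trans_p[r][s] * emit_p[s][obs[t]], r)
                                  for r in states), key=lambda x: x[0])
                best[t][s], back[t][s] = prob, prev
        last = max(states, key=lambda s: best[-1][s])   # backtrack from the best final state
        path = [last]
        for t in range(len(obs) - 1, 0, -1):
            path.append(back[t][path[-1]])
        return list(reversed(path))

    states = ("Rainy", "Sunny")
    start = {"Rainy": 0.6, "Sunny": 0.4}
    trans = {"Rainy": {"Rainy": 0.7, "Sunny": 0.3}, "Sunny": {"Rainy": 0.4, "Sunny": 0.6}}
    emit = {"Rainy": {"walk": 0.1, "shop": 0.4, "clean": 0.5},
            "Sunny": {"walk": 0.6, "shop": 0.3, "clean": 0.1}}
    print(viterbi(["walk", "shop", "clean"], states, start, trans, emit))
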
Hidden Markov models have various applications in natural language processing. For an introduction see Jurafsky and Martin [2]. [ArrowImpossibilityGS] title = Arrow and Gibbard-Satterthwaite author = Tobias Nipkow date = 2008-09-01 topic = Mathematics/Games and economics abstract = This article formalizes two proofs of Arrow's impossibility theorem due to Geanakoplos and derives the Gibbard-Satterthwaite theorem as a corollary. One formalization is based on utility functions, the other one on strict partial orders.

An article about these proofs is found here. notify = nipkow@in.tum.de [SenSocialChoice] title = Some classical results in Social Choice Theory author = Peter Gammie date = 2008-11-09 topic = Mathematics/Games and economics abstract = Drawing on Sen's landmark work "Collective Choice and Social Welfare" (1970), this development proves Arrow's General Possibility Theorem, Sen's Liberal Paradox and May's Theorem in a general setting. The goal was to make precise the classical statements and proofs of these results, and to provide a foundation for more recent results such as the Gibbard-Satterthwaite and Duggan-Schwartz theorems. notify = nipkow@in.tum.de [Vickrey_Clarke_Groves] title = VCG - Combinatorial Vickrey-Clarke-Groves Auctions author = Marco B. Caminati <>, Manfred Kerber , Christoph Lange, Colin Rowat date = 2015-04-30 topic = Mathematics/Games and economics abstract = A VCG auction (named after their inventors Vickrey, Clarke, and Groves) is a generalization of the single-good, second price Vickrey auction to the case of a combinatorial auction (multiple goods, from which any participant can bid on each possible combination). We formalize in this entry VCG auctions, including tie-breaking and prove that the functions for the allocation and the price determination are well-defined. Furthermore we show that the allocation function allocates goods only to participants, only goods in the auction are allocated, and no good is allocated twice. We also show that the price function is non-negative. These properties also hold for the automatically extracted Scala code. notify = mnfrd.krbr@gmail.com [Topology] title = Topology author = Stefan Friedrich <> date = 2004-04-26 topic = Mathematics/Topology abstract = This entry contains two theories. The first, Topology, develops the basic notions of general topology. The second, which can be viewed as a demonstration of the first, is called LList_Topology. It develops the topology of lazy lists. notify = lcp@cl.cam.ac.uk [Knot_Theory] title = Knot Theory author = T.V.H. Prathamesh date = 2016-01-20 topic = Mathematics/Topology abstract = This work contains a formalization of some topics in knot theory. The concepts that were formalized include definitions of tangles, links, framed links and link/tangle equivalence. The formalization is based on a formulation of links in terms of tangles. We further construct and prove the invariance of the Bracket polynomial. Bracket polynomial is an invariant of framed links closely linked to the Jones polynomial. This is perhaps the first attempt to formalize any aspect of knot theory in an interactive proof assistant. notify = prathamesh@imsc.res.in [Graph_Theory] title = Graph Theory author = Lars Noschinski date = 2013-04-28 topic = Mathematics/Graph theory abstract = This development provides a formalization of directed graphs, supporting (labelled) multi-edges and infinite graphs. A polymorphic edge type allows edges to be treated as pairs of vertices, if multi-edges are not required. Formalized properties are i.a. walks (and related concepts), connectedness and subgraphs and basic properties of isomorphisms.

This formalization is used to prove characterizations of Euler Trails, Shortest Paths and Kuratowski subgraphs. notify = noschinl@gmail.com [Planarity_Certificates] title = Planarity Certificates author = Lars Noschinski date = 2015-11-11 topic = Mathematics/Graph theory abstract = This development provides a formalization of planarity based on combinatorial maps and proves that Kuratowski's theorem implies combinatorial planarity. Moreover, it contains verified implementations of programs checking certificates for planarity (i.e., a combinatorial map) or non-planarity (i.e., a Kuratowski subgraph). notify = noschinl@gmail.com [Max-Card-Matching] title = Maximum Cardinality Matching author = Christine Rizkallah date = 2011-07-21 topic = Mathematics/Graph theory abstract =

A matching in a graph G is a subset M of the edges of G such that no two share an endpoint. A matching has maximum cardinality if its cardinality is at least as large as that of any other matching. An odd-set cover OSC of a graph G is a labeling of the nodes of G with integers such that every edge of G is either incident to a node labeled 1 or connects two nodes labeled with the same number i ≥ 2.

This article proves Edmonds' theorem:
Let M be a matching in a graph G and let OSC be an odd-set cover of G. For any i ≥ 0, let n(i) be the number of nodes labeled i. If |M| = n(1) + ∑_{i ≥ 2} (n(i) div 2), then M is a maximum cardinality matching.
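
Purely as an illustration of the certificate condition (a tiny Python check on a made-up star graph, not part of the entry): a matching whose size meets the bound of some odd-set cover is maximum by the theorem.

    # Odd-set-cover certificate check on a small, hypothetical star graph (illustrative).
    edges = [("c", "a"), ("c", "b"), ("c", "d")]           # star with centre c
    matching = [("c", "a")]                                # candidate matching M
    label = {"c": 1, "a": 0, "b": 0, "d": 0}               # candidate odd-set cover OSC

    # OSC condition: every edge touches a node labeled 1 or joins two nodes
    # carrying the same label i >= 2.
    assert all(label[u] == 1 or label[v] == 1 or
               (label[u] == label[v] and label[u] >= 2) for u, v in edges)

    # M is a matching: no two of its edges share an endpoint.
    ends = [x for e in matching for x in e]
    assert len(ends) == len(set(ends))

    def n(i):
        return sum(1 for v in label.values() if v == i)

    bound = n(1) + sum(n(i) // 2 for i in range(2, max(label.values()) + 1))
    assert len(matching) == bound                          # hence M has maximum cardinality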

notify = nipkow@in.tum.de [Girth_Chromatic] title = A Probabilistic Proof of the Girth-Chromatic Number Theorem author = Lars Noschinski date = 2012-02-06 topic = Mathematics/Graph theory abstract = This work presents a formalization of the Girth-Chromatic number theorem in graph theory, stating that graphs with arbitrarily large girth and chromatic number exist. The proof uses the theory of Random Graphs to prove the existence with probabilistic arguments. notify = noschinl@gmail.com [Random_Graph_Subgraph_Threshold] title = Properties of Random Graphs -- Subgraph Containment author = Lars Hupel date = 2014-02-13 topic = Mathematics/Graph theory, Mathematics/Probability theory abstract = Random graphs are graphs with a fixed number of vertices, where each edge is present with a fixed probability. We are interested in the probability that a random graph contains a certain pattern, for example a cycle or a clique. A very high edge probability gives rise to perhaps too many edges (which degrades performance for many algorithms), whereas a low edge probability might result in a disconnected graph. We prove a theorem about a threshold probability such that a higher edge probability will asymptotically almost surely produce a random graph with the desired subgraph. notify = hupel@in.tum.de [Flyspeck-Tame] title = Flyspeck I: Tame Graphs author = Gertrud Bauer <>, Tobias Nipkow date = 2006-05-22 topic = Mathematics/Graph theory abstract = These theories present the verified enumeration of tame plane graphs as defined by Thomas C. Hales in his proof of the Kepler Conjecture in his book Dense Sphere Packings. A Blueprint for Formal Proofs. [CUP 2012]. The values of the constants in the definition of tameness are identical to those in the Flyspeck project. The IJCAR 2006 paper by Nipkow, Bauer and Schultz refers to the original version of Hales' proof, the ITP 2011 paper by Nipkow refers to the Blueprint version of the proof. extra-history = Change history: [2010-11-02]: modified theories to reflect the modified definition of tameness in Hales' revised proof.
[2014-07-03]: modified constants in def of tameness and Archive according to the final state of the Flyspeck proof. notify = nipkow@in.tum.de [Well_Quasi_Orders] title = Well-Quasi-Orders author = Christian Sternagel date = 2012-04-13 topic = Mathematics/Combinatorics abstract = Based on Isabelle/HOL's type class for preorders, we introduce a type class for well-quasi-orders (wqo) which is characterized by the absence of "bad" sequences (our proofs are along the lines of the proof of Nash-Williams, from which we also borrow terminology). Our main results are instantiations for the product type, the list type, and a type of finite trees, which (almost) directly follow from our proofs of (1) Dickson's Lemma, (2) Higman's Lemma, and (3) Kruskal's Tree Theorem. More concretely:
  • If the sets A and B are wqo then their Cartesian product is wqo.
  • If the set A is wqo then the set of finite lists over A is wqo.
  • If the set A is wqo then the set of finite trees over A is wqo.
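For the list case above (Higman's Lemma), the ordering on lists is the embedding order: xs embeds into ys if the elements of xs can be matched, in order, to pointwise larger-or-equal elements of ys. A small illustrative Python predicate, not taken from the entry:

    # Higman embedding of lists (illustrative sketch).
    def embeds(xs, ys, leq=lambda a, b: a <= b):
        if not xs:
            return True
        if not ys:
            return False
        return (leq(xs[0], ys[0]) and embeds(xs[1:], ys[1:], leq)) or embeds(xs, ys[1:], leq)

    assert embeds([1, 3], [0, 1, 5, 2, 4])   # match 1 -> 1 and 3 -> 5
    assert not embeds([2, 2], [1, 2, 1])     # only one element of ys is >= 2
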
The research was funded by the Austrian Science Fund (FWF): J3202. extra-history = Change history: [2012-06-11]: Added Kruskal's Tree Theorem.
[2012-12-19]: New variant of Kruskal's tree theorem for terms (as opposed to variadic terms, i.e., trees), plus finite version of the tree theorem as corollary.
[2013-05-16]: Simplified construction of minimal bad sequences.
[2014-07-09]: Simplified proofs of Higman's lemma and Kruskal's tree theorem, based on homogeneous sequences.
[2016-01-03]: An alternative proof of Higman's lemma by open induction.
[2017-06-08]: Proved (classical) equivalence to inductive definition of almost-full relations according to the ITP 2012 paper "Stop When You Are Almost-Full" by Vytiniotis, Coquand, and Wahlstedt. notify = c.sternagel@gmail.com [Marriage] title = Hall's Marriage Theorem author = Dongchen Jiang , Tobias Nipkow date = 2010-12-17 topic = Mathematics/Combinatorics abstract = Two proofs of Hall's Marriage Theorem: one due to Halmos and Vaughan, one due to Rado. extra-history = Change history: [2011-09-09]: Added Rado's proof notify = nipkow@in.tum.de [Bondy] title = Bondy's Theorem author = Jeremy Avigad , Stefan Hetzl date = 2012-10-27 topic = Mathematics/Combinatorics abstract = A proof of Bondy's theorem following B. Bollabas, Combinatorics, 1986, Cambridge University Press. notify = avigad@cmu.edu, hetzl@logic.at [Ramsey-Infinite] title = Ramsey's theorem, infinitary version author = Tom Ridge <> date = 2004-09-20 topic = Mathematics/Combinatorics abstract = This formalization of Ramsey's theorem (infinitary version) is taken from Boolos and Jeffrey, Computability and Logic, 3rd edition, Chapter 26. It differs slightly from the text by assuming a slightly stronger hypothesis. In particular, the induction hypothesis is stronger, holding for any infinite subset of the naturals. This avoids the rather peculiar mapping argument between kj and aikj on p.263, which is unnecessary and slightly mars this really beautiful result. notify = lp15@cam.ac.uk [Derangements] title = Derangements Formula author = Lukas Bulwahn date = 2015-06-27 topic = Mathematics/Combinatorics abstract = The Derangements Formula describes the number of fixpoint-free permutations as a closed formula. This theorem is the 88th theorem in a list of the ``Top 100 Mathematical Theorems''. notify = lukas.bulwahn@gmail.com [Euler_Partition] title = Euler's Partition Theorem author = Lukas Bulwahn date = 2015-11-19 topic = Mathematics/Combinatorics abstract = Euler's Partition Theorem states that the number of partitions with only distinct parts is equal to the number of partitions with only odd parts. The combinatorial proof follows John Harrison's HOL Light formalization. This theorem is the 45th theorem of the Top 100 Theorems list. notify = lukas.bulwahn@gmail.com [Discrete_Summation] title = Discrete Summation author = Florian Haftmann contributors = Amine Chaieb <> date = 2014-04-13 topic = Mathematics/Combinatorics abstract = These theories introduce basic concepts and proofs about discrete summation: shifts, formal summation, falling factorials and stirling numbers. As proof of concept, a simple summation conversion is provided. notify = florian.haftmann@informatik.tu-muenchen.de [Open_Induction] title = Open Induction author = Mizuhito Ogawa <>, Christian Sternagel date = 2012-11-02 topic = Mathematics/Combinatorics abstract = A proof of the open induction schema based on J.-C. Raoult, Proving open properties by induction, Information Processing Letters 29, 1988, pp.19-23.

This research was supported by the Austrian Science Fund (FWF): J3202.

notify = c.sternagel@gmail.com [Category] title = Category Theory to Yoneda's Lemma author = Greg O'Keefe date = 2005-04-21 topic = Mathematics/Category theory license = LGPL abstract = This development proves Yoneda's lemma and aims to be readable by humans. It only defines what is needed for the lemma: categories, functors and natural transformations. Limits, adjunctions and other important concepts are not included. extra-history = Change history: [2010-04-23]: The definition of the constant equinumerous was slightly too weak in the original submission and has been fixed in revision 8c2b5b3c995f. notify = lcp@cl.cam.ac.uk [Category2] title = Category Theory author = Alexander Katovsky date = 2010-06-20 topic = Mathematics/Category theory abstract = This article presents a development of Category Theory in Isabelle/HOL. A Category is defined using records and locales. Functors and Natural Transformations are also defined. The main result that has been formalized is that the Yoneda functor is a full and faithful embedding. We also formalize the completeness of many sorted monadic equational logic. Extensive use is made of the HOLZF theory in both cases. For an informal description see here [pdf]. notify = alexander.katovsky@cantab.net [FunWithFunctions] title = Fun With Functions author = Tobias Nipkow date = 2008-08-26 topic = Mathematics/Misc abstract = This is a collection of cute puzzles of the form ``Show that if a function satisfies the following constraints, it must be ...'' Please add further examples to this collection! notify = nipkow@in.tum.de [FunWithTilings] title = Fun With Tilings author = Tobias Nipkow , Lawrence C. Paulson date = 2008-11-07 topic = Mathematics/Misc abstract = Tilings are defined inductively. It is shown that one form of mutilated chess board cannot be tiled with dominoes, while another one can be tiled with L-shaped tiles. Please add further fun examples of this kind! notify = nipkow@in.tum.de [Lazy-Lists-II] title = Lazy Lists II author = Stefan Friedrich <> date = 2004-04-26 topic = Computer science/Data structures abstract = This theory contains some useful extensions to the LList (lazy list) theory by Larry Paulson, including finite, infinite, and positive llists over an alphabet, as well as the new constants take and drop and the prefix order of llists. Finally, the notions of safety and liveness in the sense of Alpern and Schneider (1985) are defined. notify = lcp@cl.cam.ac.uk [Ribbon_Proofs] title = Ribbon Proofs author = John Wickerson <> date = 2013-01-19 topic = Computer science/Programming languages/Logics abstract = This document concerns the theory of ribbon proofs: a diagrammatic proof system, based on separation logic, for verifying program correctness. We include the syntax, proof rules, and soundness results for two alternative formalisations of ribbon proofs.

Compared to traditional proof outlines, ribbon proofs emphasise the structure of a proof, so are intelligible and pedagogical. Because they contain less redundancy than proof outlines, and allow each proof step to be checked locally, they may be more scalable. Where proof outlines are cumbersome to modify, ribbon proofs can be visually manoeuvred to yield proofs of variant programs. notify = [Koenigsberg_Friendship] title = The Königsberg Bridge Problem and the Friendship Theorem author = Wenda Li date = 2013-07-19 topic = Mathematics/Graph theory abstract = This development provides a formalization of undirected graphs and simple graphs, which are based on Benedikt Nordhoff and Peter Lammich's simple formalization of labelled directed graphs in the archive. Then, with our formalization of graphs, we show both necessary and sufficient conditions for Eulerian trails and circuits as well as the fact that the Königsberg Bridge Problem does not have a solution. In addition, we show the Friendship Theorem in simple graphs. notify = [Tree_Decomposition] title = Tree Decomposition author = Christoph Dittmann notify = date = 2016-05-31 topic = Mathematics/Graph theory abstract = We formalize tree decompositions and tree width in Isabelle/HOL, proving that trees have treewidth 1. We also show that every edge of a tree decomposition is a separation of the underlying graph. As an application of this theorem we prove that complete graphs of size n have treewidth n-1. [Menger] title = Menger's Theorem author = Christoph Dittmann topic = Mathematics/Graph theory date = 2017-02-26 notify = isabelle@christoph-d.de abstract = We present a formalization of Menger's Theorem for directed and undirected graphs in Isabelle/HOL. This well-known result shows that if two non-adjacent distinct vertices u, v in a directed graph have no separator smaller than n, then there exist n internally vertex-disjoint paths from u to v. The version for undirected graphs follows immediately because undirected graphs are a special case of directed graphs. [IEEE_Floating_Point] title = A Formal Model of IEEE Floating Point Arithmetic author = Lei Yu contributors = Fabian Hellauer , Fabian Immler date = 2013-07-27 topic = Computer science/Data structures abstract = This development provides a formal model of IEEE-754 floating-point arithmetic. This formalization, including formal specification of the standard and proofs of important properties of floating-point arithmetic, forms the foundation for verifying programs with floating-point computation. There is also a code generation setup for floats so that we can execute programs using this formalization in functional programming languages. notify = lp15@cam.ac.uk, immler@in.tum.de extra-history = Change history: [2017-09-25]: Added conversions from and to software floating point numbers (by Fabian Hellauer and Fabian Immler).
[2018-02-05]: 'Modernized' representation following the formalization in HOL4: former "float_format" and predicate "is_valid" is now encoded in a type "('e, 'f) float" where 'e and 'f encode the size of exponent and fraction. [Native_Word] title = Native Word author = Andreas Lochbihler contributors = Peter Lammich date = 2013-09-17 topic = Computer science/Data structures abstract = This entry makes machine words and machine arithmetic available for code generation from Isabelle/HOL. It provides a common abstraction that hides the differences between the different target languages. The code generator maps these operations to the APIs of the target languages. Apart from that, we extend the available bit operations on types int and integer, and map them to the operations in the target languages. extra-history = Change history: [2013-11-06]: added conversion function between native words and characters (revision fd23d9a7fe3a)
[2014-03-31]: added words of default size in the target language (by Peter Lammich) (revision 25caf5065833)
[2014-10-06]: proper test setup with compilation and execution of tests in all target languages (revision 5d7a1c9ae047)
[2017-09-02]: added 64-bit words (revision c89f86244e3c)
[2018-07-15]: added cast operators for default-size words (revision fc1f1fb8dd30)
notify = mail@andreas-lochbihler.de [XML] title = XML author = Christian Sternagel , René Thiemann date = 2014-10-03 topic = Computer science/Functional programming, Computer science/Data structures abstract = This entry provides an XML library for Isabelle/HOL. This includes parsing and pretty printing of XML trees as well as combinators for transforming XML trees into arbitrary user-defined data. The main contribution of this entry is an interface (fit for code generation) that allows for communication between verified programs formalized in Isabelle/HOL and the outside world via XML. This library was developed as part of the IsaFoR/CeTA project to which we refer for examples of its usage. notify = c.sternagel@gmail.com, rene.thiemann@uibk.ac.at [HereditarilyFinite] title = The Hereditarily Finite Sets author = Lawrence C. Paulson date = 2013-11-17 topic = Logic/Set theory abstract = The theory of hereditarily finite sets is formalised, following the development of Swierczkowski. An HF set is a finite collection of other HF sets; they enjoy an induction principle and satisfy all the axioms of ZF set theory apart from the axiom of infinity, which is negated. All constructions that are possible in ZF set theory (Cartesian products, disjoint sums, natural numbers, functions) without using infinite sets are possible here. The definition of addition for the HF sets follows Kirby. This development forms the foundation for the Isabelle proof of Gödel's incompleteness theorems, which has been formalised separately. extra-history = Change history: [2015-02-23]: Added the theory "Finitary" defining the class of types that can be embedded in hf, including int, char, option, list, etc. notify = lp15@cam.ac.uk [Incompleteness] title = Gödel's Incompleteness Theorems author = Lawrence C. Paulson date = 2013-11-17 topic = Logic/Proof theory abstract = Gödel's two incompleteness theorems are formalised, following a careful presentation by Swierczkowski, in the theory of hereditarily finite sets. This represents the first ever machine-assisted proof of the second incompleteness theorem. Compared with traditional formalisations using Peano arithmetic (see e.g. Boolos), coding is simpler, with no need to formalise the notion of multiplication (let alone that of a prime number) in the formalised calculus upon which the theorem is based. However, other technical problems had to be solved in order to complete the argument. notify = lp15@cam.ac.uk [Finite_Automata_HF] title = Finite Automata in Hereditarily Finite Set Theory author = Lawrence C. Paulson date = 2015-02-05 topic = Computer science/Automata and formal languages abstract = Finite Automata, both deterministic and non-deterministic, for regular languages. The Myhill-Nerode Theorem. Closure under intersection, concatenation, etc. Regular expressions define regular languages. Closure under reversal; the powerset construction mapping NFAs to DFAs. Left and right languages; minimal DFAs. Brzozowski's minimization algorithm. Uniqueness up to isomorphism of minimal DFAs. notify = lp15@cam.ac.uk [Decreasing-Diagrams] title = Decreasing Diagrams author = Harald Zankl license = LGPL date = 2013-11-01 topic = Logic/Rewriting abstract = This theory contains a formalization of decreasing diagrams showing that any locally decreasing abstract rewrite system is confluent. We consider the valley (van Oostrom, TCS 1994) and the conversion version (van Oostrom, RTA 2008) and closely follow the original proofs. As an application we prove Newman's lemma. 
notify = Harald.Zankl@uibk.ac.at [Decreasing-Diagrams-II] title = Decreasing Diagrams II author = Bertram Felgenhauer license = LGPL date = 2015-08-20 topic = Logic/Rewriting abstract = This theory formalizes the commutation version of decreasing diagrams for Church-Rosser modulo. The proof follows Felgenhauer and van Oostrom (RTA 2013). The theory also provides important specializations, in particular van Oostrom’s conversion version (TCS 2008) of decreasing diagrams. notify = bertram.felgenhauer@uibk.ac.at [GoedelGod] title = Gödel's God in Isabelle/HOL author = Christoph Benzmüller , Bruno Woltzenlogel Paleo date = 2013-11-12 topic = Logic/Philosophical aspects abstract = Dana Scott's version of Gödel's proof of God's existence is formalized in quantified modal logic KB (QML KB). QML KB is modeled as a fragment of classical higher-order logic (HOL); thus, the formalization is essentially a formalization in HOL. notify = lp15@cam.ac.uk, c.benzmueller@fu-berlin.de [Types_Tableaus_and_Goedels_God] title = Types, Tableaus and Gödel’s God in Isabelle/HOL author = David Fuenmayor , Christoph Benzmüller topic = Logic/Philosophical aspects date = 2017-05-01 notify = davfuenmayor@gmail.com, c.benzmueller@gmail.com abstract = A computer-formalisation of the essential parts of Fitting's textbook "Types, Tableaus and Gödel's God" in Isabelle/HOL is presented. In particular, Fitting's (and Anderson's) variant of the ontological argument is verified and confirmed. This variant avoids the modal collapse, which has been criticised as an undesirable side-effect of Kurt Gödel's (and Dana Scott's) versions of the ontological argument. Fitting's work is employing an intensional higher-order modal logic, which we shallowly embed here in classical higher-order logic. We then utilize the embedded logic for the formalisation of Fitting's argument. (See also the earlier AFP entry ``Gödel's God in Isabelle/HOL''.) [GewirthPGCProof] title = Formalisation and Evaluation of Alan Gewirth's Proof for the Principle of Generic Consistency in Isabelle/HOL author = David Fuenmayor , Christoph Benzmüller topic = Logic/Philosophical aspects date = 2018-10-30 notify = davfuenmayor@gmail.com, c.benzmueller@gmail.com abstract = An ambitious ethical theory ---Alan Gewirth's "Principle of Generic Consistency"--- is encoded and analysed in Isabelle/HOL. Gewirth's theory has stirred much attention in philosophy and ethics and has been proposed as a potential means to bound the impact of artificial general intelligence. extra-history = Change history: [2019-04-09]: added proof for a stronger variant of the PGC and examplary inferences (revision 88182cb0a2f6)
[Lowe_Ontological_Argument] title = Computer-assisted Reconstruction and Assessment of E. J. Lowe's Modal Ontological Argument author = David Fuenmayor , Christoph Benzmüller topic = Logic/Philosophical aspects date = 2017-09-21 notify = davfuenmayor@gmail.com, c.benzmueller@gmail.com abstract = Computers may help us to understand --not just verify-- philosophical arguments. By utilizing modern proof assistants in an iterative interpretive process, we can reconstruct and assess an argument by fully formal means. Through the mechanization of a variant of St. Anselm's ontological argument by E. J. Lowe, which is a paradigmatic example of a natural-language argument with strong ties to metaphysics and religion, we offer an ideal showcase for our computer-assisted interpretive method. [AnselmGod] title = Anselm's God in Isabelle/HOL author = Ben Blumson topic = Logic/Philosophical aspects date = 2017-09-06 notify = benblumson@gmail.com abstract = Paul Oppenheimer and Edward Zalta's formalisation of Anselm's ontological argument for the existence of God is automated by embedding a free logic for definite descriptions within Isabelle/HOL. [Tail_Recursive_Functions] title = A General Method for the Proof of Theorems on Tail-recursive Functions author = Pasquale Noce date = 2013-12-01 topic = Computer science/Functional programming abstract =

Tail-recursive function definitions are sometimes more straightforward than alternatives, but proving theorems on them may be roundabout because of the peculiar form of the resulting recursion induction rules.

This paper describes a proof method that provides a general solution to this problem by means of suitable invariants over inductive sets, and illustrates the application of this method by examining two case studies.

notify = pasquale.noce.lavoro@gmail.com [CryptoBasedCompositionalProperties] title = Compositional Properties of Crypto-Based Components author = Maria Spichkova date = 2014-01-11 topic = Computer science/Security abstract = This paper presents an Isabelle/HOL set of theories which allows the specification of crypto-based components and the verification of their composition properties wrt. cryptographic aspects. We introduce a formalisation of the security property of data secrecy, the corresponding definitions and proofs. Please note that here we import the Isabelle/HOL theory ListExtras.thy, presented in the AFP entry FocusStreamsCaseStudies-AFP. notify = maria.spichkova@rmit.edu.au [Featherweight_OCL] title = Featherweight OCL: A Proposal for a Machine-Checked Formal Semantics for OCL 2.5 author = Achim D. Brucker , Frédéric Tuong , Burkhart Wolff date = 2014-01-16 topic = Computer science/System description languages abstract = The Unified Modeling Language (UML) is one of the few modeling languages that is widely used in industry. While UML is mostly known as diagrammatic modeling language (e.g., visualizing class models), it is complemented by a textual language, called Object Constraint Language (OCL). The current version of OCL is based on a four-valued logic that turns UML into a formal language. Any type comprises the elements "invalid" and "null" which are propagated as strict and non-strict, respectively. Unfortunately, the former semi-formal semantics of this specification language, captured in the "Annex A" of the OCL standard, leads to different interpretations of corner cases. We formalize the core of OCL: denotational definitions, a logical calculus and operational rules that allow for the execution of OCL expressions by a mixture of term rewriting and code compilation. Our formalization reveals several inconsistencies and contradictions in the current version of the OCL standard. Overall, this document is intended to provide the basis for a machine-checked text "Annex A" of the OCL standard targeting at tool implementors. extra-history = Change history: [2015-10-13]: afp-devel@ea3b38fc54d6 and hol-testgen@12148
   Update of Featherweight OCL including a change in the abstract.
[2014-01-16]: afp-devel@9091ce05cb20 and hol-testgen@10241
   New Entry: Featherweight OCL notify = brucker@spamfence.net, tuong@users.gforge.inria.fr, wolff@lri.fr [Relation_Algebra] title = Relation Algebra author = Alasdair Armstrong <>, Simon Foster , Georg Struth , Tjark Weber date = 2014-01-25 topic = Mathematics/Algebra abstract = Tarski's algebra of binary relations is formalised along the lines of the standard textbooks of Maddux and Schmidt and Ströhlein. This includes relation-algebraic concepts such as subidentities, vectors and a domain operation as well as various notions associated to functions. Relation algebras are also expanded by a reflexive transitive closure operation, and they are linked with Kleene algebras and models of binary relations and Boolean matrices. notify = g.struth@sheffield.ac.uk, tjark.weber@it.uu.se [PSemigroupsConvolution] title = Partial Semigroups and Convolution Algebras author = Brijesh Dongol , Victor B. F. Gomes , Ian J. Hayes , Georg Struth topic = Mathematics/Algebra date = 2017-06-13 notify = g.struth@sheffield.ac.uk, victor.gomes@cl.cam.ac.uk abstract = Partial Semigroups are relevant to the foundations of quantum mechanics and combinatorics as well as to interval and separation logics. Convolution algebras can be understood either as algebras of generalised binary modalities over ternary Kripke frames, in particular over partial semigroups, or as algebras of quantale-valued functions which are equipped with a convolution-style operation of multiplication that is parametrised by a ternary relation. Convolution algebras provide algebraic semantics for various substructural logics, including categorial, relevance and linear logics, for separation logic and for interval logics; they cover quantitative and qualitative applications. These mathematical components for partial semigroups and convolution algebras provide uniform foundations from which models of computation based on relations, program traces or pomsets, and verification components for separation or interval temporal logics can be built with little effort. [Secondary_Sylow] title = Secondary Sylow Theorems author = Jakob von Raumer date = 2014-01-28 topic = Mathematics/Algebra abstract = These theories extend the existing proof of the first Sylow theorem (written by Florian Kammueller and L. C. Paulson) by what are often called the second, third and fourth Sylow theorems. These theorems state propositions about the number of Sylow p-subgroups of a group and the fact that they are conjugate to each other. The proofs make use of an implementation of group actions and their properties. notify = psxjv4@nottingham.ac.uk [Jordan_Hoelder] title = The Jordan-Hölder Theorem author = Jakob von Raumer date = 2014-09-09 topic = Mathematics/Algebra abstract = This submission contains theories that lead to a formalization of the proof of the Jordan-Hölder theorem about composition series of finite groups. The theories formalize the notions of isomorphism classes of groups, simple groups, normal series, composition series, maximal normal subgroups. Furthermore, they provide proofs of the second isomorphism theorem for groups, the characterization theorem for maximal normal subgroups as well as many useful lemmas about normal subgroups and factor groups. The proof is inspired by course notes of Stuart Rankin. 
notify = psxjv4@nottingham.ac.uk [Cayley_Hamilton] title = The Cayley-Hamilton Theorem author = Stephan Adelsberger , Stefan Hetzl , Florian Pollak date = 2014-09-15 topic = Mathematics/Algebra abstract = This document contains a proof of the Cayley-Hamilton theorem based on the development of matrices in HOL/Multivariate Analysis. notify = stvienna@gmail.com [Probabilistic_Noninterference] title = Probabilistic Noninterference author = Andrei Popescu , Johannes Hölzl date = 2014-03-11 topic = Computer science/Security abstract = We formalize probabilistic noninterference for a multi-threaded language with uniform scheduling, where probabilistic behaviour comes from both the scheduler and the individual threads. We define notions of probabilistic noninterference in two variants: resumption-based and trace-based. For the resumption-based notions, we prove compositionality w.r.t. the language constructs and establish sound type-system-like syntactic criteria. This is a formalization of the mathematical development presented at CPP 2013 and CALCO 2013. It is the probabilistic variant of the Possibilistic Noninterference AFP entry. notify = hoelzl@in.tum.de [HyperCTL] title = A shallow embedding of HyperCTL* author = Markus N. Rabe , Peter Lammich , Andrei Popescu date = 2014-04-16 topic = Computer science/Security, Logic/General logic/Temporal logic abstract = We formalize HyperCTL*, a temporal logic for expressing security properties. We first define a shallow embedding of HyperCTL*, within which we prove inductive and coinductive rules for the operators. Then we show that a HyperCTL* formula captures Goguen-Meseguer noninterference, a landmark information flow property. We also define a deep embedding and connect it to the shallow embedding by a denotational semantics, for which we prove sanity w.r.t. dependence on the free variables. Finally, we show that under some finiteness assumptions about the model, noninterference is given by a (finitary) syntactic formula. notify = uuomul@yahoo.com [Bounded_Deducibility_Security] title = Bounded-Deducibility Security author = Andrei Popescu , Peter Lammich date = 2014-04-22 topic = Computer science/Security abstract = This is a formalization of bounded-deducibility security (BD security), a flexible notion of information-flow security applicable to arbitrary input-output automata. It generalizes Sutherland's classic notion of nondeducibility by factoring in declassification bounds and triggers: whereas nondeducibility states that, in a system, information cannot flow between specified sources and sinks, BD security indicates upper bounds for the flow, and triggers under which these upper bounds are no longer guaranteed. notify = uuomul@yahoo.com, lammich@in.tum.de [Network_Security_Policy_Verification] title = Network Security Policy Verification author = Cornelius Diekmann date = 2014-07-04 topic = Computer science/Security abstract = We present a unified theory for verifying network security policies. A security policy is represented as a directed graph. To check high-level security goals, security invariants over the policy are expressed. We cover monotonic security invariants, i.e. prohibiting more does not harm security. We provide the following contributions for the security invariant theory.
  • Secure auto-completion of scenario-specific knowledge, which eases usability.
  • Security violations can be repaired by tightening the policy iff the security invariants hold for the deny-all policy.
  • An algorithm to compute a security policy.
  • A formalization of stateful connection semantics in network security mechanisms.
  • An algorithm to compute a secure stateful implementation of a policy.
  • An executable implementation of all the theory.
  • Examples, ranging from an aircraft cabin data network to the analysis of a large real-world firewall.
  • More examples: A fully automated translation of high-level security goals to both firewall and SDN configurations (see Examples/Distributed_WebApp.thy).
For a detailed description, see extra-history = Change history: [2015-04-14]: Added Distributed WebApp example and improved graphviz visualization (revision 4dde08ca2ab8)
notify = diekmann@net.in.tum.de [Abstract_Completeness] title = Abstract Completeness author = Jasmin Christian Blanchette , Andrei Popescu , Dmitriy Traytel date = 2014-04-16 topic = Logic/Proof theory abstract = A formalization of an abstract property of possibly infinite derivation trees (modeled by a codatatype), representing the core of a proof (in Beth/Hintikka style) of the first-order logic completeness theorem, independent of the concrete syntax or inference rules. This work is described in detail in the IJCAR 2014 publication by the authors. The abstract proof can be instantiated for a wide range of Gentzen and tableau systems as well as various flavors of FOL---e.g., with or without predicates, equality, or sorts. Here, we give only a toy example instantiation with classical propositional logic. A more serious instance---many-sorted FOL with equality---is described elsewhere [Blanchette and Popescu, FroCoS 2013]. notify = traytel@in.tum.de [Pop_Refinement] title = Pop-Refinement author = Alessandro Coglio date = 2014-07-03 topic = Computer science/Programming languages/Misc abstract = Pop-refinement is an approach to stepwise refinement, carried out inside an interactive theorem prover by constructing a monotonically decreasing sequence of predicates over deeply embedded target programs. The sequence starts with a predicate that characterizes the possible implementations, and ends with a predicate that characterizes a unique program in explicit syntactic form. Pop-refinement enables more requirements (e.g. program-level and non-functional) to be captured in the initial specification and preserved through refinement. Security requirements expressed as hyperproperties (i.e. predicates over sets of traces) are always preserved by pop-refinement, unlike the popular notion of refinement as trace set inclusion. Two simple examples in Isabelle/HOL are presented, featuring program-level requirements, non-functional requirements, and hyperproperties. notify = coglio@kestrel.edu [VectorSpace] title = Vector Spaces author = Holden Lee date = 2014-08-29 topic = Mathematics/Algebra abstract = This formalisation of basic linear algebra is based completely on locales, building off HOL-Algebra. It includes basic definitions: linear combinations, span, linear independence; linear transformations; interpretation of function spaces as vector spaces; the direct sum of vector spaces, sum of subspaces; the replacement theorem; existence of bases in finite-dimensional; vector spaces, definition of dimension; the rank-nullity theorem. Some concepts are actually defined and proved for modules as they also apply there. Infinite-dimensional vector spaces are supported, but dimension is only supported for finite-dimensional vector spaces. The proofs are standard; the proofs of the replacement theorem and rank-nullity theorem roughly follow the presentation in Linear Algebra by Friedberg, Insel, and Spence. The rank-nullity theorem generalises the existing development in the Archive of Formal Proof (originally using type classes, now using a mix of type classes and locales). notify = holdenl@princeton.edu [Special_Function_Bounds] title = Real-Valued Special Functions: Upper and Lower Bounds author = Lawrence C. Paulson date = 2014-08-29 topic = Mathematics/Analysis abstract = This development proves upper and lower bounds for several familiar real-valued functions. For sin, cos, exp and sqrt, it defines and verifies infinite families of upper and lower bounds, mostly based on Taylor series expansions. 
For arctan, ln and exp, it verifies a finite collection of upper and lower bounds, originally obtained from the functions' continued fraction expansions using the computer algebra system Maple. A common theme in these proofs is to take the difference between a function and its approximation, which should be zero at one point, and then consider the sign of the derivative. The immediate purpose of this development is to verify axioms used by MetiTarski, an automatic theorem prover for real-valued special functions. Crucial to MetiTarski's operation is the provision of upper and lower bounds for each function of interest. notify = lp15@cam.ac.uk [Landau_Symbols] title = Landau Symbols author = Manuel Eberl date = 2015-07-14 topic = Mathematics/Analysis abstract = This entry provides Landau symbols to describe and reason about the asymptotic growth of functions for sufficiently large inputs. A number of simplification procedures are provided for additional convenience: cancelling of dominated terms in sums under a Landau symbol, cancelling of common factors in products, and a decision procedure for Landau expressions containing products of powers of functions like x, ln(x), ln(ln(x)) etc. notify = eberlm@in.tum.de [Error_Function] title = The Error Function author = Manuel Eberl topic = Mathematics/Analysis date = 2018-02-06 notify = eberlm@in.tum.de abstract =

This entry provides the definitions and basic properties of the complex and real error function erf and the complementary error function erfc. Additionally, it gives their full asymptotic expansions.
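For reference, the functions in question are the classical ones (standard notation, recalled here only for orientation):

  $\mathrm{erf}(z) = \frac{2}{\sqrt{\pi}} \int_0^z e^{-t^2}\,\mathrm{d}t, \qquad \mathrm{erfc}(z) = 1 - \mathrm{erf}(z)$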

[Akra_Bazzi] title = The Akra-Bazzi theorem and the Master theorem author = Manuel Eberl date = 2015-07-14 topic = Mathematics/Analysis abstract = This article contains a formalisation of the Akra-Bazzi method based on a proof by Leighton. It is a generalisation of the well-known Master Theorem for analysing the complexity of Divide & Conquer algorithms. We also include a generalised version of the Master theorem based on the Akra-Bazzi theorem, which is easier to apply than the Akra-Bazzi theorem itself.
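In its classical textbook form (recalled here only as background; the generalised statement in this entry may differ in its precise side conditions), the Master theorem concerns divide-and-conquer recurrences of the shape

  $T(n) = a\,T(n/b) + f(n), \qquad a \ge 1,\ b > 1.$

Roughly speaking, if f(n) grows like $n^c$, then $T(n) = \Theta(n^{\log_b a})$ when $c < \log_b a$, $T(n) = \Theta(n^c \log n)$ when $c = \log_b a$, and $T(n) = \Theta(n^c)$ when $c > \log_b a$.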

Some proof methods that facilitate applying the Master theorem are also included. For a more detailed explanation of the formalisation and the proof methods, see the accompanying paper (publication forthcoming). notify = eberlm@in.tum.de [Dirichlet_Series] title = Dirichlet Series author = Manuel Eberl topic = Mathematics/Number theory date = 2017-10-12 notify = eberlm@in.tum.de abstract = This entry is a formalisation of much of Chapters 2, 3, and 11 of Apostol's “Introduction to Analytic Number Theory”. This includes:

  • Definitions and basic properties for several number-theoretic functions (Euler's φ, Möbius μ, Liouville's λ, the divisor function σ, von Mangoldt's Λ)
  • Executable code for most of these functions, with the most efficient implementations using the factoring algorithm by Thiemann et al.
  • Dirichlet products and formal Dirichlet series
  • Analytic results connecting convergent formal Dirichlet series to complex functions
  • Euler product expansions
  • Asymptotic estimates of number-theoretic functions including the density of squarefree integers and the average number of divisors of a natural number
These results are useful as a basis for developing more number-theoretic results, such as the Prime Number Theorem. [Gauss_Sums] title = Gauss Sums and the Pólya–Vinogradov Inequality author = Rodrigo Raya , Manuel Eberl topic = Mathematics/Number theory date = 2019-12-10 notify = manuel.eberl@tum.de abstract =

This article provides a full formalisation of Chapter 8 of Apostol's Introduction to Analytic Number Theory. Subjects that are covered are:

  • periodic arithmetic functions and their finite Fourier series
  • (generalised) Ramanujan sums
  • Gauss sums and separable characters
  • induced moduli and primitive characters
  • the Pólya–Vinogradov inequality
[Zeta_Function] title = The Hurwitz and Riemann ζ Functions author = Manuel Eberl topic = Mathematics/Number theory, Mathematics/Analysis date = 2017-10-12 notify = eberlm@in.tum.de abstract =

This entry builds upon the results about formal and analytic Dirichlet series to define the Hurwitz ζ function ζ(a,s) and, based on that, the Riemann ζ function ζ(s). This is done by first defining them for ℜ(s) > 1 and then successively extending the domain to the left using the Euler–MacLaurin formula.
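Concretely, on the half-plane of convergence the two functions are given by the familiar series (standard definitions, recalled for orientation):

  $\zeta(a,s) = \sum_{n=0}^{\infty} \frac{1}{(n+a)^{s}} \quad (\Re(s) > 1), \qquad \zeta(s) = \zeta(1,s) = \sum_{n=1}^{\infty} \frac{1}{n^{s}}$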

Apart from the most basic facts such as analyticity, the following results are provided:

  • the Stieltjes constants and the Laurent expansion of ζ(s) at s = 1
  • the non-vanishing of ζ(s) for ℜ(s) ≥ 1
  • the relationship between ζ(a,s) and Γ
  • the special values at negative integers and positive even integers
  • Hurwitz's formula and the reflection formula for ζ(s)
  • the Hadjicostas–Chapman formula

The entry also contains Euler's analytic proof of the infinitude of primes, based on the fact that ζ(s) has a pole at s = 1.
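The identity behind that argument is the classical Euler product, valid for ℜ(s) > 1; the divergence of ζ(s) as s → 1 then forces the product over primes to be infinite:

  $\zeta(s) = \sum_{n=1}^{\infty} n^{-s} = \prod_{p\ \mathrm{prime}} \frac{1}{1 - p^{-s}}$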

[Linear_Recurrences] title = Linear Recurrences author = Manuel Eberl topic = Mathematics/Analysis date = 2017-10-12 notify = eberlm@in.tum.de abstract =

Linear recurrences with constant coefficients are an interesting class of recurrence equations that can be solved explicitly. The most famous example is certainly that of the Fibonacci numbers, with the defining equation f(n) = f(n-1) + f(n-2) and the quite non-obvious closed form f(n) = (φ^n − (−φ)^(−n)) / √5, where φ is the golden ratio.

In this work, I build on existing tools in Isabelle – such as formal power series and polynomial factorisation algorithms – to develop a theory of these recurrences and derive a fully executable solver for them that can be exported to programming languages like Haskell.
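As a rough illustration of what such a closed form buys (a toy sketch only, not the verified solver of this entry), the Fibonacci recurrence and Binet's closed form can be compared directly, the latter evaluated with floating-point numbers:

  -- Toy sketch, not the entry's verified solver: the Fibonacci recurrence
  -- and its closed form, the latter evaluated with Doubles.
  fibRec :: Integer -> Integer
  fibRec 0 = 0
  fibRec 1 = 1
  fibRec n = fibRec (n - 1) + fibRec (n - 2)

  phi :: Double
  phi = (1 + sqrt 5) / 2

  -- Binet's formula: f(n) = (phi^n - (-phi)^(-n)) / sqrt 5
  fibClosed :: Integer -> Double
  fibClosed n = (phi ^^ n - (-phi) ^^ (-n)) / sqrt 5

  main :: IO ()
  main = mapM_ (\n -> print (fibRec n, fibClosed n)) [0 .. 10]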

[Lambert_W] title = The Lambert W Function on the Reals author = Manuel Eberl topic = Mathematics/Analysis date = 2020-04-24 notify = eberlm@in.tum.de abstract =

The Lambert W function is a multi-valued function defined as the inverse of the function x ↦ x e^x. Besides numerous applications in combinatorics, physics, and engineering, it also frequently occurs when solving equations containing both e^x and x, or both x and log x.

This article provides a definition of the two real-valued branches W_0(x) and W_{-1}(x) and proves various properties such as basic identities and inequalities, monotonicity, differentiability, asymptotic expansions, and the MacLaurin series of W_0(x) at x = 0.
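In other words (standard facts, recalled for orientation), each branch W satisfies the defining equation below, with the two real branches living on the indicated domains:

  $W(x)\,e^{W(x)} = x, \qquad W_0 : [-1/e, \infty) \to [-1, \infty), \qquad W_{-1} : [-1/e, 0) \to (-\infty, -1]$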

[Cartan_FP] title = The Cartan Fixed Point Theorems author = Lawrence C. Paulson date = 2016-03-08 topic = Mathematics/Analysis abstract = The Cartan fixed point theorems concern the group of holomorphic automorphisms on a connected open set of ℂ^n. Ciolli et al. have formalised the one-dimensional case of these theorems in HOL Light. This entry contains their proofs, ported to Isabelle/HOL. Thus it addresses the authors' remark that "it would be important to write a formal proof in a language that can be read by both humans and machines". notify = lp15@cam.ac.uk [Gauss_Jordan] title = Gauss-Jordan Algorithm and Its Applications author = Jose Divasón , Jesús Aransay topic = Computer science/Algorithms/Mathematical date = 2014-09-03 abstract = The Gauss-Jordan algorithm transforms any matrix over a field, by means of elementary row operations, into a matrix in reduced row echelon form. The formalization is based on the Rank Nullity Theorem entry of the AFP and on the HOL-Multivariate-Analysis session of Isabelle, where matrices are represented as functions over finite types. We have set up the code generator to make this representation executable. In order to improve the performance, a refinement to immutable arrays has been carried out. We have formalized some of the applications of the Gauss-Jordan algorithm. Thanks to this development, the following facts can be computed over matrices whose elements belong to a field: ranks, determinants, inverses, bases and dimensions, and solutions of systems of linear equations. Code can be exported to SML and Haskell. notify = jose.divasonm@unirioja.es, jesus-maria.aransay@unirioja.es [Echelon_Form] title = Echelon Form author = Jose Divasón , Jesús Aransay topic = Computer science/Algorithms/Mathematical, Mathematics/Algebra date = 2015-02-12 abstract = We formalize an algorithm to compute the Echelon Form of a matrix. We have proved its existence over Bézout domains and made it executable over Euclidean domains, such as the integer ring and the univariate polynomials over a field. This allows us to compute determinants, inverses and characteristic polynomials of matrices. The work is based on the HOL-Multivariate Analysis library, and on both the Gauss-Jordan and Cayley-Hamilton AFP entries. As a by-product, some algebraic structures have been implemented (principal ideal domains, Bézout domains...). The algorithm has been refined to immutable arrays and code can be generated to functional languages as well. notify = jose.divasonm@unirioja.es, jesus-maria.aransay@unirioja.es [QR_Decomposition] title = QR Decomposition author = Jose Divasón , Jesús Aransay topic = Computer science/Algorithms/Mathematical, Mathematics/Algebra date = 2015-02-12 abstract = QR decomposition is an algorithm to decompose a real matrix A into the product of two other matrices Q and R, where Q is orthogonal and R is invertible and upper triangular. The algorithm is useful for the least squares problem; i.e., the computation of the best approximation of an unsolvable system of linear equations. As a side-product, the Gram-Schmidt process has also been formalized. A refinement using immutable arrays is presented as well. The development relies, among others, on the AFP entry "Implementing field extensions of the form Q[sqrt(b)]" by René Thiemann, which allows execution of the algorithm using symbolic computations. Verified code can be generated and executed using floats as well.
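For orientation (standard linear algebra, not specific to this formalisation), the decomposition and its use for least squares in the full-column-rank case are:

  $A = QR, \quad Q^{\mathsf{T}} Q = I, \quad R \text{ upper triangular}; \qquad \min_x \lVert Ax - b \rVert \text{ reduces to the triangular system } Rx = Q^{\mathsf{T}} b$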
extra-history = Change history: [2015-06-18]: The second part of the Fundamental Theorem of Linear Algebra has been generalized to more general inner product spaces. notify = jose.divasonm@unirioja.es, jesus-maria.aransay@unirioja.es [Hermite] title = Hermite Normal Form author = Jose Divasón , Jesús Aransay topic = Computer science/Algorithms/Mathematical, Mathematics/Algebra date = 2015-07-07 abstract = Hermite Normal Form is a canonical matrix analogue of Reduced Echelon Form, but involving matrices over more general rings. In this work we formalise an algorithm to compute the Hermite Normal Form of a matrix by means of elementary row operations, taking advantage of the Echelon Form AFP entry. We have proven the correctness of such an algorithm and refined it to immutable arrays. Furthermore, we have also formalised the uniqueness of the Hermite Normal Form of a matrix. Code can be exported and some examples of execution involving integer matrices and polynomial matrices are presented as well. notify = jose.divasonm@unirioja.es, jesus-maria.aransay@unirioja.es [Imperative_Insertion_Sort] title = Imperative Insertion Sort author = Christian Sternagel date = 2014-09-25 topic = Computer science/Algorithms abstract = The insertion sort algorithm of Cormen et al. (Introduction to Algorithms) is expressed in Imperative HOL and proved to be correct and terminating. For this purpose we also provide a theory about imperative loop constructs with accompanying induction/invariant rules for proving partial and total correctness. Furthermore, the formalized algorithm is fit for code generation. notify = lp15@cam.ac.uk [Stream_Fusion_Code] title = Stream Fusion in HOL with Code Generation author = Andreas Lochbihler , Alexandra Maximova date = 2014-10-10 topic = Computer science/Functional programming abstract = Stream Fusion is a system for removing intermediate list data structures from functional programs, in particular Haskell. This entry adapts stream fusion to Isabelle/HOL and its code generator. We define stream types for finite and possibly infinite lists and stream versions for most of the fusible list functions in the theories List and Coinductive_List, and prove them correct with respect to the conversion functions between lists and streams. The Stream Fusion transformation itself is implemented as a simproc in the preprocessor of the code generator. [Brian Huffman's AFP entry formalises stream fusion in HOLCF for the domain of lazy lists to prove the GHC compiler rewrite rules correct. In contrast, this work enables Isabelle's code generator to perform stream fusion itself. To that end, it covers both finite and coinductive lists from the HOL library and the Coinductive entry. The fusible list functions require specification and proof principles different from Huffman's.] notify = mail@andreas-lochbihler.de [Case_Labeling] title = Generating Cases from Labeled Subgoals author = Lars Noschinski date = 2015-07-21 topic = Tools, Computer science/Programming languages/Misc abstract = Isabelle/Isar provides named cases to structure proofs. This article contains an implementation of a proof method casify, which can be used to easily extend proof tools with support for named cases. Such a proof tool must produce labeled subgoals, which are then interpreted by casify.

As examples, this work contains verification condition generators producing named cases for three languages: The Hoare language from HOL/Library, a monadic language for computations with failure (inspired by the AutoCorres tool), and a language of conditional expressions. These VCGs are demonstrated by a number of example programs. notify = noschinl@gmail.com [DPT-SAT-Solver] title = A Fast SAT Solver for Isabelle in Standard ML topic = Tools author = Armin Heller <> date = 2009-12-09 abstract = This contribution contains a fast SAT solver for Isabelle written in Standard ML. By loading the theory DPT_SAT_Solver, the SAT solver installs itself (under the name ``dptsat'') and certain Isabelle tools like Refute will start using it automatically. This is a port of the DPT (Decision Procedure Toolkit) SAT Solver written in OCaml. notify = jasmin.blanchette@gmail.com [Rep_Fin_Groups] title = Representations of Finite Groups topic = Mathematics/Algebra author = Jeremy Sylvestre date = 2015-08-12 abstract = We provide a formal framework for the theory of representations of finite groups, as modules over the group ring. Along the way, we develop the general theory of groups (relying on the group_add class for the basics), modules, and vector spaces, to the extent required for theory of group representations. We then provide formal proofs of several important introductory theorems in the subject, including Maschke's theorem, Schur's lemma, and Frobenius reciprocity. We also prove that every irreducible representation is isomorphic to a submodule of the group ring, leading to the fact that for a finite group there are only finitely many isomorphism classes of irreducible representations. In all of this, no restriction is made on the characteristic of the ring or field of scalars until the definition of a group representation, and then the only restriction made is that the characteristic must not divide the order of the group. notify = jsylvest@ualberta.ca [Noninterference_Inductive_Unwinding] title = The Inductive Unwinding Theorem for CSP Noninterference Security topic = Computer science/Security author = Pasquale Noce date = 2015-08-18 abstract =

The necessary and sufficient condition for CSP noninterference security stated by the Ipurge Unwinding Theorem is expressed in terms of a pair of event lists varying over the set of process traces. This does not render it suitable for the subsequent application of rule induction in the case of a process defined inductively, since rule induction may rather be applied to a single variable ranging over an inductively defined set.

Starting from the Ipurge Unwinding Theorem, this paper derives a necessary and sufficient condition for CSP noninterference security that involves a single event list varying over the set of process traces, and is thus suitable for rule induction; hence its name, Inductive Unwinding Theorem. Similarly to the Ipurge Unwinding Theorem, the new theorem only requires considering individual accepted and refused events for each process trace, and applies to the general case of a possibly intransitive noninterference policy. Specific variants of this theorem are additionally proven for deterministic processes and trace set processes.

notify = pasquale.noce.lavoro@gmail.com [Password_Authentication_Protocol] title = Verification of a Diffie-Hellman Password-based Authentication Protocol by Extending the Inductive Method author = Pasquale Noce topic = Computer science/Security date = 2017-01-03 notify = pasquale.noce.lavoro@gmail.com abstract = This paper constructs a formal model of a Diffie-Hellman password-based authentication protocol between a user and a smart card, and proves its security. The protocol provides for the dispatch of the user's password to the smart card on a secure messaging channel established by means of Password Authenticated Connection Establishment (PACE), where the mapping method being used is Chip Authentication Mapping. By applying and suitably extending Paulson's Inductive Method, this paper proves that the protocol establishes trustworthy secure messaging channels, preserves the secrecy of users' passwords, and provides an effective mutual authentication service. What is more, these security properties turn out to hold independently of the secrecy of the PACE authentication key. [Jordan_Normal_Form] title = Matrices, Jordan Normal Forms, and Spectral Radius Theory topic = Mathematics/Algebra author = René Thiemann , Akihisa Yamada contributors = Alexander Bentkamp date = 2015-08-21 abstract =

Matrix interpretations are useful as measure functions in termination proving. In order to use these interpretations also for complexity analysis, the growth rate of matrix powers has to be examined. Here, we formalized a central result of spectral radius theory, namely that the growth rate is polynomially bounded if and only if the spectral radius of a matrix is at most one.
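Written out (as background; the formal statement in the entry is phrased over its own matrix representation), the result concerns the spectral radius

  $\rho(A) = \max \{\, \lvert \lambda \rvert : \lambda \text{ is an eigenvalue of } A \,\},$

and states that the entries of $A^n$ are bounded by a polynomial in $n$ if and only if $\rho(A) \leq 1$.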

To formally prove this result, we first studied the growth rates of matrices in Jordan normal form, and proved that every complex matrix has a Jordan normal form, using a constructive proof via Schur decomposition.

The whole development is based on a new abstract type for matrices, which is also executable by a suitable setup of the code generator. It completely subsumes our former AFP-entry on executable matrices, and its main advantage is its close connection to the HMA-representation which allowed us to easily adapt existing proofs on determinants.

All the results have been applied to improve CeTA, our certifier to validate termination and complexity proof certificates.

extra-history = Change history: [2016-01-07]: Added Schur-decomposition, Gram-Schmidt orthogonalization, uniqueness of Jordan normal forms
[2018-04-17]: Integrated lemmas from deep-learning AFP-entry of Alexander Bentkamp notify = rene.thiemann@uibk.ac.at, ayamada@trs.cm.is.nagoya-u.ac.jp [LTL_to_DRA] title = Converting Linear Temporal Logic to Deterministic (Generalized) Rabin Automata topic = Computer science/Automata and formal languages author = Salomon Sickert date = 2015-09-04 abstract = Recently, Javier Esparza and Jan Kretinsky proposed a new method directly translating linear temporal logic (LTL) formulas to deterministic (generalized) Rabin automata. Compared to the existing approaches of constructing a non-deterministic Buechi-automaton in the first step and then applying a determinization procedure (e.g. some variant of Safra's construction) in a second step, this new approach preserves a relation between the formula and the states of the resulting automaton. While the old approach produced a monolithic structure, the new method is compositional. Furthermore, in some cases the resulting automata are much smaller than the automata generated by existing approaches. In order to ensure the correctness of the construction, this entry contains a complete formalisation and verification of the translation. Furthermore, from this basis executable code is generated. extra-history = Change history: [2015-09-23]: Enable code export for the eager unfolding optimisation and reduce running time of the generated tool. Moreover, add support for the mlton SML compiler.
[2016-03-24]: Make use of the LTL entry and include the simplifier. notify = sickert@in.tum.de [Timed_Automata] title = Timed Automata author = Simon Wimmer date = 2016-03-08 topic = Computer science/Automata and formal languages abstract = Timed automata are a widely used formalism for modeling real-time systems, which is employed in a class of successful model checkers such as UPPAAL [LPY97], HyTech [HHWt97] or Kronos [Yov97]. This work formalizes the theory for the subclass of diagonal-free timed automata, which is sufficient to model many interesting problems. We first define the basic concepts and semantics of diagonal-free timed automata. Based on this, we prove two types of decidability results for the language emptiness problem. The first is the classic result of Alur and Dill [AD90, AD94], which uses a finite partitioning of the state space into so-called `regions`. Our second result focuses on an approach based on `Difference Bound Matrices (DBMs)`, which is practically used by model checkers. We prove the correctness of the basic forward analysis operations on DBMs. One of these operations is the Floyd-Warshall algorithm for the all-pairs shortest paths problem. To obtain a finite search space, a widening operation has to be used for this kind of analysis. We use Patricia Bouyer's [Bou04] approach to prove that this widening operation is correct in the sense that DBM-based forward analysis in combination with the widening operation also decides language emptiness. The interesting property of this proof is that the first decidability result is reused to obtain the second one. notify = wimmers@in.tum.de [Parity_Game] title = Positional Determinacy of Parity Games author = Christoph Dittmann date = 2015-11-02 topic = Mathematics/Games and economics, Mathematics/Graph theory abstract = We present a formalization of parity games (a two-player game on directed graphs) and a proof of their positional determinacy in Isabelle/HOL. This proof works for both finite and infinite games. notify = [Ergodic_Theory] title = Ergodic Theory author = Sebastien Gouezel contributors = Manuel Eberl date = 2015-12-01 topic = Mathematics/Probability theory abstract = Ergodic theory is the branch of mathematics that studies the behaviour of measure preserving transformations, in finite or infinite measure. It interacts both with probability theory (mainly through measure theory) and with geometry, as many interesting examples are of geometric origin. We implement the first definitions and theorems of ergodic theory, including notably the Poincaré recurrence theorem for finite measure preserving systems (together with the notion of conservativity in general), induced maps, Kac's theorem, Birkhoff theorem (arguably the most important theorem in ergodic theory), and variations around it such as conservativity of the corresponding skew product, or Atkinson lemma. notify = sebastien.gouezel@univ-rennes1.fr, hoelzl@in.tum.de [Latin_Square] title = Latin Square author = Alexander Bentkamp date = 2015-12-02 topic = Mathematics/Combinatorics abstract = A Latin Square is an n x n table filled with integers from 1 to n where each number appears exactly once in each row and each column. A Latin Rectangle is a partially filled n x n table with r filled rows and n-r empty rows, such that each number appears at most once in each row and each column. The main result of this theory is that any Latin Rectangle can be completed to a Latin Square.
notify = bentkamp@gmail.com [Deep_Learning] title = Expressiveness of Deep Learning author = Alexander Bentkamp date = 2016-11-10 topic = Computer science/Machine learning, Mathematics/Analysis abstract = Deep learning has had a profound impact on computer science in recent years, with applications to search engines, image recognition and language processing, bioinformatics, and more. Recently, Cohen et al. provided theoretical evidence for the superiority of deep learning over shallow learning. This formalization of their work simplifies and generalizes the original proof, while working around the limitations of the Isabelle type system. To support the formalization, I developed reusable libraries of formalized mathematics, including results about the matrix rank, the Lebesgue measure, and multivariate polynomials, as well as a library for tensor analysis. notify = bentkamp@gmail.com [Inductive_Inference] title = Some classical results in inductive inference of recursive functions author = Frank J. Balbach topic = Logic/Computability, Computer science/Machine learning date = 2020-08-31 notify = frank-balbach@gmx.de abstract =

This entry formalizes some classical concepts and results from inductive inference of recursive functions. In the basic setting a partial recursive function ("strategy") must identify ("learn") all functions from a set ("class") of recursive functions. To that end the strategy receives more and more values $f(0), f(1), f(2), \ldots$ of some function $f$ from the given class and in turn outputs descriptions of partial recursive functions, for example, Gödel numbers. The strategy is considered successful if the sequence of outputs ("hypotheses") converges to a description of $f$. A class of functions learnable in this sense is called "learnable in the limit". The set of all these classes is denoted by LIM.

Other types of inference considered are finite learning (FIN), behaviorally correct learning in the limit (BC), and some variants of LIM with restrictions on the hypotheses: total learning (TOTAL), consistent learning (CONS), and class-preserving learning (CP). The main results formalized are the proper inclusions $\mathrm{FIN} \subset \mathrm{CP} \subset \mathrm{TOTAL} \subset \mathrm{CONS} \subset \mathrm{LIM} \subset \mathrm{BC} \subset 2^{\mathcal{R}}$, where $\mathcal{R}$ is the set of all total recursive functions. Further results show that for all these inference types except CONS, strategies can be assumed to be total recursive functions; that all inference types but CP are closed under the subset relation between classes; and that no inference type is closed under the union of classes.

The above is based on a formalization of recursive functions heavily inspired by the Universal Turing Machine entry by Xu et al., but different in that it models partial functions with codomain nat option. The formalization contains a construction of a universal partial recursive function, without resorting to Turing machines, introduces decidability and recursive enumerability, and proves some standard results: existence of a Kleene normal form, the s-m-n theorem, Rice's theorem, and assorted fixed-point theorems (recursion theorems) by Kleene, Rogers, and Smullyan.

[Applicative_Lifting] title = Applicative Lifting author = Andreas Lochbihler , Joshua Schneider <> date = 2015-12-22 topic = Computer science/Functional programming abstract = Applicative functors augment computations with effects by lifting function application to types which model the effects. As the structure of the computation cannot depend on the effects, applicative expressions can be analysed statically. This allows us to lift universally quantified equations to the effectful types, as observed by Hinze. Thus, equational reasoning over effectful computations can be reduced to pure types.

This entry provides a package for registering applicative functors and two proof methods for lifting of equations over applicative functors. The first method normalises applicative expressions according to the laws of applicative functors. This way, equations whose two sides contain the same list of variables can be lifted to every applicative functor.

To lift larger classes of equations, the second method exploits a number of additional properties (e.g., commutativity of effects) provided the properties have been declared for the concrete applicative functor at hand upon registration.

We declare several types from the Isabelle library as applicative functors and illustrate the use of the methods with two examples: the lifting of the arithmetic type class hierarchy to streams and the verification of a relabelling function on binary trees. We also formalise and verify the normalisation algorithm used by the first proof method.
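A small Haskell analogue (an illustration only, not the syntax of the Isabelle package) of the kind of equation the first method lifts: associativity of addition has the same variable list on both sides, so it holds in every applicative functor, e.g. in Maybe:

  import Control.Applicative (liftA2)

  -- Both sides of x + (y + z) = (x + y) + z, lifted to the Maybe functor.
  lhs, rhs :: Maybe Int
  lhs = liftA2 (+) (Just 1) (liftA2 (+) (Just 2) (Just 3))
  rhs = liftA2 (+) (liftA2 (+) (Just 1) (Just 2)) (Just 3)

  main :: IO ()
  main = print (lhs == rhs, lhs)   -- prints (True, Just 6)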

extra-history = Change history: [2016-03-03]: added formalisation of lifting with combinators
[2016-06-10]: implemented automatic derivation of lifted combinator reductions; support arbitrary lifted relations using relators; improved compatibility with locale interpretation (revision ec336f354f37)
notify = mail@andreas-lochbihler.de [Stern_Brocot] title = The Stern-Brocot Tree author = Peter Gammie , Andreas Lochbihler date = 2015-12-22 topic = Mathematics/Number theory abstract = The Stern-Brocot tree contains all rational numbers exactly once and in their lowest terms. We formalise the Stern-Brocot tree as a coinductive tree using recursive and iterative specifications, which we have proven equivalent, and show that it indeed contains all the numbers as stated. Following Hinze, we prove that the Stern-Brocot tree can be linearised looplessly into Stern's diatomic sequence (also known as Dijkstra's fusc function) and that it is a permutation of the Bird tree.

The reasoning stays at an abstract level by appealing to the uniqueness of solutions of guarded recursive equations and lifting algebraic laws point-wise to trees and streams using applicative functors.
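For illustration (a direct recursive sketch, not the loopless coinductive linearisation of the entry), Dijkstra's fusc function, i.e. Stern's diatomic sequence, can be written as:

  -- Stern's diatomic sequence / Dijkstra's fusc: for n >= 1, the quotients
  -- fusc n / fusc (n+1) run through every positive rational exactly once,
  -- in lowest terms.
  fusc :: Integer -> Integer
  fusc 0 = 0
  fusc 1 = 1
  fusc n
    | even n    = fusc (n `div` 2)
    | otherwise = fusc (n `div` 2) + fusc (n `div` 2 + 1)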

notify = mail@andreas-lochbihler.de [Algebraic_Numbers] title = Algebraic Numbers in Isabelle/HOL topic = Mathematics/Algebra author = René Thiemann , Akihisa Yamada , Sebastiaan Joosten contributors = Manuel Eberl date = 2015-12-22 abstract = Based on existing libraries for matrices, factorization of rational polynomials, and Sturm's theorem, we formalized algebraic numbers in Isabelle/HOL. Our development serves as an implementation for real and complex numbers, and it makes it possible to compute roots and completely factorize real and complex polynomials, provided that all coefficients are rational numbers. Moreover, we provide two implementations to display algebraic numbers: an injective and expensive one, and a faster but approximative version.

To this end, we mechanized several results on resultants, which also required us to prove that polynomials over a unique factorization domain form again a unique factorization domain.

extra-history = Change history: [2016-01-29]: Split off Polynomial Interpolation and Polynomial Factorization
[2017-04-16]: Use certified Berlekamp-Zassenhaus factorization, use subresultant algorithm for computing resultants, improved bisection algorithm notify = rene.thiemann@uibk.ac.at, ayamada@trs.cm.is.nagoya-u.ac.jp, sebastiaan.joosten@uibk.ac.at [Polynomial_Interpolation] title = Polynomial Interpolation topic = Mathematics/Algebra author = René Thiemann , Akihisa Yamada date = 2016-01-29 abstract = We formalized three algorithms for polynomial interpolation over arbitrary fields: Lagrange's explicit expression, the recursive algorithm of Neville and Aitken, and the Newton interpolation in combination with an efficient implementation of divided differences. Variants of these algorithms for integer polynomials are also available, where sometimes the interpolation can fail; e.g., there is no linear integer polynomial p such that p(0) = 0 and p(2) = 1. Moreover, for the Newton interpolation for integer polynomials, we proved that all intermediate results that are computed during the algorithm must be integers. This admits an early failure detection in the implementation. Finally, we proved the uniqueness of polynomial interpolation.

The development also contains improved code equations to speed up the division of integers in target languages. notify = rene.thiemann@uibk.ac.at, ayamada@trs.cm.is.nagoya-u.ac.jp [Polynomial_Factorization] title = Polynomial Factorization topic = Mathematics/Algebra author = René Thiemann , Akihisa Yamada date = 2016-01-29 abstract = Based on existing libraries for polynomial interpolation and matrices, we formalized several factorization algorithms for polynomials, including Kronecker's algorithm for integer polynomials, Yun's square-free factorization algorithm for field polynomials, and Berlekamp's algorithm for polynomials over finite fields. By combining the last one with Hensel's lifting, we derive an efficient factorization algorithm for the integer polynomials, which is then lifted for rational polynomials by mechanizing Gauss' lemma. Finally, we assembled a combined factorization algorithm for rational polynomials, which combines all the mentioned algorithms and additionally uses the explicit formula for roots of quadratic polynomials and a rational root test.

As side products, we developed division algorithms for polynomials over integral domains, as well as primality-testing and prime-factorization algorithms for integers. notify = rene.thiemann@uibk.ac.at, ayamada@trs.cm.is.nagoya-u.ac.jp [Perron_Frobenius] title = Perron-Frobenius Theorem for Spectral Radius Analysis author = Jose Divasón , Ondřej Kunčar , René Thiemann , Akihisa Yamada notify = rene.thiemann@uibk.ac.at date = 2016-05-20 topic = Mathematics/Algebra abstract =

The spectral radius of a matrix A is the maximum norm of all eigenvalues of A. In previous work we already formalized that for a complex matrix A, the values in A^n grow polynomially in n if and only if the spectral radius is at most one. One problem with the above characterization is the determination of all complex eigenvalues. In case A contains only non-negative real values, a simplification is possible with the help of the Perron–Frobenius theorem, which tells us that it suffices to consider only the real eigenvalues of A, i.e., applying Sturm's method can decide the polynomial growth of A^n.

We formalize the Perron–Frobenius theorem based on a proof via Brouwer's fixpoint theorem, which is available in the HOL multivariate analysis (HMA) library. Since the results on the spectral radius are based on matrices in the Jordan normal form (JNF) library, we further develop a connection which allows us to easily transfer theorems between HMA and JNF. With this connection we derive the combined result: if A is a non-negative real matrix, and no real eigenvalue of A is strictly larger than one, then A^n is polynomially bounded in n.

extra-history = Change history: [2017-10-18]: added Perron-Frobenius theorem for irreducible matrices with generalization (revision bda1f1ce8a1c)
[2018-05-17]: prove conjecture of CPP'18 paper: Jordan blocks of spectral radius have maximum size (revision ffdb3794e5d5) [Stochastic_Matrices] title = Stochastic Matrices and the Perron-Frobenius Theorem author = René Thiemann topic = Mathematics/Algebra, Computer science/Automata and formal languages date = 2017-11-22 notify = rene.thiemann@uibk.ac.at abstract = Stochastic matrices are a convenient way to model discrete-time and finite state Markov chains. The Perron–Frobenius theorem tells us something about the existence and uniqueness of non-negative eigenvectors of a stochastic matrix. In this entry, we formalize stochastic matrices, link the formalization to the existing AFP-entry on Markov chains, and apply the Perron–Frobenius theorem to prove that stationary distributions always exist, and they are unique if the stochastic matrix is irreducible. [Formal_SSA] title = Verified Construction of Static Single Assignment Form author = Sebastian Ullrich , Denis Lohner date = 2016-02-05 topic = Computer science/Algorithms, Computer science/Programming languages/Transformations abstract =

We define a functional variant of the static single assignment (SSA) form construction algorithm described by Braun et al., which combines simplicity and efficiency. The definition is based on a general, abstract control flow graph representation using Isabelle locales.

We prove that the algorithm's output is semantically equivalent to the input according to a small-step semantics, and that it is in minimal SSA form for the common special case of reducible inputs. We then show the satisfiability of the locale assumptions by giving instantiations for a simple While language.

Furthermore, we use a generic instantiation based on typedefs in order to extract OCaml code and replace the unverified SSA construction algorithm of the CompCertSSA project with it.

A more detailed description of the verified SSA construction can be found in the paper Verified Construction of Static Single Assignment Form, CC 2016.

notify = denis.lohner@kit.edu [Minimal_SSA] title = Minimal Static Single Assignment Form author = Max Wagner , Denis Lohner topic = Computer science/Programming languages/Transformations date = 2017-01-17 notify = denis.lohner@kit.edu abstract =

This formalization is an extension of "Verified Construction of Static Single Assignment Form". In their work, the authors have shown that Braun et al.'s static single assignment (SSA) construction algorithm produces minimal SSA form for input programs with a reducible control flow graph (CFG). However, Braun et al. also proposed an extension to their algorithm that they claim produces minimal SSA form even for irreducible CFGs.
In this formalization we support that claim by giving a mechanized proof.

As the extension of Braun et al.'s algorithm aims at removing so-called redundant strongly connected components of phi functions, we show that this suffices to guarantee minimality according to Cytron et al.

[PropResPI] title = Propositional Resolution and Prime Implicates Generation author = Nicolas Peltier notify = Nicolas.Peltier@imag.fr date = 2016-03-11 topic = Logic/General logic/Mechanization of proofs abstract = We provide formal proofs in Isabelle-HOL (using mostly structured Isar proofs) of the soundness and completeness of the Resolution rule in propositional logic. The completeness proofs take into account the usual redundancy elimination rules (tautology elimination and subsumption), and several refinements of the Resolution rule are considered: ordered resolution (with selection functions), positive and negative resolution, semantic resolution and unit resolution (the latter refinement is complete only for clause sets that are Horn- renamable). We also define a concrete procedure for computing saturated sets and establish its soundness and completeness. The clause sets are not assumed to be finite, so that the results can be applied to formulas obtained by grounding sets of first-order clauses (however, a total ordering among atoms is assumed to be given). Next, we show that the unrestricted Resolution rule is deductive- complete, in the sense that it is able to generate all (prime) implicates of any set of propositional clauses (i.e., all entailment- minimal, non-valid, clausal consequences of the considered set). The generation of prime implicates is an important problem, with many applications in artificial intelligence and verification (for abductive reasoning, knowledge compilation, diagnosis, debugging etc.). We also show that implicates can be computed in an incremental way, by fixing an ordering among all the atoms in the considered sets and resolving upon these atoms one by one in the considered order (with no backtracking). This feature is critical for the efficient computation of prime implicates. Building on these results, we provide a procedure for computing such implicates and establish its soundness and completeness. [SuperCalc] title = A Variant of the Superposition Calculus author = Nicolas Peltier notify = Nicolas.Peltier@imag.fr date = 2016-09-06 topic = Logic/Proof theory abstract = We provide a formalization of a variant of the superposition calculus, together with formal proofs of soundness and refutational completeness (w.r.t. the usual redundancy criteria based on clause ordering). This version of the calculus uses all the standard restrictions of the superposition rules, together with the following refinement, inspired by the basic superposition calculus: each clause is associated with a set of terms which are assumed to be in normal form -- thus any application of the replacement rule on these terms is blocked. The set is initially empty and terms may be added or removed at each inference step. The set of terms that are assumed to be in normal form includes any term introduced by previous unifiers as well as any term occurring in the parent clauses at a position that is smaller (according to some given ordering on positions) than a previously replaced term. The standard superposition calculus corresponds to the case where the set of irreducible terms is always empty. [Nominal2] title = Nominal 2 author = Christian Urban , Stefan Berghofer , Cezary Kaliszyk date = 2013-02-21 topic = Tools abstract =

Dealing with binders, renaming of bound variables, capture-avoiding substitution, etc., is very often a major problem in formal proofs, especially in proofs by structural and rule induction. Nominal Isabelle is designed to make such proofs easy to formalise: it provides an infrastructure for declaring nominal datatypes (that is alpha-equivalence classes) and for defining functions over them by structural recursion. It also provides induction principles that have Barendregt’s variable convention already built in.

This entry can be used as a more advanced replacement for HOL/Nominal in the Isabelle distribution.

notify = christian.urban@kcl.ac.uk [First_Welfare_Theorem] title = Microeconomics and the First Welfare Theorem author = Julian Parsert , Cezary Kaliszyk topic = Mathematics/Games and economics license = LGPL date = 2017-09-01 notify = julian.parsert@uibk.ac.at, cezary.kaliszyk@uibk.ac.at abstract = Economic activity has always been a fundamental part of society. Due to modern day politics, economic theory has gained even more influence on our lives. Thus we want models and theories to be as precise as possible. This can be achieved using certification with the help of formal proof technology. Hence we will use Isabelle/HOL to construct two economic models, that of the pure exchange economy and a version of the Arrow-Debreu Model. We will prove that the First Theorem of Welfare Economics holds within both. The theorem is the mathematical formulation of Adam Smith's famous invisible hand and states that a group of self-interested and rational actors will eventually achieve an efficient allocation of goods and services. extra-history = Change history: [2018-06-17]: Added some lemmas and a theory file, also introduced Microeconomics folder.
[Noninterference_Sequential_Composition] title = Conservation of CSP Noninterference Security under Sequential Composition author = Pasquale Noce date = 2016-04-26 topic = Computer science/Security, Computer science/Concurrency/Process calculi abstract =

In his outstanding work on Communicating Sequential Processes, Hoare has defined two fundamental binary operations that allow the composition of the input processes into another, typically more complex, process: sequential composition and concurrent composition. In particular, the output of the former operation is a process that initially behaves like the first operand, and then like the second operand once the execution of the first one has terminated successfully, as long as it does.

This paper formalizes Hoare's definition of sequential composition and proves, in the general case of a possibly intransitive policy, that CSP noninterference security is conserved under this operation, provided that successful termination cannot be affected by confidential events and cannot occur as an alternative to other events in the traces of the first operand. Both of these assumptions are shown, by means of counterexamples, to be necessary for the theorem to hold.

notify = pasquale.noce.lavoro@gmail.com [Noninterference_Concurrent_Composition] title = Conservation of CSP Noninterference Security under Concurrent Composition author = Pasquale Noce notify = pasquale.noce.lavoro@gmail.com date = 2016-06-13 topic = Computer science/Security, Computer science/Concurrency/Process calculi abstract =

In his outstanding work on Communicating Sequential Processes, Hoare has defined two fundamental binary operations that allow the composition of the input processes into another, typically more complex, process: sequential composition and concurrent composition. In particular, the output of the latter operation is a process in which any event not shared by both operands can occur whenever the operand that admits the event can engage in it, whereas any event shared by both operands can occur just in case both can engage in it.

This paper formalizes Hoare's definition of concurrent composition and proves, in the general case of a possibly intransitive policy, that CSP noninterference security is conserved under this operation. This result, along with the previous analogous one concerning sequential composition, enables the construction of more and more complex processes enforcing noninterference security by composing, sequentially or concurrently, simpler secure processes, whose security can in turn be proven using either the definition of security, or unwinding theorems.

[ROBDD] title = Algorithms for Reduced Ordered Binary Decision Diagrams author = Julius Michaelis , Maximilian Haslbeck , Peter Lammich , Lars Hupel date = 2016-04-27 topic = Computer science/Algorithms, Computer science/Data structures abstract = We present a verified and executable implementation of ROBDDs in Isabelle/HOL. Our implementation relates pointer-based computation in the Heap monad to operations on an abstract definition of boolean functions. Internally, we implemented the if-then-else combinator in a recursive fashion, following the Shannon decomposition of the argument functions. The implementation mixes and adapts known techniques and is built with efficiency in mind. notify = bdd@liftm.de, haslbecm@in.tum.de [No_FTL_observers] title = No Faster-Than-Light Observers author = Mike Stannett , István Németi date = 2016-04-28 topic = Mathematics/Physics abstract = We provide a formal proof within First Order Relativity Theory that no observer can travel faster than the speed of light. Originally reported in Stannett & Németi (2014) "Using Isabelle/HOL to verify first-order relativity theory", Journal of Automated Reasoning 52(4), pp. 361-378. notify = m.stannett@sheffield.ac.uk [Groebner_Bases] title = Gröbner Bases Theory author = Fabian Immler , Alexander Maletzky date = 2016-05-02 topic = Mathematics/Algebra, Computer science/Algorithms/Mathematical abstract = This formalization is concerned with the theory of Gröbner bases in (commutative) multivariate polynomial rings over fields, originally developed by Buchberger in his 1965 PhD thesis. Apart from the statement and proof of the main theorem of the theory, the formalization also implements Buchberger's algorithm for actually computing Gröbner bases as a tail-recursive function, thus allowing to effectively decide ideal membership in finitely generated polynomial ideals. Furthermore, all functions can be executed on a concrete representation of multivariate polynomials as association lists. extra-history = Change history: [2019-04-18]: Specialized Gröbner bases to less abstract representation of polynomials, where power-products are represented as polynomial mappings.
notify = alexander.maletzky@risc.jku.at [Nullstellensatz] title = Hilbert's Nullstellensatz author = Alexander Maletzky topic = Mathematics/Algebra, Mathematics/Geometry date = 2019-06-16 notify = alexander.maletzky@risc-software.at abstract = This entry formalizes Hilbert's Nullstellensatz, an important theorem in algebraic geometry that can be viewed as the generalization of the Fundamental Theorem of Algebra to multivariate polynomials: If a set of (multivariate) polynomials over an algebraically closed field has no common zero, then the ideal it generates is the entire polynomial ring. The formalization proves several equivalent versions of this celebrated theorem: the weak Nullstellensatz, the strong Nullstellensatz (connecting algebraic varieties and radical ideals), and the field-theoretic Nullstellensatz. The formalization follows Chapter 4.1. of Ideals, Varieties, and Algorithms by Cox, Little and O'Shea. [Bell_Numbers_Spivey] title = Spivey's Generalized Recurrence for Bell Numbers author = Lukas Bulwahn date = 2016-05-04 topic = Mathematics/Combinatorics abstract = This entry defines the Bell numbers as the cardinality of set partitions for a carrier set of given size, and derives Spivey's generalized recurrence relation for Bell numbers following his elegant and intuitive combinatorial proof.

As the set construction for the combinatorial proof requires construction of three intermediate structures, the main difficulty of the formalization is handling the overall combinatorial argument in a structured way. The introduced proof structure allows us to compose the combinatorial argument from its subparts, and helps to keep track of how the detailed proof steps are related to the overall argument. To obtain this structure, this entry uses set monad notation for the set construction's definition, introduces suitable predicates and rules, and follows a repeating structure in its Isar proof. notify = lukas.bulwahn@gmail.com [Randomised_Social_Choice] title = Randomised Social Choice Theory author = Manuel Eberl date = 2016-05-05 topic = Mathematics/Games and economics abstract = This work contains a formalisation of basic Randomised Social Choice, including Stochastic Dominance and Social Decision Schemes (SDSs) along with some of their most important properties (Anonymity, Neutrality, ex-post- and SD-Efficiency, SD-Strategy-Proofness) and two particular SDSs – Random Dictatorship and Random Serial Dictatorship (with proofs of the properties that they satisfy). Many important properties of these concepts are also proven – such as the two equivalent characterisations of Stochastic Dominance and the fact that SD-efficiency of a lottery only depends on the support. The entry also provides convenient commands to define Preference Profiles, prove their well-formedness, and automatically derive restrictions that sufficiently nice SDSs need to satisfy on the defined profiles. Currently, the formalisation focuses on weak preferences and Stochastic Dominance, but it should be easy to extend it to other domains – such as strict preferences – or other lottery extensions – such as Bilinear Dominance or Pairwise Comparison. notify = eberlm@in.tum.de [SDS_Impossibility] title = The Incompatibility of SD-Efficiency and SD-Strategy-Proofness author = Manuel Eberl date = 2016-05-04 topic = Mathematics/Games and economics abstract = This formalisation contains the proof that there is no anonymous and neutral Social Decision Scheme for at least four voters and alternatives that fulfils both SD-Efficiency and SD-Strategy-Proofness. The proof is a fully structured and quasi-human-readable one. It was derived from the (unstructured) SMT proof of the case for exactly four voters and alternatives by Brandl et al. Their proof relies on an unverified translation of the original problem to SMT, and the proof that lifts the argument for exactly four voters and alternatives to the general case is also not machine-checked. In this Isabelle proof, on the other hand, all of these steps are fully proven and machine-checked. This is particularly important seeing as a previously published informal proof of a weaker statement contained a mistake in precisely this lifting step. notify = eberlm@in.tum.de [Median_Of_Medians_Selection] title = The Median-of-Medians Selection Algorithm author = Manuel Eberl topic = Computer science/Algorithms date = 2017-12-21 notify = eberlm@in.tum.de abstract =

This entry provides an executable functional implementation of the Median-of-Medians algorithm for selecting the k-th smallest element of an unsorted list deterministically in linear time. The size bounds for the recursive call that lead to the linear upper bound on the run-time of the algorithm are also proven.
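A compact functional sketch of the idea (illustration only; unlike the verified implementation of this entry, the list operations used here make it not genuinely linear-time):

  import Data.List (sort)

  -- Illustrative median-of-medians selection: 0-indexed k-th smallest element.
  select :: Ord a => Int -> [a] -> a
  select k xs
    | length xs <= 25 = sort xs !! k
    | otherwise =
        let medians = [sort g !! (length g `div` 2) | g <- chunks 5 xs]
            pivot   = select (length medians `div` 2) medians
            smaller = filter (< pivot) xs
            larger  = filter (> pivot) xs
            nEqual  = length xs - length smaller - length larger
        in if k < length smaller then select k smaller
           else if k < length smaller + nEqual then pivot
           else select (k - length smaller - nEqual) larger

  chunks :: Int -> [a] -> [[a]]
  chunks _ [] = []
  chunks n ys = take n ys : chunks n (drop n ys)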

[Mason_Stothers] title = The Mason–Stothers Theorem author = Manuel Eberl topic = Mathematics/Algebra date = 2017-12-21 notify = eberlm@in.tum.de abstract =

This article provides a formalisation of Snyder’s simple and elegant proof of the Mason–Stothers theorem, which is the polynomial analogue of the famous abc Conjecture for integers. Remarkably, Snyder found this very elegant proof when he was still a high-school student.

In short, the statement of the theorem is that three non-zero coprime polynomials A, B, C over a field which sum to 0 and do not all have vanishing derivatives fulfil max{deg(A), deg(B), deg(C)} < deg(rad(ABC)), where rad(P) denotes the radical of P, i.e. the product of all unique irreducible factors of P.

This theorem also implies a kind of polynomial analogue of Fermat’s Last Theorem: except for trivial cases, A^n + B^n + C^n = 0 implies n ≤ 2 for coprime polynomials A, B, C over a field.

[FLP] title = A Constructive Proof for FLP author = Benjamin Bisping , Paul-David Brodmann , Tim Jungnickel , Christina Rickmann , Henning Seidler , Anke Stüber , Arno Wilhelm-Weidner , Kirstin Peters , Uwe Nestmann date = 2016-05-18 topic = Computer science/Concurrency abstract = The impossibility of distributed consensus with one faulty process is a result with important consequences for real world distributed systems e.g., commits in replicated databases. Since proofs are not immune to faults and even plausible proofs with a profound formalism can conclude wrong results, we validate the fundamental result named FLP after Fischer, Lynch and Paterson. We present a formalization of distributed systems and the aforementioned consensus problem. Our proof is based on Hagen Völzer's paper "A constructive proof for FLP". In addition to the enhanced confidence in the validity of Völzer's proof, we contribute the missing gaps to show the correctness in Isabelle/HOL. We clarify the proof details and even prove fairness of the infinite execution that contradicts consensus. Our Isabelle formalization can also be reused for further proofs of properties of distributed systems. notify = henning.seidler@mailbox.tu-berlin.de [IMAP-CRDT] title = The IMAP CmRDT author = Tim Jungnickel , Lennart Oldenburg <>, Matthias Loibl <> topic = Computer science/Algorithms/Distributed, Computer science/Data structures date = 2017-11-09 notify = tim.jungnickel@tu-berlin.de abstract = We provide our Isabelle/HOL formalization of a Conflict-free Replicated Datatype for Internet Message Access Protocol commands. We show that Strong Eventual Consistency (SEC) is guaranteed by proving the commutativity of concurrent operations. We base our formalization on the recently proposed "framework for establishing Strong Eventual Consistency for Conflict-free Replicated Datatypes" (AFP.CRDT) from Gomes et al. Hence, we provide an additional example of how the recently proposed framework can be used to design and prove CRDTs. [Incredible_Proof_Machine] title = The meta theory of the Incredible Proof Machine author = Joachim Breitner , Denis Lohner date = 2016-05-20 topic = Logic/Proof theory abstract = The Incredible Proof Machine is an interactive visual theorem prover which represents proofs as port graphs. We model this proof representation in Isabelle, and prove that it is just as powerful as natural deduction. notify = mail@joachim-breitner.de [Word_Lib] title = Finite Machine Word Library author = Joel Beeren<>, Matthew Fernandez<>, Xin Gao<>, Gerwin Klein , Rafal Kolanski<>, Japheth Lim<>, Corey Lewis<>, Daniel Matichuk<>, Thomas Sewell<> notify = kleing@unsw.edu.au date = 2016-06-09 topic = Computer science/Data structures abstract = This entry contains an extension to the Isabelle library for fixed-width machine words. In particular, the entry adds quickcheck setup for words, printing as hexadecimals, additional operations, reasoning about alignment, signed words, enumerations of words, normalisation of word numerals, and an extensive library of properties about generic fixed-width words, as well as an instantiation of many of these to the commonly used 32 and 64-bit bases. [Catalan_Numbers] title = Catalan Numbers author = Manuel Eberl notify = eberlm@in.tum.de date = 2016-06-21 topic = Mathematics/Combinatorics abstract =

In this work, we define the Catalan numbers C_n and prove several equivalent definitions (including some closed-form formulae). We also show one of their applications (counting the number of binary trees of size n), prove the asymptotic growth approximation C_n ∼ 4^n / (√π · n^1.5), and provide reasonably efficient executable code to compute them.

The derivation of the closed-form formulae uses algebraic manipulations of the ordinary generating function of the Catalan numbers, and the asymptotic approximation is then done using generalised binomial coefficients and the Gamma function. Thanks to these highly non-elementary mathematical tools, the proofs are very short and simple.
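For instance (a one-line illustration, not the entry's generated code), the closed form C_n = (2n choose n) / (n + 1) yields a simple exact computation:

  -- Catalan numbers via C_n = (2n)! / (n! * (n+1)!); exact integer arithmetic.
  catalan :: Integer -> Integer
  catalan n = product [n + 2 .. 2 * n] `div` product [1 .. n]

  main :: IO ()
  main = print (map catalan [0 .. 9])   -- [1,1,2,5,14,42,132,429,1430,4862]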

[Fisher_Yates] title = Fisher–Yates shuffle author = Manuel Eberl notify = eberlm@in.tum.de date = 2016-09-30 topic = Computer science/Algorithms abstract =

This work defines and proves the correctness of the Fisher–Yates algorithm for shuffling – i.e. producing a random permutation – of a list. The algorithm proceeds by traversing the list and in each step swapping the current element with a random element from the remaining list.
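A small IO-based rendering of that step (illustration only; the entry itself works in a probability monad and proves the resulting permutation uniformly distributed):

  import System.Random (randomRIO)

  -- Pick a random element, move it to the front, recurse on the remainder.
  shuffle :: [a] -> IO [a]
  shuffle [] = return []
  shuffle xs = do
    i <- randomRIO (0, length xs - 1)
    let (before, x : after) = splitAt i xs
    rest <- shuffle (before ++ after)
    return (x : rest)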

[Bertrands_Postulate] title = Bertrand's postulate author = Julian Biendarra<>, Manuel Eberl contributors = Lawrence C. Paulson topic = Mathematics/Number theory date = 2017-01-17 notify = eberlm@in.tum.de abstract =

Bertrand's postulate is an early result on the distribution of prime numbers: For every positive integer n, there exists a prime number that lies strictly between n and 2n. The proof is ported from John Harrison's formalisation in HOL Light. It proceeds by first showing that the property is true for all n greater than or equal to 600 and then showing that it also holds for all n below 600 by case distinction.

[Rewriting_Z] title = The Z Property author = Bertram Felgenhauer<>, Julian Nagele<>, Vincent van Oostrom<>, Christian Sternagel notify = bertram.felgenhauer@uibk.ac.at, julian.nagele@uibk.ac.at, c.sternagel@gmail.com date = 2016-06-30 topic = Logic/Rewriting abstract = We formalize the Z property introduced by Dehornoy and van Oostrom. First we show that for any abstract rewrite system, Z implies confluence. Then we give two examples of proofs using Z: confluence of lambda-calculus with respect to beta-reduction and confluence of combinatory logic. [Resolution_FOL] title = The Resolution Calculus for First-Order Logic author = Anders Schlichtkrull notify = andschl@dtu.dk date = 2016-06-30 topic = Logic/General logic/Mechanization of proofs abstract = This theory is a formalization of the resolution calculus for first-order logic. It is proven sound and complete. The soundness proof uses the substitution lemma, which shows a correspondence between substitutions and updates to an environment. The completeness proof uses semantic trees, i.e. trees whose paths are partial Herbrand interpretations. It employs Herbrand's theorem in a formulation which states that an unsatisfiable set of clauses has a finite closed semantic tree. It also uses the lifting lemma which lifts resolution derivation steps from the ground world up to the first-order world. The theory is presented in a paper in the Journal of Automated Reasoning [Sch18] which extends a paper presented at the International Conference on Interactive Theorem Proving [Sch16]. An earlier version was presented in an MSc thesis [Sch15]. The formalization mostly follows textbooks by Ben-Ari [BA12], Chang and Lee [CL73], and Leitsch [Lei97]. The theory is part of the IsaFoL project [IsaFoL].

[Sch18] Anders Schlichtkrull. "Formalization of the Resolution Calculus for First-Order Logic". Journal of Automated Reasoning, 2018.
[Sch16] Anders Schlichtkrull. "Formalization of the Resolution Calculus for First-Order Logic". In: ITP 2016. Vol. 9807. LNCS. Springer, 2016.
[Sch15] Anders Schlichtkrull. "Formalization of Resolution Calculus in Isabelle". https://people.compute.dtu.dk/andschl/Thesis.pdf. MSc thesis. Technical University of Denmark, 2015.
[BA12] Mordechai Ben-Ari. Mathematical Logic for Computer Science. 3rd ed. Springer, 2012.
[CL73] Chin-Liang Chang and Richard Char-Tung Lee. Symbolic Logic and Mechanical Theorem Proving. 1st ed. Academic Press, Inc., 1973.
[Lei97] Alexander Leitsch. The Resolution Calculus. Texts in theoretical computer science. Springer, 1997.
[IsaFoL] IsaFoL authors. IsaFoL: Isabelle Formalization of Logic. https://bitbucket.org/jasmin_blanchette/isafol. extra-history = Change history: [2018-01-24]: added several new versions of the soundness and completeness theorems as described in the paper [Sch18].
[2018-03-20]: added a concrete instance of the unification and completeness theorems using the First-Order Terms AFP-entry from IsaFoR as described in the papers [Sch16] and [Sch18]. [Surprise_Paradox] title = Surprise Paradox author = Joachim Breitner notify = mail@joachim-breitner.de date = 2016-07-17 topic = Logic/Proof theory abstract = In 1964, Fitch showed that the paradox of the surprise hanging can be resolved by showing that the judge’s verdict is inconsistent. His formalization builds on Gödel’s coding of provability. In this theory, we reproduce his proof in Isabelle, building on Paulson’s formalisation of Gödel’s incompleteness theorems. [Ptolemys_Theorem] title = Ptolemy's Theorem author = Lukas Bulwahn notify = lukas.bulwahn@gmail.com date = 2016-08-07 topic = Mathematics/Geometry abstract = This entry provides an analytic proof to Ptolemy's Theorem using polar form transformation and trigonometric identities. In this formalization, we use ideas from John Harrison's HOL Light formalization and the proof sketch on the Wikipedia entry of Ptolemy's Theorem. This theorem is the 95th theorem of the Top 100 Theorems list. [Falling_Factorial_Sum] title = The Falling Factorial of a Sum author = Lukas Bulwahn topic = Mathematics/Combinatorics date = 2017-12-22 notify = lukas.bulwahn@gmail.com abstract = This entry shows that the falling factorial of a sum can be computed with an expression using binomial coefficients and the falling factorial of its summands. The entry provides three different proofs: a combinatorial proof, an induction proof and an algebraic proof using the Vandermonde identity. The three formalizations try to follow their informal presentations from a Mathematics Stack Exchange page as close as possible. The induction and algebraic formalization end up to be very close to their informal presentation, whereas the combinatorial proof first requires the introduction of list interleavings, and significant more detail than its informal presentation. [InfPathElimination] title = Infeasible Paths Elimination by Symbolic Execution Techniques: Proof of Correctness and Preservation of Paths author = Romain Aissat<>, Frederic Voisin<>, Burkhart Wolff notify = wolff@lri.fr date = 2016-08-18 topic = Computer science/Programming languages/Static analysis abstract = TRACER is a tool for verifying safety properties of sequential C programs. TRACER attempts at building a finite symbolic execution graph which over-approximates the set of all concrete reachable states and the set of feasible paths. We present an abstract framework for TRACER and similar CEGAR-like systems. The framework provides 1) a graph- transformation based method for reducing the feasible paths in control-flow graphs, 2) a model for symbolic execution, subsumption, predicate abstraction and invariant generation. In this framework we formally prove two key properties: correct construction of the symbolic states and preservation of feasible paths. The framework focuses on core operations, leaving to concrete prototypes to “fit in” heuristics for combining them. The accompanying paper (published in ITP 2016) can be found at https://www.lri.fr/∼wolff/papers/conf/2016-itp-InfPathsNSE.pdf. [Stirling_Formula] title = Stirling's formula author = Manuel Eberl notify = eberlm@in.tum.de date = 2016-09-01 topic = Mathematics/Analysis abstract =

This work contains a proof of Stirling's formula both for the factorial $n! \sim \sqrt{2\pi n} (n/e)^n$ on natural numbers and the real Gamma function $\Gamma(x)\sim \sqrt{2\pi/x} (x/e)^x$. The proof is based on work by Graham Jameson.
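
As a quick numerical sanity check (independent of the formal proof), one can compare $n!$ with the approximation in a few lines of Python:

    import math

    def stirling(n):
        # sqrt(2*pi*n) * (n/e)^n
        return math.sqrt(2 * math.pi * n) * (n / math.e) ** n

    for n in (5, 10, 50):
        print(n, math.factorial(n) / stirling(n))   # ratios tend to 1 as n grows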

This is then extended to the full asymptotic expansion $$\log\Gamma(z) = \big(z - \tfrac{1}{2}\big)\log z - z + \tfrac{1}{2}\log(2\pi) + \sum_{k=1}^{n-1} \frac{B_{k+1}}{k(k+1)} z^{-k}\\ {} - \frac{1}{n} \int_0^\infty B_n([t])(t + z)^{-n}\,\text{d}t$$ uniformly for all complex $z\neq 0$ in the cone $\text{arg}(z)\leq \alpha$ for any $\alpha\in(0,\pi)$, with which the above asymptotic relation for Γ is also extended to complex arguments.

[Lp] title = Lp spaces author = Sebastien Gouezel notify = sebastien.gouezel@univ-rennes1.fr date = 2016-10-05 topic = Mathematics/Analysis abstract = Lp is the space of functions whose p-th power is integrable. It is one of the most fundamental Banach spaces that is used in analysis and probability. We develop a framework for function spaces, and then implement the Lp spaces in this framework using the existing integration theory in Isabelle/HOL. Our development contains most fundamental properties of Lp spaces, notably the Hölder and Minkowski inequalities, completeness of Lp, duality, stability under almost sure convergence, multiplication of functions in Lp and Lq, stability under conditional expectation. [Berlekamp_Zassenhaus] title = The Factorization Algorithm of Berlekamp and Zassenhaus author = Jose Divasón , Sebastiaan Joosten , René Thiemann , Akihisa Yamada notify = rene.thiemann@uibk.ac.at date = 2016-10-14 topic = Mathematics/Algebra abstract =

We formalize the Berlekamp-Zassenhaus algorithm for factoring square-free integer polynomials in Isabelle/HOL. We further adapt an existing formalization of Yun’s square-free factorization algorithm to integer polynomials, and thus provide an efficient and certified factorization algorithm for arbitrary univariate polynomials.

The algorithm first performs a factorization in the prime field GF(p) and then performs computations in the integer ring modulo p^k, where both p and k are determined at runtime. Since a natural modeling of these structures via dependent types is not possible in Isabelle/HOL, we formalize the whole algorithm using Isabelle’s recent addition of local type definitions.

Through experiments we verify that our algorithm factors polynomials of degree 100 within seconds.
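
The following toy Python snippet is not the Berlekamp algorithm; it merely illustrates the kind of arithmetic performed in the first stage, namely working with a polynomial modulo a prime p and searching for its roots (and hence its linear factors) over GF(p):

    def eval_mod(coeffs, x, p):
        # Horner evaluation of an integer polynomial (highest degree first) modulo p
        acc = 0
        for c in coeffs:
            acc = (acc * x + c) % p
        return acc

    def linear_factors_mod(coeffs, p):
        # each root r modulo p corresponds to a linear factor (X - r) over GF(p)
        return [r for r in range(p) if eval_mod(coeffs, r, p) == 0]

    print(linear_factors_mod([1, 0, -1], 7))   # X^2 - 1 has roots 1 and 6 modulo 7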

[Allen_Calculus] title = Allen's Interval Calculus author = Fadoua Ghourabi <> notify = fadouaghourabi@gmail.com date = 2016-09-29 topic = Logic/General logic/Temporal logic, Mathematics/Order abstract = Allen’s interval calculus is a qualitative temporal representation of time events. Allen introduced 13 binary relations that describe all the possible arrangements between two events, i.e. intervals with non-zero finite length. The compositions are pertinent to reasoning about knowledge of time. In particular, a consistency problem of relation constraints is commonly solved with a guideline from these compositions. We formalize the relations together with an axiomatic system. We prove the validity of the 169 compositions of these relations. We also define nests as the sets of intervals that share a meeting point. We prove that nests give the ordering properties of points without introducing a new datatype for points. [1] J.F. Allen. Maintaining Knowledge about Temporal Intervals. In Commun. ACM, volume 26, pages 832–843, 1983. [2] J. F. Allen and P. J. Hayes. A Common-sense Theory of Time. In Proceedings of the 9th International Joint Conference on Artificial Intelligence (IJCAI’85), pages 528–531, 1985. [Source_Coding_Theorem] title = Source Coding Theorem author = Quentin Hibon , Lawrence C. Paulson notify = qh225@cl.cam.ac.uk date = 2016-10-19 topic = Mathematics/Probability theory abstract = This document contains a proof of the necessary condition on the code rate of a source code, namely that this code rate is bounded by the entropy of the source. This represents one half of Shannon's source coding theorem, which is itself an equivalence. [Buffons_Needle] title = Buffon's Needle Problem author = Manuel Eberl topic = Mathematics/Probability theory, Mathematics/Geometry date = 2017-06-06 notify = eberlm@in.tum.de abstract = In the 18th century, Georges-Louis Leclerc, Comte de Buffon posed and later solved the following problem, which is often called the first problem ever solved in geometric probability: Given a floor divided into vertical strips of the same width, what is the probability that a needle thrown onto the floor randomly will cross two strips? This entry formally defines the problem in the case where the needle's position is chosen uniformly at random in a single strip around the origin (which is equivalent to larger arrangements due to symmetry). It then provides proofs of the simple solution in the case where the needle's length is no greater than the width of the strips and the more complicated solution in the opposite case. [SPARCv8] title = A formal model for the SPARCv8 ISA and a proof of non-interference for the LEON3 processor author = Zhe Hou , David Sanan , Alwen Tiu , Yang Liu notify = zhe.hou@ntu.edu.sg, sanan@ntu.edu.sg date = 2016-10-19 topic = Computer science/Security, Computer science/Hardware abstract = We formalise the SPARCv8 instruction set architecture (ISA) which is used in processors such as LEON3. Our formalisation can be specialised to any SPARCv8 CPU; here we use LEON3 as a running example. Our model covers the operational semantics for all the instructions in the integer unit of the SPARCv8 architecture and it supports Isabelle code export, which effectively turns the Isabelle model into a SPARCv8 CPU simulator. We prove the language-based non-interference property for the LEON3 processor. Our model is based on a deterministic monad, which is a modified version of the non-deterministic monad from NICTA/l4v.
[Separata] title = Separata: Isabelle tactics for Separation Algebra author = Zhe Hou , David Sanan , Alwen Tiu , Rajeev Gore , Ranald Clouston notify = zhe.hou@ntu.edu.sg date = 2016-11-16 topic = Computer science/Programming languages/Logics, Tools abstract = We bring the labelled sequent calculus $LS_{PASL}$ for propositional abstract separation logic to Isabelle. The tactics given here are directly applied on an extension of the Separation Algebra in the AFP. In addition to the cancellative separation algebra, we further consider some useful properties in the heap model of separation logic, such as indivisible unit, disjointness, and cross-split. The tactics are essentially a proof search procedure for the calculus $LS_{PASL}$. We wrap the tactics in an Isabelle method called separata, and give a few examples of separation logic formulae which are provable by separata. [LOFT] title = LOFT — Verified Migration of Linux Firewalls to SDN author = Julius Michaelis , Cornelius Diekmann notify = isabelleopenflow@liftm.de date = 2016-10-21 topic = Computer science/Networks abstract = We present LOFT — Linux firewall OpenFlow Translator, a system that transforms the main routing table and FORWARD chain of iptables of a Linux-based firewall into a set of static OpenFlow rules. Our implementation is verified against a model of a simplified Linux-based router and we can directly show how much of the original functionality is preserved. [Stable_Matching] title = Stable Matching author = Peter Gammie notify = peteg42@gmail.com date = 2016-10-24 topic = Mathematics/Games and economics abstract = We mechanize proofs of several results from the matching with contracts literature, which generalize those of the classical two-sided matching scenarios that go by the name of stable marriage. Our focus is on game theoretic issues. Along the way we develop executable algorithms for computing optimal stable matches. [Modal_Logics_for_NTS] title = Modal Logics for Nominal Transition Systems author = Tjark Weber , Lars-Henrik Eriksson , Joachim Parrow , Johannes Borgström , Ramunas Gutkovas notify = tjark.weber@it.uu.se date = 2016-10-25 topic = Computer science/Concurrency/Process calculi, Logic/General logic/Modal logic abstract = We formalize a uniform semantic substrate for a wide variety of process calculi where states and action labels can be from arbitrary nominal sets. A Hennessy-Milner logic for these systems is defined, and proved adequate for bisimulation equivalence. A main novelty is the construction of an infinitary nominal data type to model formulas with (finitely supported) infinite conjunctions and actions that may contain binding names. The logic is generalized to treat different bisimulation variants such as early, late and open in a systematic way. extra-history = Change history: [2017-01-29]: Formalization of weak bisimilarity added (revision c87cc2057d9c) [Abs_Int_ITP2012] title = Abstract Interpretation of Annotated Commands author = Tobias Nipkow notify = nipkow@in.tum.de date = 2016-11-23 topic = Computer science/Programming languages/Static analysis abstract = This is the Isabelle formalization of the material described in the eponymous ITP 2012 paper. It develops a generic abstract interpreter for a while-language, including widening and narrowing. The collecting semantics and the abstract interpreter operate on annotated commands: the program is represented as a syntax tree with the semantic information directly embedded, without auxiliary labels.
The aim of the formalization is simplicity, not efficiency or precision. This is motivated by the inclusion of the material in a theorem prover based course on semantics. A similar (but more polished) development is covered in the book Concrete Semantics. [Complx] title = COMPLX: A Verification Framework for Concurrent Imperative Programs author = Sidney Amani<>, June Andronick<>, Maksym Bortin<>, Corey Lewis<>, Christine Rizkallah<>, Joseph Tuong<> notify = sidney.amani@data61.csiro.au, corey.lewis@data61.csiro.au date = 2016-11-29 topic = Computer science/Programming languages/Logics, Computer science/Programming languages/Language definitions abstract = We propose a concurrency reasoning framework for imperative programs, based on the Owicki-Gries (OG) foundational shared-variable concurrency method. Our framework combines the approaches of Hoare-Parallel, a formalisation of OG in Isabelle/HOL for a simple while-language, and Simpl, a generic imperative language embedded in Isabelle/HOL, allowing formal reasoning on C programs. We define the Complx language, extending the syntax and semantics of Simpl with support for parallel composition and synchronisation. We additionally define an OG logic, which we prove sound w.r.t. the semantics, and a verification condition generator, both supporting involved low-level imperative constructs such as function calls and abrupt termination. We illustrate our framework on an example that features exceptions, guards and function calls. We aim to then target concurrent operating systems, such as the interruptible eChronos embedded operating system for which we already have a model-level OG proof using Hoare-Parallel. extra-history = Change history: [2017-01-13]: Improve VCG for nested parallels and sequential sections (revision 30739dbc3dcb) [Paraconsistency] title = Paraconsistency author = Anders Schlichtkrull , Jørgen Villadsen topic = Logic/General logic/Paraconsistent logics date = 2016-12-07 notify = andschl@dtu.dk, jovi@dtu.dk abstract = Paraconsistency is about handling inconsistency in a coherent way. In classical and intuitionistic logic everything follows from an inconsistent theory. A paraconsistent logic avoids the explosion. Quite a few applications in computer science and engineering are discussed in the Intelligent Systems Reference Library Volume 110: Towards Paraconsistent Engineering (Springer 2016). We formalize a paraconsistent many-valued logic that we motivated and described in a special issue on logical approaches to paraconsistency (Journal of Applied Non-Classical Logics 2005). We limit ourselves to the propositional fragment of the higher-order logic. The logic is based on so-called key equalities and has a countably infinite number of truth values. We prove theorems in the logic using the definition of validity. We verify truth tables and also counterexamples for non-theorems. We prove meta-theorems about the logic and finally we investigate a case study. [Proof_Strategy_Language] title = Proof Strategy Language author = Yutaka Nagashima<> topic = Tools date = 2016-12-20 notify = Yutaka.Nagashima@data61.csiro.au abstract = Isabelle includes various automatic tools for finding proofs under certain conditions. However, for each conjecture, knowing which automation to use, and how to tweak its parameters, is currently labour intensive. We have developed a language, PSL, designed to capture high level proof strategies. 
PSL offloads the construction of human-readable, fast-to-replay proof scripts to automatic search, making use of search-time information about each conjecture. Our preliminary evaluations show that PSL reduces the labour cost of interactive theorem proving. This submission contains the implementation of PSL and an example theory file, Example.thy, showing how to write proof strategies in PSL. [Concurrent_Ref_Alg] title = Concurrent Refinement Algebra and Rely Quotients author = Julian Fell , Ian J. Hayes , Andrius Velykis topic = Computer science/Concurrency date = 2016-12-30 notify = Ian.Hayes@itee.uq.edu.au abstract = The concurrent refinement algebra developed here is designed to provide a foundation for rely/guarantee reasoning about concurrent programs. The algebra builds on a complete lattice of commands by providing sequential composition, parallel composition and a novel weak conjunction operator. The weak conjunction operator coincides with the lattice supremum providing its arguments are non-aborting, but aborts if either of its arguments do. Weak conjunction provides an abstract version of a guarantee condition as a guarantee process. We distinguish between models that distribute sequential composition over non-deterministic choice from the left (referred to as being conjunctive in the refinement calculus literature) and those that don't. Least and greatest fixed points of monotone functions are provided to allow recursion and iteration operators to be added to the language. Additional iteration laws are available for conjunctive models. The rely quotient of processes c and i is the process that, if executed in parallel with i, implements c. It represents an abstract version of a rely condition generalised to a process. [FOL_Harrison] title = First-Order Logic According to Harrison author = Alexander Birch Jensen , Anders Schlichtkrull , Jørgen Villadsen topic = Logic/General logic/Mechanization of proofs date = 2017-01-01 notify = aleje@dtu.dk, andschl@dtu.dk, jovi@dtu.dk abstract =

We present a certified declarative first-order prover with equality based on John Harrison's Handbook of Practical Logic and Automated Reasoning, Cambridge University Press, 2009. ML code reflection is used such that the entire prover can be executed within Isabelle as a very simple interactive proof assistant. As examples we consider Pelletier's problems 1-46.

Reference: Programming and Verifying a Declarative First-Order Prover in Isabelle/HOL. Alexander Birch Jensen, John Bruntse Larsen, Anders Schlichtkrull & Jørgen Villadsen. AI Communications 31:281-299, 2018. https://content.iospress.com/articles/ai-communications/aic764

See also: Students' Proof Assistant (SPA). https://github.com/logic-tools/spa

extra-history = Change history: [2018-07-21]: Proof of Pelletier's problem 34 (Andrews's Challenge) thanks to Asta Halkjær From. [Bernoulli] title = Bernoulli Numbers author = Lukas Bulwahn, Manuel Eberl topic = Mathematics/Analysis, Mathematics/Number theory date = 2017-01-24 notify = eberlm@in.tum.de abstract =

Bernoulli numbers were first discovered in the closed-form expansion of the sum $1^m + 2^m + \ldots + n^m$ for a fixed $m$ and appear in many other places. This entry provides three different definitions for them: a recursive one, an explicit one, and one through their exponential generating function.

In addition, we prove some basic facts, e.g. their relation to sums of powers of integers and that all odd Bernoulli numbers except the first are zero, and some advanced facts like their relationship to the Riemann zeta function on positive even integers.
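
Concretely, the relationship to the Riemann zeta function mentioned above is the classical identity $$\zeta(2n) \;=\; \frac{(-1)^{n+1}\, B_{2n}\, (2\pi)^{2n}}{2\,(2n)!},$$ which for $n = 1$ and $B_2 = \tfrac{1}{6}$ gives the familiar value $\zeta(2) = \pi^2/6$. (This is the textbook statement, recalled here for orientation; the exact formulation in the entry may differ in presentation.)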

We also prove the correctness of the Akiyama–Tanigawa algorithm for computing Bernoulli numbers with reasonable efficiency, and we define the periodic Bernoulli polynomials (which appear e.g. in the Euler–MacLaurin summation formula and the expansion of the log-Gamma function) and prove their basic properties.
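
An unverified Python sketch of the Akiyama–Tanigawa scheme, using exact rational arithmetic, is shown below; note that this variant produces the convention with $B_1 = +\tfrac{1}{2}$, which is the only value on which the common sign conventions disagree.

    from fractions import Fraction

    def bernoulli(n):
        # Akiyama-Tanigawa transform over exact rationals
        a = [Fraction(0)] * (n + 1)
        for m in range(n + 1):
            a[m] = Fraction(1, m + 1)
            for j in range(m, 0, -1):
                a[j - 1] = j * (a[j - 1] - a[j])
        return a[0]

    print([bernoulli(n) for n in range(9)])
    # [1, 1/2, 1/6, 0, -1/30, 0, 1/42, 0, -1/30]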

[Stone_Relation_Algebras] title = Stone Relation Algebras author = Walter Guttmann topic = Mathematics/Algebra date = 2017-02-07 notify = walter.guttmann@canterbury.ac.nz abstract = We develop Stone relation algebras, which generalise relation algebras by replacing the underlying Boolean algebra structure with a Stone algebra. We show that finite matrices over extended real numbers form an instance. As a consequence, relation-algebraic concepts and methods can be used for reasoning about weighted graphs. We also develop a fixpoint calculus and apply it to compare different definitions of reflexive-transitive closures in semirings. [Stone_Kleene_Relation_Algebras] title = Stone-Kleene Relation Algebras author = Walter Guttmann topic = Mathematics/Algebra date = 2017-07-06 notify = walter.guttmann@canterbury.ac.nz abstract = We develop Stone-Kleene relation algebras, which expand Stone relation algebras with a Kleene star operation to describe reachability in weighted graphs. Many properties of the Kleene star arise as a special case of a more general theory of iteration based on Conway semirings extended by simulation axioms. This includes several theorems representing complex program transformations. We formally prove the correctness of Conway's automata-based construction of the Kleene star of a matrix. We prove numerous results useful for reasoning about weighted graphs. [Abstract_Soundness] title = Abstract Soundness author = Jasmin Christian Blanchette , Andrei Popescu , Dmitriy Traytel topic = Logic/Proof theory date = 2017-02-10 notify = jasmin.blanchette@gmail.com abstract = A formalized coinductive account of the abstract development of Brotherston, Gorogiannis, and Petersen [APLAS 2012], in a slightly more general form since we work with arbitrary infinite proofs, which may be acyclic. This work is described in detail in an article by the authors, published in 2017 in the Journal of Automated Reasoning. The abstract proof can be instantiated for various formalisms, including first-order logic with inductive predicates. [Differential_Dynamic_Logic] title = Differential Dynamic Logic author = Brandon Bohrer topic = Logic/General logic/Modal logic, Computer science/Programming languages/Logics date = 2017-02-13 notify = bbohrer@cs.cmu.edu abstract = We formalize differential dynamic logic, a logic for proving properties of hybrid systems. The proof calculus in this formalization is based on the uniform substitution principle. We show it is sound with respect to our denotational semantics, which provides increased confidence in the correctness of the KeYmaera X theorem prover based on this calculus. As an application, we include a proof term checker embedded in Isabelle/HOL with several example proofs. Published in: Brandon Bohrer, Vincent Rahli, Ivana Vukotic, Marcus Völp, André Platzer: Formally verified differential dynamic logic. CPP 2017. [Syntax_Independent_Logic] title = Syntax-Independent Logic Infrastructure author = Andrei Popescu , Dmitriy Traytel topic = Logic/Proof theory date = 2020-09-16 notify = a.popescu@sheffield.ac.uk, traytel@di.ku.dk abstract = We formalize a notion of logic whose terms and formulas are kept abstract. In particular, logical connectives, substitution, free variables, and provability are not defined, but characterized by their general properties as locale assumptions. Based on this abstract characterization, we develop further reusable reasoning infrastructure. 
For example, we define parallel substitution (along with proving its characterizing theorems) from single-point substitution. Similarly, we develop a natural deduction style proof system starting from the abstract Hilbert-style one. These one-time efforts benefit different concrete logics satisfying our locales' assumptions. We instantiate the syntax-independent logic infrastructure to Robinson arithmetic (also known as Q) in the AFP entry Robinson_Arithmetic and to hereditarily finite set theory in the AFP entries Goedel_HFSet_Semantic and Goedel_HFSet_Semanticless, which are part of our formalization of Gödel's Incompleteness Theorems described in our CADE-27 paper A Formally Verified Abstract Account of Gödel's Incompleteness Theorems. [Goedel_Incompleteness] title = An Abstract Formalization of Gödel's Incompleteness Theorems author = Andrei Popescu , Dmitriy Traytel topic = Logic/Proof theory date = 2020-09-16 notify = a.popescu@sheffield.ac.uk, traytel@di.ku.dk abstract = We present an abstract formalization of Gödel's incompleteness theorems. We analyze sufficient conditions for the theorems' applicability to a partially specified logic. Our abstract perspective enables a comparison between alternative approaches from the literature. These include Rosser's variation of the first theorem, Jeroslow's variation of the second theorem, and the Swierczkowski–Paulson semantics-based approach. This AFP entry is the main entry point to the results described in our CADE-27 paper A Formally Verified Abstract Account of Gödel's Incompleteness Theorems. As part of our abstract formalization's validation, we instantiate our locales twice in the separate AFP entries Goedel_HFSet_Semantic and Goedel_HFSet_Semanticless. [Goedel_HFSet_Semantic] title = From Abstract to Concrete Gödel's Incompleteness Theorems—Part I author = Andrei Popescu , Dmitriy Traytel topic = Logic/Proof theory date = 2020-09-16 notify = a.popescu@sheffield.ac.uk, traytel@di.ku.dk abstract = We validate an abstract formulation of Gödel's First and Second Incompleteness Theorems from a separate AFP entry by instantiating them to the case of finite sound extensions of the Hereditarily Finite (HF) Set theory, i.e., FOL theories extending the HF Set theory with a finite set of axioms that are sound in the standard model. The concrete results had been previously formalised in an AFP entry by Larry Paulson; our instantiation reuses the infrastructure developed in that entry. [Goedel_HFSet_Semanticless] title = From Abstract to Concrete Gödel's Incompleteness Theorems—Part II author = Andrei Popescu , Dmitriy Traytel topic = Logic/Proof theory date = 2020-09-16 notify = a.popescu@sheffield.ac.uk, traytel@di.ku.dk abstract = We validate an abstract formulation of Gödel's Second Incompleteness Theorem from a separate AFP entry by instantiating it to the case of finite consistent extensions of the Hereditarily Finite (HF) Set theory, i.e., consistent FOL theories extending the HF Set theory with a finite set of axioms. The instantiation draws heavily on infrastructure previously developed by Larry Paulson in his direct formalisation of the concrete result. It strengthens Paulson's formalization of Gödel's Second from that entry by not assuming soundness, and in fact not relying on any notion of model or semantic interpretation. 
The strengthening was obtained by first replacing some of Paulson’s semantic arguments with proofs within his HF calculus, and then plugging in some of Paulson's (modified) lemmas to instantiate our soundness-free Gödel's Second locale. [Robinson_Arithmetic] title = Robinson Arithmetic author = Andrei Popescu , Dmitriy Traytel topic = Logic/Proof theory date = 2020-09-16 notify = a.popescu@sheffield.ac.uk, traytel@di.ku.dk abstract = We instantiate our syntax-independent logic infrastructure developed in a separate AFP entry to the FOL theory of Robinson arithmetic (also known as Q). The latter was formalised using Nominal Isabelle by adapting Larry Paulson’s formalization of the Hereditarily Finite Set theory. [Elliptic_Curves_Group_Law] title = The Group Law for Elliptic Curves author = Stefan Berghofer topic = Computer science/Security/Cryptography date = 2017-02-28 notify = berghofe@in.tum.de abstract = We prove the group law for elliptic curves in Weierstrass form over fields of characteristic greater than 2. In addition to affine coordinates, we also formalize projective coordinates, which allow for more efficient computations. By specializing the abstract formalization to prime fields, we can apply the curve operations to parameters used in standard security protocols. [Example-Submission] title = Example Submission author = Gerwin Klein topic = Mathematics/Analysis, Mathematics/Number theory date = 2004-02-25 notify = kleing@cse.unsw.edu.au abstract =

This is an example submission to the Archive of Formal Proofs. It shows submission requirements and explains the structure of a simple typical submission.

Note that you can use HTML tags and LaTeX formulae like $\sum_{n=1}^\infty \frac{1}{n^2} = \frac{\pi^2}{6}$ in the abstract. Display formulae like $$ \int_0^1 x^{-x}\,\text{d}x = \sum_{n=1}^\infty n^{-n}$$ are also possible. Please read the submission guidelines before using this.

extra-no-index = no-index: true [CRDT] title = A framework for establishing Strong Eventual Consistency for Conflict-free Replicated Datatypes author = Victor B. F. Gomes , Martin Kleppmann, Dominic P. Mulligan, Alastair R. Beresford topic = Computer science/Algorithms/Distributed, Computer science/Data structures date = 2017-07-07 notify = vb358@cam.ac.uk, dominic.p.mulligan@googlemail.com abstract = In this work, we focus on the correctness of Conflict-free Replicated Data Types (CRDTs), a class of algorithm that provides strong eventual consistency guarantees for replicated data. We develop a modular and reusable framework for verifying the correctness of CRDT algorithms. We avoid correctness issues that have dogged previous mechanised proofs in this area by including a network model in our formalisation, and proving that our theorems hold in all possible network behaviours. Our axiomatic network model is a standard abstraction that accurately reflects the behaviour of real-world computer networks. Moreover, we identify an abstract convergence theorem, a property of order relations, which provides a formal definition of strong eventual consistency. We then obtain the first machine-checked correctness theorems for three concrete CRDTs: the Replicated Growable Array, the Observed-Remove Set, and an Increment-Decrement Counter. [HOLCF-Prelude] title = HOLCF-Prelude author = Joachim Breitner, Brian Huffman<>, Neil Mitchell<>, Christian Sternagel topic = Computer science/Functional programming date = 2017-07-15 notify = c.sternagel@gmail.com, joachim@cis.upenn.edu, hupel@in.tum.de abstract = The Isabelle/HOLCF-Prelude is a formalization of a large part of Haskell's standard prelude in Isabelle/HOLCF. We use it to prove the correctness of the Eratosthenes' Sieve, in its self-referential implementation commonly used to showcase Haskell's laziness; prove correctness of GHC's "fold/build" rule and related rewrite rules; and certify a number of hints suggested by HLint. [Decl_Sem_Fun_PL] title = Declarative Semantics for Functional Languages author = Jeremy Siek topic = Computer science/Programming languages date = 2017-07-21 notify = jsiek@indiana.edu abstract = We present a semantics for an applied call-by-value lambda-calculus that is compositional, extensional, and elementary. We present four different views of the semantics: 1) as a relational (big-step) semantics that is not operational but instead declarative, 2) as a denotational semantics that does not use domain theory, 3) as a non-deterministic interpreter, and 4) as a variant of the intersection type systems of the Torino group. We prove that the semantics is correct by showing that it is sound and complete with respect to operational semantics on programs and that is sound with respect to contextual equivalence. We have not yet investigated whether it is fully abstract. We demonstrate that this approach to semantics is useful with three case studies. First, we use the semantics to prove correctness of a compiler optimization that inlines function application. Second, we adapt the semantics to the polymorphic lambda-calculus extended with general recursion and prove semantic type soundness. Third, we adapt the semantics to the call-by-value lambda-calculus with mutable references.
The paper that accompanies these Isabelle theories is available on arXiv. [DynamicArchitectures] title = Dynamic Architectures author = Diego Marmsoler topic = Computer science/System description languages date = 2017-07-28 notify = diego.marmsoler@tum.de abstract = The architecture of a system describes the system's overall organization into components and connections between those components. With the emergence of mobile computing, dynamic architectures have become increasingly important. In such architectures, components may appear or disappear, and connections may change over time. In the following we mechanize a theory of dynamic architectures and verify the soundness of a corresponding calculus. Therefore, we first formalize the notion of configuration traces as a model for dynamic architectures. Then, the behavior of single components is formalized in terms of behavior traces and an operator is introduced and studied to extract the behavior of a single component out of a given configuration trace. Then, behavior trace assertions are introduced as a temporal specification technique to specify behavior of components. Reasoning about component behavior in a dynamic context is formalized in terms of a calculus for dynamic architectures. Finally, the soundness of the calculus is verified by introducing an alternative interpretation for behavior trace assertions over configuration traces and proving the rules of the calculus. Since projection may lead to finite as well as infinite behavior traces, they are formalized in terms of coinductive lists. Thus, our theory is based on Lochbihler's formalization of coinductive lists. The theory may be applied to verify properties for dynamic architectures. extra-history = Change history: [2018-06-07]: adding logical operators to specify configuration traces (revision 09178f08f050)
[Stewart_Apollonius] title = Stewart's Theorem and Apollonius' Theorem author = Lukas Bulwahn topic = Mathematics/Geometry date = 2017-07-31 notify = lukas.bulwahn@gmail.com abstract = This entry formalizes the two geometric theorems, Stewart's and Apollonius' theorem. Stewart's Theorem relates the length of a triangle's cevian to the lengths of the triangle's two sides. Apollonius' Theorem is a specialisation of Stewart's theorem, restricting the cevian to be the median. The proof applies the law of cosines, some basic geometric facts about triangles and then simply transforms the terms algebraically to yield the conjectured relation. The formalization in Isabelle can closely follow the informal proofs described in the Wikipedia articles of those two theorems. [LambdaMu] title = The LambdaMu-calculus author = Cristina Matache , Victor B. F. Gomes , Dominic P. Mulligan topic = Computer science/Programming languages/Lambda calculi, Logic/General logic/Lambda calculus date = 2017-08-16 notify = victorborgesfg@gmail.com, dominic.p.mulligan@googlemail.com abstract = The propositions-as-types correspondence is ordinarily presented as linking the metatheory of typed λ-calculi and the proof theory of intuitionistic logic. Griffin observed that this correspondence could be extended to classical logic through the use of control operators. This observation set off a flurry of further research, leading to the development of Parigots λμ-calculus. In this work, we formalise λμ- calculus in Isabelle/HOL and prove several metatheoretical properties such as type preservation and progress. [Orbit_Stabiliser] title = Orbit-Stabiliser Theorem with Application to Rotational Symmetries author = Jonas Rädle topic = Mathematics/Algebra date = 2017-08-20 notify = jonas.raedle@tum.de abstract = The Orbit-Stabiliser theorem is a basic result in the algebra of groups that factors the order of a group into the sizes of its orbits and stabilisers. We formalize the notion of a group action and the related concepts of orbits and stabilisers. This allows us to prove the orbit-stabiliser theorem. In the second part of this work, we formalize the tetrahedral group and use the orbit-stabiliser theorem to prove that there are twelve (orientation-preserving) rotations of the tetrahedron. [PLM] title = Representation and Partial Automation of the Principia Logico-Metaphysica in Isabelle/HOL author = Daniel Kirchner topic = Logic/Philosophical aspects date = 2017-09-17 notify = daniel@ekpyron.org abstract =

We present an embedding of the second-order fragment of the Theory of Abstract Objects as described in Edward Zalta's upcoming work Principia Logico-Metaphysica (PLM) in the automated reasoning framework Isabelle/HOL. The Theory of Abstract Objects is a metaphysical theory that reifies property patterns, as they occur for example in the abstract reasoning of mathematics, as abstract objects, and provides an axiomatic framework that allows one to reason about these objects. It thereby serves as a fundamental metaphysical theory that can be used to axiomatize and describe a wide range of philosophical objects, such as Platonic forms or Leibniz' concepts, and has the ambition to function as a foundational theory of mathematics. The target theory of our embedding, as described in chapters 7-9 of PLM, employs a modal relational type theory as its logical foundation, for which a representation in functional type theory is known to be challenging.

Nevertheless, we arrive at a functioning representation of the theory in the functional logic of Isabelle/HOL based on a semantical representation of an Aczel-model of the theory. Based on this representation, we construct an implementation of the deductive system of PLM which allows theorems of PLM to be found and verified both automatically and interactively.

Our work thereby supports the concept of shallow semantical embeddings of logical systems in HOL as a universal tool for logical reasoning as promoted by Christoph Benzmüller.

The most notable result of the presented work is the discovery of a previously unknown paradox in the formulation of the Theory of Abstract Objects. The embedding of the theory in Isabelle/HOL played a vital part in this discovery. Furthermore it was possible to immediately offer several options to modify the theory to guarantee its consistency. Thereby our work could provide a significant contribution to the development of a proper grounding for object theory.

[KD_Tree] title = Multidimensional Binary Search Trees author = Martin Rau<> topic = Computer science/Data structures date = 2019-05-30 notify = martin.rau@tum.de, mrtnrau@googlemail.com abstract = This entry provides a formalization of multidimensional binary trees, also known as k-d trees. It includes a balanced build algorithm as well as the nearest neighbor algorithm and the range search algorithm. It is based on the papers Multidimensional binary search trees used for associative searching and An Algorithm for Finding Best Matches in Logarithmic Expected Time. extra-history = Change history: [2020-15-04]: Change representation of k-dimensional points from 'list' to HOL-Analysis.Finite_Cartesian_Product 'vec'. Update proofs to incorporate HOL-Analysis 'dist' and 'cbox' primitives. [Closest_Pair_Points] title = Closest Pair of Points Algorithms author = Martin Rau , Tobias Nipkow topic = Computer science/Algorithms/Geometry date = 2020-01-13 notify = martin.rau@tum.de, nipkow@in.tum.de abstract = This entry provides two related verified divide-and-conquer algorithms solving the fundamental Closest Pair of Points problem in Computational Geometry. Functional correctness and the optimal running time of O(n log n) are proved. Executable code is generated which is empirically competitive with handwritten reference implementations. extra-history = Change history: [2020-14-04]: Incorporate Time_Monad of the AFP entry Root_Balanced_Tree. [Approximation_Algorithms] title = Verified Approximation Algorithms author = Robin Eßmann , Tobias Nipkow , Simon Robillard topic = Computer science/Algorithms/Approximation date = 2020-01-16 notify = nipkow@in.tum.de abstract = We present the first formal verification of approximation algorithms for NP-complete optimization problems: vertex cover, set cover, independent set, load balancing, and bin packing. The proofs correct incompletenesses in existing proofs and improve the approximation ratio in one case. A detailed description of our work has been published in the proceedings of IJCAR 2020. [Diophantine_Eqns_Lin_Hom] title = Homogeneous Linear Diophantine Equations author = Florian Messner , Julian Parsert , Jonas Schöpf , Christian Sternagel topic = Computer science/Algorithms/Mathematical, Mathematics/Number theory, Tools license = LGPL date = 2017-10-14 notify = c.sternagel@gmail.com, julian.parsert@gmail.com abstract = We formalize the theory of homogeneous linear diophantine equations, focusing on two main results: (1) an abstract characterization of minimal complete sets of solutions, and (2) an algorithm computing them. Both, the characterization and the algorithm are based on previous work by Huet. Our starting point is a simple but inefficient variant of Huet's lexicographic algorithm incorporating improved bounds due to Clausen and Fortenbacher. We proceed by proving its soundness and completeness. Finally, we employ code equations to obtain a reasonably efficient implementation. Thus, we provide a formally verified solver for homogeneous linear diophantine equations. [Winding_Number_Eval] title = Evaluate Winding Numbers through Cauchy Indices author = Wenda Li topic = Mathematics/Analysis date = 2017-10-17 notify = wl302@cam.ac.uk, liwenda1990@hotmail.com abstract = In complex analysis, the winding number measures the number of times a path (counterclockwise) winds around a point, while the Cauchy index can approximate how the path winds. 
This entry provides a formalisation of the Cauchy index, which is then shown to be related to the winding number. In addition, this entry also offers a tactic that enables users to evaluate the winding number by calculating Cauchy indices. [Count_Complex_Roots] title = Count the Number of Complex Roots author = Wenda Li topic = Mathematics/Analysis date = 2017-10-17 notify = wl302@cam.ac.uk, liwenda1990@hotmail.com abstract = Based on evaluating Cauchy indices through remainder sequences, this entry provides an effective procedure to count the number of complex roots (with multiplicity) of a polynomial within a rectangle box or a half-plane. Potential applications of this entry include certified complex root isolation (of a polynomial) and testing the Routh-Hurwitz stability criterion (i.e., to check whether all the roots of some characteristic polynomial have negative real parts). [Buchi_Complementation] title = Büchi Complementation author = Julian Brunner topic = Computer science/Automata and formal languages date = 2017-10-19 notify = brunnerj@in.tum.de abstract = This entry provides a verified implementation of rank-based Büchi Complementation. The verification is done in three steps:
  1. Definition of odd rankings and proof that an automaton rejects a word iff there exists an odd ranking for it.
  2. Definition of the complement automaton and proof that it accepts exactly those words for which there is an odd ranking.
  3. Verified implementation of the complement automaton using the Isabelle Collections Framework.
[Transition_Systems_and_Automata] title = Transition Systems and Automata author = Julian Brunner topic = Computer science/Automata and formal languages date = 2017-10-19 notify = brunnerj@in.tum.de abstract = This entry provides a very abstract theory of transition systems that can be instantiated to express various types of automata. A transition system is typically instantiated by providing a set of initial states, a predicate for enabled transitions, and a transition execution function. From this, it defines the concepts of finite and infinite paths as well as the set of reachable states, among other things. Many useful theorems, from basic path manipulation rules to coinduction and run construction rules, are proven in this abstract transition system context. The library comes with instantiations for DFAs, NFAs, and Büchi automata. [Kuratowski_Closure_Complement] title = The Kuratowski Closure-Complement Theorem author = Peter Gammie , Gianpaolo Gioiosa<> topic = Mathematics/Topology date = 2017-10-26 notify = peteg42@gmail.com abstract = We discuss a topological curiosity discovered by Kuratowski (1922): the fact that the number of distinct operators on a topological space generated by compositions of closure and complement never exceeds 14, and is exactly 14 in the case of R. In addition, we prove a theorem due to Chagrov (1982) that classifies topological spaces according to the number of such operators they support. [Hybrid_Multi_Lane_Spatial_Logic] title = Hybrid Multi-Lane Spatial Logic author = Sven Linker topic = Logic/General logic/Modal logic date = 2017-11-06 notify = s.linker@liverpool.ac.uk abstract = We present a semantic embedding of a spatio-temporal multi-modal logic, specifically defined to reason about motorway traffic, into Isabelle/HOL. The semantic model is an abstraction of a motorway, emphasising local spatial properties, and parameterised by the types of sensors deployed in the vehicles. We use the logic to define controller constraints to ensure safety, i.e., the absence of collisions on the motorway. After proving safety with a restrictive definition of sensors, we relax these assumptions and show how to amend the controller constraints to still guarantee safety. [Dirichlet_L] title = Dirichlet L-Functions and Dirichlet's Theorem author = Manuel Eberl topic = Mathematics/Number theory, Mathematics/Algebra date = 2017-12-21 notify = eberlm@in.tum.de abstract =

This article provides a formalisation of Dirichlet characters and Dirichlet L-functions including proofs of their basic properties – most notably their analyticity, their areas of convergence, and their non-vanishing for ℜ(s) ≥ 1. All of this is built in a very high-level style using Dirichlet series. The proof of the non-vanishing follows a very short and elegant proof by Newman, which we attempt to reproduce faithfully in a similar level of abstraction in Isabelle.

This also leads to a relatively short proof of Dirichlet’s Theorem, which states that, if h and n are coprime, there are infinitely many primes p with p ≡ h (mod n).
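
For orientation, the objects involved have the following standard definitions: a Dirichlet character $\chi$ modulo $n$ is a completely multiplicative, $n$-periodic function on the integers that is non-zero exactly on the integers coprime to $n$, and its associated L-function is $$L(s,\chi) \;=\; \sum_{k=1}^{\infty} \frac{\chi(k)}{k^{s}},$$ which converges absolutely for ℜ(s) > 1. These are textbook definitions recalled here for convenience; the precise formulations used in the entry may differ in presentation.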

[Symmetric_Polynomials] title = Symmetric Polynomials author = Manuel Eberl topic = Mathematics/Algebra date = 2018-09-25 notify = eberlm@in.tum.de abstract =

A symmetric polynomial is a polynomial in variables $X_1,\ldots,X_n$ that does not discriminate between its variables, i.e. it is invariant under any permutation of them. These polynomials are important in the study of the relationship between the coefficients of a univariate polynomial and its roots in its algebraic closure.

This article provides a definition of symmetric polynomials and the elementary symmetric polynomials $e_1,\ldots,e_n$ and proofs of their basic properties, including three notable ones:

  • Vieta's formula, which gives an explicit expression for the $k$-th coefficient of a univariate monic polynomial in terms of its roots $x_1,\ldots,x_n$, namely $c_k = (-1)^{n-k} e_{n-k}(x_1,\ldots,x_n)$ (a small worked instance appears below).
  • Second, the Fundamental Theorem of Symmetric Polynomials, which states that any symmetric polynomial is itself a uniquely determined polynomial combination of the elementary symmetric polynomials.
  • Third, as a corollary of the previous two, that given a polynomial over some ring R, any symmetric polynomial combination of its roots is also in R even when the roots are not.

Both the symmetry property itself and the witness for the Fundamental Theorem are executable.
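
As a small worked instance of the results above (recalled here for orientation only): for $n = 3$ the elementary symmetric polynomials are $e_1 = x_1 + x_2 + x_3$, $e_2 = x_1 x_2 + x_1 x_3 + x_2 x_3$ and $e_3 = x_1 x_2 x_3$; Vieta's formula then reads $$(X - x_1)(X - x_2)(X - x_3) \;=\; X^3 - e_1 X^2 + e_2 X - e_3,$$ and, in the spirit of the Fundamental Theorem, the symmetric polynomial $x_1^2 + x_2^2 + x_3^2$ can be rewritten as $e_1^2 - 2 e_2$.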

[Taylor_Models] title = Taylor Models author = Christoph Traut<>, Fabian Immler topic = Computer science/Algorithms/Mathematical, Computer science/Data structures, Mathematics/Analysis, Mathematics/Algebra date = 2018-01-08 notify = immler@in.tum.de abstract = We present a formally verified implementation of multivariate Taylor models. Taylor models are a form of rigorous polynomial approximation, consisting of an approximation polynomial based on Taylor expansions, combined with a rigorous bound on the approximation error. Taylor models were introduced as a tool to mitigate the dependency problem of interval arithmetic. Our implementation automatically computes Taylor models for the class of elementary functions, expressed by composition of arithmetic operations and basic functions like exp, sin, or square root. [Green] title = An Isabelle/HOL formalisation of Green's Theorem author = Mohammad Abdulaziz , Lawrence C. Paulson topic = Mathematics/Analysis date = 2018-01-11 notify = mohammad.abdulaziz8@gmail.com, lp15@cam.ac.uk abstract = We formalise a statement of Green’s theorem—the first formalisation to our knowledge—in Isabelle/HOL. The theorem statement that we formalise is enough for most applications, especially in physics and engineering. Our formalisation is made possible by a novel proof that avoids the ubiquitous line integral cancellation argument. This eliminates the need to formalise orientations and region boundaries explicitly with respect to the outwards-pointing normal vector. Instead we appeal to a homological argument about equivalences between paths. [AI_Planning_Languages_Semantics] title = AI Planning Languages Semantics author = Mohammad Abdulaziz , Peter Lammich topic = Computer science/Artificial intelligence date = 2020-10-29 notify = mohammad.abdulaziz8@gmail.com abstract = This is an Isabelle/HOL formalisation of the semantics of the multi-valued planning tasks language that is used by the planning system Fast-Downward, the STRIPS fragment of the Planning Domain Definition Language (PDDL), and the STRIPS soundness meta-theory developed by Vladimir Lifschitz. It also contains formally verified checkers for checking the well-formedness of problems specified in either language as well the correctness of potential solutions. The formalisation in this entry was described in an earlier publication. [Verified_SAT_Based_AI_Planning] title = Verified SAT-Based AI Planning author = Mohammad Abdulaziz , Friedrich Kurz <> topic = Computer science/Artificial intelligence date = 2020-10-29 notify = mohammad.abdulaziz8@gmail.com abstract = We present an executable formally verified SAT encoding of classical AI planning that is based on the encodings by Kautz and Selman and the one by Rintanen et al. The encoding was experimentally tested and shown to be usable for reasonably sized standard AI planning benchmarks. We also use it as a reference to test a state-of-the-art SAT-based planner, showing that it sometimes falsely claims that problems have no solutions of certain lengths. The formalisation in this submission was described in an independent publication. [Gromov_Hyperbolicity] title = Gromov Hyperbolicity author = Sebastien Gouezel<> topic = Mathematics/Geometry date = 2018-01-16 notify = sebastien.gouezel@univ-rennes1.fr abstract = A geodesic metric space is Gromov hyperbolic if all its geodesic triangles are thin, i.e., every side is contained in a fixed thickening of the two other sides. 
While this definition looks innocuous, it has proved extremely important and versatile in modern geometry since its introduction by Gromov. We formalize the basic classical properties of Gromov hyperbolic spaces, notably the Morse lemma asserting that quasigeodesics are close to geodesics, and the invariance of hyperbolicity under quasi-isometries. We define and study the Gromov boundary and its associated distance, and prove that a quasi-isometry between Gromov hyperbolic spaces extends to a homeomorphism of the boundaries. We also prove a less classical theorem, by Bonk and Schramm, asserting that a Gromov hyperbolic space embeds isometrically in a geodesic Gromov-hyperbolic space. As the original proof uses a transfinite sequence of Cauchy completions, this is an interesting formalization exercise. Along the way, we introduce basic material on isometries, quasi-isometries, Lipschitz maps, geodesic spaces, the Hausdorff distance, the Cauchy completion of a metric space, and the exponential on extended real numbers. [Ordered_Resolution_Prover] title = Formalization of Bachmair and Ganzinger's Ordered Resolution Prover author = Anders Schlichtkrull , Jasmin Christian Blanchette , Dmitriy Traytel , Uwe Waldmann topic = Logic/General logic/Mechanization of proofs date = 2018-01-18 notify = andschl@dtu.dk, j.c.blanchette@vu.nl abstract = This Isabelle/HOL formalization covers Sections 2 to 4 of Bachmair and Ganzinger's "Resolution Theorem Proving" chapter in the Handbook of Automated Reasoning. This includes soundness and completeness of unordered and ordered variants of ground resolution with and without literal selection, the standard redundancy criterion, a general framework for refutational theorem proving, and soundness and completeness of an abstract first-order prover. [Chandy_Lamport] title = A Formal Proof of The Chandy--Lamport Distributed Snapshot Algorithm author = Ben Fiedler , Dmitriy Traytel topic = Computer science/Algorithms/Distributed date = 2020-07-21 notify = ben.fiedler@inf.ethz.ch, traytel@inf.ethz.ch abstract = We provide a suitable distributed system model and implementation of the Chandy--Lamport distributed snapshot algorithm [ACM Transactions on Computer Systems, 3, 63-75, 1985]. Our main result is a formal termination and correctness proof of the Chandy--Lamport algorithm and its use in stable property detection. [BNF_Operations] title = Operations on Bounded Natural Functors author = Jasmin Christian Blanchette , Andrei Popescu , Dmitriy Traytel topic = Tools date = 2017-12-19 notify = jasmin.blanchette@gmail.com,uuomul@yahoo.com,traytel@inf.ethz.ch abstract = This entry formalizes the closure property of bounded natural functors (BNFs) under seven operations. These operations and the corresponding proofs constitute the core of Isabelle's (co)datatype package. To be close to the implemented tactics, the proofs are deliberately formulated as detailed apply scripts. The (co)datatypes together with (co)induction principles and (co)recursors are byproducts of the fixpoint operations LFP and GFP. Composition of BNFs is subdivided into four simpler operations: Compose, Kill, Lift, and Permute. The N2M operation provides mutual (co)induction principles and (co)recursors for nested (co)datatypes.
[LLL_Basis_Reduction] title = A verified LLL algorithm author = Ralph Bottesch <>, Jose Divasón , Maximilian Haslbeck , Sebastiaan Joosten , René Thiemann , Akihisa Yamada<> topic = Computer science/Algorithms/Mathematical, Mathematics/Algebra date = 2018-02-02 notify = ralph.bottesch@uibk.ac.at, jose.divason@unirioja.es, maximilian.haslbeck@uibk.ac.at, s.j.c.joosten@utwente.nl, rene.thiemann@uibk.ac.at, ayamada@trs.cm.is.nagoya-u.ac.jp abstract = The Lenstra-Lenstra-Lovász basis reduction algorithm, also known as LLL algorithm, is an algorithm to find a basis with short, nearly orthogonal vectors of an integer lattice. Thereby, it can also be seen as an approximation to solve the shortest vector problem (SVP), which is an NP-hard problem, where the approximation quality solely depends on the dimension of the lattice, but not the lattice itself. The algorithm also possesses many applications in diverse fields of computer science, from cryptanalysis to number theory, but it is specially well-known since it was used to implement the first polynomial-time algorithm to factor polynomials. In this work we present the first mechanized soundness proof of the LLL algorithm to compute short vectors in lattices. The formalization follows a textbook by von zur Gathen and Gerhard. extra-history = Change history: [2018-04-16]: Integrated formal complexity bounds (Haslbeck, Thiemann) [2018-05-25]: Integrated much faster LLL implementation based on integer arithmetic (Bottesch, Haslbeck, Thiemann) [LLL_Factorization] title = A verified factorization algorithm for integer polynomials with polynomial complexity author = Jose Divasón , Sebastiaan Joosten , René Thiemann , Akihisa Yamada topic = Mathematics/Algebra date = 2018-02-06 notify = jose.divason@unirioja.es, s.j.c.joosten@utwente.nl, rene.thiemann@uibk.ac.at, ayamada@trs.cm.is.nagoya-u.ac.jp abstract = Short vectors in lattices and factors of integer polynomials are related. Each factor of an integer polynomial belongs to a certain lattice. When factoring polynomials, the condition that we are looking for an irreducible polynomial means that we must look for a small element in a lattice, which can be done by a basis reduction algorithm. In this development we formalize this connection and thereby one main application of the LLL basis reduction algorithm: an algorithm to factor square-free integer polynomials which runs in polynomial time. The work is based on our previous Berlekamp–Zassenhaus development, where the exponential reconstruction phase has been replaced by the polynomial-time basis reduction algorithm. Thanks to this formalization we found a serious flaw in a textbook. [Treaps] title = Treaps author = Maximilian Haslbeck , Manuel Eberl , Tobias Nipkow topic = Computer science/Data structures date = 2018-02-06 notify = eberlm@in.tum.de abstract =

A Treap is a binary tree whose nodes contain pairs consisting of some payload and an associated priority. It must have the search-tree property w.r.t. the payloads and the heap property w.r.t. the priorities. Treaps are an interesting data structure that is related to binary search trees (BSTs) in the following way: if one forgets all the priorities of a treap, the resulting BST is exactly the same as if one had inserted the elements into an empty BST in order of ascending priority. This means that a treap behaves like a BST where we can pretend the elements were inserted in a different order from the one in which they were actually inserted.

In particular, by choosing these priorities at random upon insertion of an element, we can pretend that we inserted the elements in random order, so that the shape of the resulting tree is that of a random BST no matter in what order we insert the elements. This is the main result of this formalisation.
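To make the rebalancing concrete, the following short Python sketch (illustrative only; the entry itself is an Isabelle/HOL formalisation, and all names here are hypothetical) shows how an insertion restores the heap property on priorities by rotations while keeping the search-tree property on keys:

```python
import random

class Node:
    def __init__(self, key, prio):
        self.key, self.prio = key, prio
        self.left = self.right = None

def rotate_right(t):
    l = t.left
    t.left, l.right = l.right, t
    return l

def rotate_left(t):
    r = t.right
    t.right, r.left = r.left, t
    return r

def insert(t, key, prio=None):
    # Standard BST insertion by key, followed by rotations that restore
    # the (max-)heap property on the randomly chosen priorities.
    if prio is None:
        prio = random.random()
    if t is None:
        return Node(key, prio)
    if key < t.key:
        t.left = insert(t.left, key, prio)
        if t.left.prio > t.prio:
            t = rotate_right(t)
    elif key > t.key:
        t.right = insert(t.right, key, prio)
        if t.right.prio > t.prio:
            t = rotate_left(t)
    return t
```

Choosing the priorities uniformly at random is exactly what makes the resulting shape distribution coincide with that of a random BST.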

[Skip_Lists] title = Skip Lists author = Max W. Haslbeck , Manuel Eberl topic = Computer science/Data structures date = 2020-01-09 notify = max.haslbeck@gmx.de abstract =

Skip lists are sorted linked lists enhanced with shortcuts and are an alternative to binary search trees. A skip list consists of multiple levels of sorted linked lists where a list on level n is a subsequence of the list on level n − 1. In the ideal case, elements are skipped in such a way that a lookup in a skip list takes O(log n) time. In a randomised skip list the skipped elements are chosen randomly.

This entry contains formalized proofs of the textbook results about the expected height and the expected length of a search path in a randomised skip list.
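As a minimal illustration (a Python sketch outside the formalisation; the function and parameter names are made up), the randomisation consists of choosing, for each inserted element, a geometrically distributed number of levels:

```python
import random

def random_level(p=0.5, max_level=32):
    # Each additional level is kept with probability p, so the list on
    # level n is, in expectation, a p-fraction of the list on level n - 1.
    level = 1
    while level < max_level and random.random() < p:
        level += 1
    return level
```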

[Mersenne_Primes] title = Mersenne primes and the Lucas–Lehmer test author = Manuel Eberl topic = Mathematics/Number theory date = 2020-01-17 notify = eberlm@in.tum.de abstract =

This article provides formal proofs of basic properties of Mersenne numbers, i. e. numbers of the form 2^n - 1, and especially of Mersenne primes.

In particular, an efficient, verified, and executable version of the Lucas–Lehmer test is developed. This test decides primality for Mersenne numbers in time polynomial in n.
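For illustration, a direct (unverified) Python rendering of the Lucas–Lehmer recurrence looks as follows; the verified and executable version is, of course, the Isabelle one in the entry:

```python
def lucas_lehmer(p):
    """Decide primality of the Mersenne number 2^p - 1 for a prime exponent p."""
    if p == 2:
        return True            # 2^2 - 1 = 3 is prime
    m = (1 << p) - 1
    s = 4
    for _ in range(p - 2):     # iterate s := s^2 - 2 modulo 2^p - 1
        s = (s * s - 2) % m
    return s == 0              # prime iff the last term is divisible by 2^p - 1

# e.g. [p for p in (3, 5, 7, 11, 13) if lucas_lehmer(p)] == [3, 5, 7, 13]
```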

[Hoare_Time] title = Hoare Logics for Time Bounds author = Maximilian P. L. Haslbeck , Tobias Nipkow topic = Computer science/Programming languages/Logics date = 2018-02-26 notify = haslbema@in.tum.de abstract = We study three different Hoare logics for reasoning about time bounds of imperative programs and formalize them in Isabelle/HOL: a classical Hoare like logic due to Nielson, a logic with potentials due to Carbonneaux et al. and a separation logic following work by Atkey, Chaguérand and Pottier. These logics are formally shown to be sound and complete. Verification condition generators are developed and are shown sound and complete too. We also consider variants of the systems where we abstract from multiplicative constants in the running time bounds, thus supporting a big-O style of reasoning. Finally we compare the expressive power of the three systems. [Architectural_Design_Patterns] title = A Theory of Architectural Design Patterns author = Diego Marmsoler topic = Computer science/System description languages date = 2018-03-01 notify = diego.marmsoler@tum.de abstract = The following document formalizes and verifies several architectural design patterns. Each pattern specification is formalized in terms of a locale where the locale assumptions correspond to the assumptions which a pattern poses on an architecture. Thus, pattern specifications may build on top of each other by interpreting the corresponding locale. A pattern is verified using the framework provided by the AFP entry Dynamic Architectures. Currently, the document consists of formalizations of 4 different patterns: the singleton, the publisher subscriber, the blackboard pattern, and the blockchain pattern. Thereby, the publisher component of the publisher subscriber pattern is modeled as an instance of the singleton pattern and the blackboard pattern is modeled as an instance of the publisher subscriber pattern. In general, this entry provides the first steps towards an overall theory of architectural design patterns. extra-history = Change history: [2018-05-25]: changing the major assumption for blockchain architectures from alternative minings to relative mining frequencies (revision 5043c5c71685)
[2019-04-08]: adapting the terminology: honest instead of trusted, dishonest instead of untrusted (revision 7af3431a22ae) [Weight_Balanced_Trees] title = Weight-Balanced Trees author = Tobias Nipkow , Stefan Dirix<> topic = Computer science/Data structures date = 2018-03-13 notify = nipkow@in.tum.de abstract = This theory provides a verified implementation of weight-balanced trees following the work of Hirai and Yamamoto who proved that all parameters in a certain range are valid, i.e. guarantee that insertion and deletion preserve weight-balance. Instead of a general theorem we provide parameterized proofs of preservation of the invariant that work for many (all?) valid parameters. [Fishburn_Impossibility] title = The Incompatibility of Fishburn-Strategyproofness and Pareto-Efficiency author = Felix Brandt , Manuel Eberl , Christian Saile , Christian Stricker topic = Mathematics/Games and economics date = 2018-03-22 notify = eberlm@in.tum.de abstract =

This formalisation contains the proof that there is no anonymous Social Choice Function for at least three agents and alternatives that fulfils both Pareto-Efficiency and Fishburn-Strategyproofness. It was derived from a proof of Brandt et al., which relies on an unverified translation of a fixed finite instance of the original problem to SAT. This Isabelle proof contains a machine-checked version of both the statement for exactly three agents and alternatives and the lifting to the general case.

[BNF_CC] title = Bounded Natural Functors with Covariance and Contravariance author = Andreas Lochbihler , Joshua Schneider topic = Computer science/Functional programming, Tools date = 2018-04-24 notify = mail@andreas-lochbihler.de, joshua.schneider@inf.ethz.ch abstract = Bounded natural functors (BNFs) provide a modular framework for the construction of (co)datatypes in higher-order logic. Their functorial operations, the mapper and relator, are restricted to a subset of the parameters, namely those where recursion can take place. For certain applications, such as free theorems, data refinement, quotients, and generalised rewriting, it is desirable that these operations do not ignore the other parameters. In this article, we formalise the generalisation BNFCC that extends the mapper and relator to covariant and contravariant parameters. We show that
  1. BNFCCs are closed under functor composition and least and greatest fixpoints,
  2. subtypes inherit the BNFCC structure under conditions that generalise those for the BNF case, and
  3. BNFCCs preserve quotients under mild conditions.
These proofs are carried out for abstract BNFCCs similar to the AFP entry BNF Operations. In addition, we apply the BNFCC theory to several concrete functors. [Modular_Assembly_Kit_Security] title = An Isabelle/HOL Formalization of the Modular Assembly Kit for Security Properties author = Oliver Bračevac , Richard Gay , Sylvia Grewe , Heiko Mantel , Henning Sudbrock , Markus Tasch topic = Computer science/Security date = 2018-05-07 notify = tasch@mais.informatik.tu-darmstadt.de abstract = The "Modular Assembly Kit for Security Properties" (MAKS) is a framework for both the definition and verification of possibilistic information-flow security properties at the specification-level. MAKS supports the uniform representation of a wide range of possibilistic information-flow properties and provides support for the verification of such properties via unwinding results and compositionality results. We provide a formalization of this framework in Isabelle/HOL. [AxiomaticCategoryTheory] title = Axiom Systems for Category Theory in Free Logic author = Christoph Benzmüller , Dana Scott topic = Mathematics/Category theory date = 2018-05-23 notify = c.benzmueller@gmail.com abstract = This document provides a concise overview on the core results of our previous work on the exploration of axioms systems for category theory. Extending the previous studies (http://arxiv.org/abs/1609.01493) we include one further axiomatic theory in our experiments. This additional theory has been suggested by Mac Lane in 1948. We show that the axioms proposed by Mac Lane are equivalent to the ones we studied before, which includes an axioms set suggested by Scott in the 1970s and another axioms set proposed by Freyd and Scedrov in 1990, which we slightly modified to remedy a minor technical issue. [OpSets] title = OpSets: Sequential Specifications for Replicated Datatypes author = Martin Kleppmann , Victor B. F. Gomes , Dominic P. Mulligan , Alastair R. Beresford topic = Computer science/Algorithms/Distributed, Computer science/Data structures date = 2018-05-10 notify = vb358@cam.ac.uk abstract = We introduce OpSets, an executable framework for specifying and reasoning about the semantics of replicated datatypes that provide eventual consistency in a distributed system, and for mechanically verifying algorithms that implement these datatypes. Our approach is simple but expressive, allowing us to succinctly specify a variety of abstract datatypes, including maps, sets, lists, text, graphs, trees, and registers. Our datatypes are also composable, enabling the construction of complex data structures. To demonstrate the utility of OpSets for analysing replication algorithms, we highlight an important correctness property for collaborative text editing that has traditionally been overlooked; algorithms that do not satisfy this property can exhibit awkward interleaving of text. We use OpSets to specify this correctness property and prove that although one existing replication algorithm satisfies this property, several other published algorithms do not. [Irrationality_J_Hancl] title = Irrational Rapidly Convergent Series author = Angeliki Koutsoukou-Argyraki , Wenda Li topic = Mathematics/Number theory, Mathematics/Analysis date = 2018-05-23 notify = ak2110@cam.ac.uk, wl302@cam.ac.uk abstract = We formalize with Isabelle/HOL a proof of a theorem by J. Hancl asserting the irrationality of the sum of a series consisting of rational numbers, built up by sequences that fulfill certain properties. 
Even though the criterion is a number-theoretic result, the proof makes use only of analytical arguments. We also formalize a corollary of the theorem for a specific series fulfilling the assumptions of the theorem. [Optimal_BST] title = Optimal Binary Search Trees author = Tobias Nipkow , Dániel Somogyi <> topic = Computer science/Algorithms, Computer science/Data structures date = 2018-05-27 notify = nipkow@in.tum.de abstract = This article formalizes recursive algorithms for the construction of optimal binary search trees given fixed access frequencies. We follow Knuth (1971), Yao (1980) and Mehlhorn (1984). The algorithms are memoized with the help of the AFP article Monadification, Memoization and Dynamic Programming, thus yielding dynamic programming algorithms. [Projective_Geometry] title = Projective Geometry author = Anthony Bordg topic = Mathematics/Geometry date = 2018-06-14 notify = apdb3@cam.ac.uk abstract = We formalize the basics of projective geometry. In particular, we give a proof of the so-called Hessenberg's theorem in projective plane geometry. We also provide a proof of the so-called Desargues's theorem based on an axiomatization of (higher) projective space geometry using the notion of rank of a matroid. This last approach allows one to handle incidence relations in a homogeneous way, dealing only with points and without the need to talk explicitly about lines, planes or any higher entity. [Localization_Ring] title = The Localization of a Commutative Ring author = Anthony Bordg topic = Mathematics/Algebra date = 2018-06-14 notify = apdb3@cam.ac.uk abstract = We formalize the localization of a commutative ring R with respect to a multiplicative subset (i.e. a submonoid of R seen as a multiplicative monoid). This localization is itself a commutative ring and we build the natural homomorphism of rings from R to its localization. [Minsky_Machines] title = Minsky Machines author = Bertram Felgenhauer<> topic = Logic/Computability date = 2018-08-14 notify = int-e@gmx.de abstract =

We formalize undecidability results for Minsky machines. To this end, we also formalize recursive inseparability.

We start by proving that Minsky machines can compute arbitrary primitive recursive and recursive functions. We then show that there is a deterministic Minsky machine with one argument and two final states such that the set of inputs that are accepted in one state is recursively inseparable from the set of inputs that are accepted in the other state.

As a corollary, the set of Minsky configurations that reach the first state but not the second is recursively inseparable from the set of Minsky configurations that reach the second state but not the first. In particular, both of these sets are undecidable.

We do not prove that recursive functions can simulate Minsky machines.

[Neumann_Morgenstern_Utility] title = Von-Neumann-Morgenstern Utility Theorem author = Julian Parsert, Cezary Kaliszyk topic = Mathematics/Games and economics license = LGPL date = 2018-07-04 notify = julian.parsert@uibk.ac.at, cezary.kaliszyk@uibk.ac.at abstract = Utility functions form an essential part of game theory and economics. To guarantee the existence of utility functions, sufficient properties are usually assumed in an axiomatic manner. One famous and very common set of such assumptions is that of expected utility theory. Here, the rationality, continuity, and independence of preferences are assumed. The von-Neumann-Morgenstern Utility theorem shows that these assumptions are necessary and sufficient for an expected utility function to exist. This theorem was proven by Neumann and Morgenstern in ``Theory of Games and Economic Behavior'', which is regarded as one of the most influential works in game theory. The formalization includes formal definitions of the underlying concepts including continuity and independence of preferences. [Simplex] title = An Incremental Simplex Algorithm with Unsatisfiable Core Generation author = Filip Marić , Mirko Spasić , René Thiemann topic = Computer science/Algorithms/Optimization date = 2018-08-24 notify = rene.thiemann@uibk.ac.at abstract = We present an Isabelle/HOL formalization and total correctness proof for the incremental version of the Simplex algorithm which is used in most state-of-the-art SMT solvers. It supports extraction of satisfying assignments, extraction of minimal unsatisfiable cores, incremental assertion of constraints and backtracking. The formalization relies on stepwise program refinement, starting from a simple specification, going through a number of refinement steps, and ending up in a fully executable functional implementation. Symmetries present in the algorithm are handled with special care. [Budan_Fourier] title = The Budan-Fourier Theorem and Counting Real Roots with Multiplicity author = Wenda Li topic = Mathematics/Analysis date = 2018-09-02 notify = wl302@cam.ac.uk, liwenda1990@hotmail.com abstract = This entry is mainly about counting and approximating real roots (of a polynomial) with multiplicity. We have first formalised the Budan-Fourier theorem: given a polynomial with real coefficients, we can calculate sign variations on Fourier sequences to over-approximate the number of real roots (counting multiplicity) within an interval. When all roots are known to be real, the over-approximation becomes tight: we can utilise this theorem to count real roots exactly. It is also worth noting that Descartes' rule of signs is a direct consequence of the Budan-Fourier theorem, and has been included in this entry. In addition, we have extended the previously formalised Sturm's theorem to count real roots with multiplicity, while the original Sturm's theorem only counts distinct real roots. Compared to the Budan-Fourier theorem, our extended Sturm's theorem always counts roots exactly but may suffer from greater computational cost. [Quaternions] title = Quaternions author = Lawrence C. Paulson topic = Mathematics/Algebra, Mathematics/Geometry date = 2018-09-05 notify = lp15@cam.ac.uk abstract = This theory is inspired by the HOL Light development of quaternions, but follows its own route. Quaternions are developed coinductively, as in the existing formalisation of the complex numbers. Quaternions are quickly shown to belong to the type classes of real normed division algebras and real inner product spaces.
And therefore they inherit a great body of facts involving algebraic laws, limits, continuity, etc., which must be proved explicitly in the HOL Light version. The development concludes with the geometric interpretation of the product of imaginary quaternions. [Octonions] title = Octonions author = Angeliki Koutsoukou-Argyraki topic = Mathematics/Algebra, Mathematics/Geometry date = 2018-09-14 notify = ak2110@cam.ac.uk abstract = We develop the basic theory of Octonions, including various identities and properties of the octonions and of the octonionic product, a description of 7D isometries and representations of orthogonal transformations. To this end we first develop the theory of the vector cross product in 7 dimensions. The development of the theory of Octonions is inspired by that of the theory of Quaternions by Lawrence Paulson. However, we do not work within the type class real_algebra_1 because the octonionic product is not associative. [Aggregation_Algebras] title = Aggregation Algebras author = Walter Guttmann topic = Mathematics/Algebra date = 2018-09-15 notify = walter.guttmann@canterbury.ac.nz abstract = We develop algebras for aggregation and minimisation for weight matrices and for edge weights in graphs. We verify the correctness of Prim's and Kruskal's minimum spanning tree algorithms based on these algebras. We also show numerous instances of these algebras based on linearly ordered commutative semigroups. [Prime_Number_Theorem] title = The Prime Number Theorem author = Manuel Eberl , Lawrence C. Paulson topic = Mathematics/Number theory date = 2018-09-19 notify = eberlm@in.tum.de abstract =

This article provides a short proof of the Prime Number Theorem in several equivalent forms, most notably π(x) ~ x/ln x, where π(x) is the number of primes no larger than x. It also defines other basic number-theoretic functions related to primes, like Chebyshev's functions ϑ and ψ and the “n-th prime number” function p_n. Various bounds on and relationships between these functions are also shown. Lastly, we derive Mertens' First and Second Theorem, i. e. ∑_{p≤x} ln p/p = ln x + O(1) and ∑_{p≤x} 1/p = ln ln x + M + O(1/ln x). We also give explicit bounds for the remainder terms.
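For readability, the statements mentioned above are, in conventional notation (restated here, not quoted from the formalisation):

```latex
\pi(x) \sim \frac{x}{\ln x}, \qquad
\sum_{p \le x} \frac{\ln p}{p} = \ln x + O(1), \qquad
\sum_{p \le x} \frac{1}{p} = \ln \ln x + M + O\!\left(\frac{1}{\ln x}\right).
```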

The proof of the Prime Number Theorem builds on a library of Dirichlet series and analytic combinatorics. We essentially follow the presentation by Newman. The core part of the proof is a Tauberian theorem for Dirichlet series, which is proven using complex analysis and then used to strengthen Mertens' First Theorem to ∑_{p≤x} ln p/p = ln x + c + o(1).

A variant of this proof has been formalised before by Harrison in HOL Light, and formalisations of Selberg's elementary proof exist both by Avigad et al. in Isabelle and by Carneiro in Metamath. The advantage of the analytic proof is that, while it requires more powerful mathematical tools, it is considerably shorter and clearer. This article attempts to provide a short and clear formalisation of all components of that proof using the full range of mathematical machinery available in Isabelle, staying as close as possible to Newman's simple paper proof.

[Signature_Groebner] title = Signature-Based Gröbner Basis Algorithms author = Alexander Maletzky topic = Mathematics/Algebra, Computer science/Algorithms/Mathematical date = 2018-09-20 notify = alexander.maletzky@risc.jku.at abstract =

This article formalizes signature-based algorithms for computing Gröbner bases. Such algorithms are, in general, superior to other algorithms in terms of efficiency, and have not been formalized in any proof assistant so far. The present development is both generic, in the sense that most known variants of signature-based algorithms are covered by it, and effectively executable on concrete input thanks to Isabelle's code generator. Sample computations of benchmark problems show that the verified implementation of signature-based algorithms indeed outperforms the existing implementation of Buchberger's algorithm in Isabelle/HOL.

Besides total correctness of the algorithms, the article also proves that, under certain conditions, they detect and avoid all useless zero-reductions a priori, and always return 'minimal' (in some sense) Gröbner bases if an input parameter is chosen in the right way.

The formalization follows the recent survey article by Eder and Faugère.

[Factored_Transition_System_Bounding] title = Upper Bounding Diameters of State Spaces of Factored Transition Systems author = Friedrich Kurz <>, Mohammad Abdulaziz topic = Computer science/Automata and formal languages, Mathematics/Graph theory date = 2018-10-12 notify = friedrich.kurz@tum.de, mohammad.abdulaziz@in.tum.de abstract = A completeness threshold is required to guarantee the completeness of planning as satisfiability, and bounded model checking of safety properties. One valid completeness threshold is the diameter of the underlying transition system. The diameter is the maximum element in the set of lengths of all shortest paths between pairs of states. The diameter is not calculated exactly in our setting, where the transition system is succinctly described using a (propositionally) factored representation. Rather, an upper bound on the diameter is calculated compositionally, by bounding the diameters of small abstract subsystems, and then composing those. We port a HOL4 formalisation of a compositional algorithm for computing a relatively tight upper bound on the system diameter. This compositional algorithm exploits acyclicity in the state space to achieve compositionality, and it was introduced by Abdulaziz et al. The formalisation that we port is described as a part of another paper by Abdulaziz et al. As part of this porting, we developed a library about transition systems, which shall be of use in future related mechanisation efforts. [Smooth_Manifolds] title = Smooth Manifolds author = Fabian Immler , Bohua Zhan topic = Mathematics/Analysis, Mathematics/Topology date = 2018-10-22 notify = immler@in.tum.de, bzhan@ios.ac.cn abstract = We formalize the definition and basic properties of smooth manifolds in Isabelle/HOL. Concepts covered include partition of unity, tangent and cotangent spaces, and the fundamental theorem of path integrals. We also examine some concrete manifolds such as spheres and projective spaces. The formalization makes extensive use of the analysis and linear algebra libraries in Isabelle/HOL, in particular its “types-to-sets” mechanism. [Matroids] title = Matroids author = Jonas Keinholz<> topic = Mathematics/Combinatorics date = 2018-11-16 notify = eberlm@in.tum.de abstract =

This article defines the combinatorial structures known as Independence Systems and Matroids and provides basic concepts and theorems related to them. These structures play an important role in combinatorial optimisation, e. g. greedy algorithms such as Kruskal's algorithm. The development is based on Oxley's `What is a Matroid?'.
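For orientation, the usual axioms (stated here in standard textbook form, not quoted from the entry) are: an independence system is a non-empty, downward-closed family of subsets of a finite ground set, and a matroid additionally satisfies the augmentation axiom:

```latex
\emptyset \in \mathcal{I}, \qquad
B \subseteq A \in \mathcal{I} \implies B \in \mathcal{I}, \qquad
A, B \in \mathcal{I} \wedge |A| > |B| \implies \exists x \in A \setminus B.\ B \cup \{x\} \in \mathcal{I}.
```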

[Graph_Saturation] title = Graph Saturation author = Sebastiaan J. C. Joosten<> topic = Logic/Rewriting, Mathematics/Graph theory date = 2018-11-23 notify = sjcjoosten@gmail.com abstract = This is an Isabelle/HOL formalisation of graph saturation, closely following a paper by the author on graph saturation. Nine out of ten lemmas of the original paper are proven in this formalisation. The formalisation additionally includes two theorems that show the main premise of the paper: that consistency and entailment are decided through graph saturation. This formalisation does not give executable code, and it did not implement any of the optimisations suggested in the paper. [Functional_Ordered_Resolution_Prover] title = A Verified Functional Implementation of Bachmair and Ganzinger's Ordered Resolution Prover author = Anders Schlichtkrull , Jasmin Christian Blanchette , Dmitriy Traytel topic = Logic/General logic/Mechanization of proofs date = 2018-11-23 notify = andschl@dtu.dk,j.c.blanchette@vu.nl,traytel@inf.ethz.ch abstract = This Isabelle/HOL formalization refines the abstract ordered resolution prover presented in Section 4.3 of Bachmair and Ganzinger's "Resolution Theorem Proving" chapter in the Handbook of Automated Reasoning. The result is a functional implementation of a first-order prover. [Auto2_HOL] title = Auto2 Prover author = Bohua Zhan topic = Tools date = 2018-11-20 notify = bzhan@ios.ac.cn abstract = Auto2 is a saturation-based heuristic prover for higher-order logic, implemented as a tactic in Isabelle. This entry contains the instantiation of auto2 for Isabelle/HOL, along with two basic examples: solutions to some of the Pelletier’s problems, and elementary number theory of primes. [Order_Lattice_Props] title = Properties of Orderings and Lattices author = Georg Struth topic = Mathematics/Order date = 2018-12-11 notify = g.struth@sheffield.ac.uk abstract = These components add further fundamental order and lattice-theoretic concepts and properties to Isabelle's libraries. They follow by and large the introductory sections of the Compendium of Continuous Lattices, covering directed and filtered sets, down-closed and up-closed sets, ideals and filters, Galois connections, closure and co-closure operators. Some emphasis is on duality and morphisms between structures, as in the Compendium. To this end, three ad-hoc approaches to duality are compared. [Quantales] title = Quantales author = Georg Struth topic = Mathematics/Algebra date = 2018-12-11 notify = g.struth@sheffield.ac.uk abstract = These mathematical components formalise basic properties of quantales, together with some important models, constructions, and concepts, including quantic nuclei and conuclei. [Transformer_Semantics] title = Transformer Semantics author = Georg Struth topic = Mathematics/Algebra, Computer science/Semantics date = 2018-12-11 notify = g.struth@sheffield.ac.uk abstract = These mathematical components formalise predicate transformer semantics for programs, yet currently only for partial correctness and in the absence of faults. A first part for isotone (or monotone), Sup-preserving and Inf-preserving transformers follows Back and von Wright's approach, with additional emphasis on the quantalic structure of algebras of transformers. 
The second part develops Sup-preserving and Inf-preserving predicate transformers from the powerset monad, via its Kleisli category and Eilenberg-Moore algebras, with emphasis on adjunctions and dualities, as well as isomorphisms between relations, state transformers and predicate transformers. [Concurrent_Revisions] title = Formalization of Concurrent Revisions author = Roy Overbeek topic = Computer science/Concurrency date = 2018-12-25 notify = Roy.Overbeek@cwi.nl abstract = Concurrent revisions is a concurrency control model developed by Microsoft Research. It has many interesting properties that distinguish it from other well-known models such as transactional memory. One of these properties is determinacy: programs written within the model always produce the same outcome, independent of scheduling activity. The concurrent revisions model has an operational semantics, with an informal proof of determinacy. This document contains an Isabelle/HOL formalization of this semantics and the proof of determinacy. [Core_DOM] title = A Formal Model of the Document Object Model author = Achim D. Brucker , Michael Herzberg topic = Computer science/Data structures date = 2018-12-26 notify = adbrucker@0x5f.org abstract = In this AFP entry, we formalize the core of the Document Object Model (DOM). At its core, the DOM defines a tree-like data structure for representing documents in general and HTML documents in particular. It is the heart of any modern web browser. Formalizing the key concepts of the DOM is a prerequisite for the formal reasoning over client-side JavaScript programs and for the analysis of security concepts in modern web browsers. We present a formalization of the core DOM, with focus on the node-tree and the operations defined on node-trees, in Isabelle/HOL. We use the formalization to verify the functional correctness of the most important functions defined in the DOM standard. Moreover, our formalization is 1) extensible, i.e., can be extended without the need of re-proving already proven properties and 2) executable, i.e., we can generate executable code from our specification. [Core_SC_DOM] title = The Safely Composable DOM author = Achim D. Brucker , Michael Herzberg topic = Computer science/Data structures date = 2020-09-28 notify = adbrucker@0x5f.org, mail@michael-herzberg.de abstract = In this AFP entry, we formalize the core of the Safely Composable Document Object Model (SC DOM). The SC DOM improves the standard DOM (as formalized in the AFP entry "Core DOM") by strengthening the tree boundaries set by shadow roots: in the SC DOM, the shadow root is a sub-class of the document class (instead of a base class). This modification also results in changes to some API methods (e.g., getOwnerDocument) to return the nearest shadow root rather than the document root. As a result, many API methods that, when called on a node inside a shadow tree, would previously ``break out'' and return or modify nodes that are possibly outside the shadow tree, now stay within its boundaries. This change in behavior makes programs that operate on shadow trees more predictable for the developer and allows them to make more assumptions about other code accessing the DOM. [Shadow_SC_DOM] title = A Formal Model of the Safely Composable Document Object Model with Shadow Roots author = Achim D.
Brucker , Michael Herzberg topic = Computer science/Data structures date = 2020-09-28 notify = adbrucker@0x5f.org, mail@michael-herzberg.de abstract = In this AFP entry, we extend our formalization of the safely composable DOM with Shadow Roots. This is a proposal for Shadow Roots with stricter safety guarantees than the standard-compliant formalization (see "Shadow DOM"). Shadow Roots are a recent proposal of the web community to support a component-based development approach for client-side web applications. Shadow roots are a significant extension to the DOM standard and, as web standards are condemned to be backward compatible, such extensions often result in complex specifications that may contain unwanted subtleties that can be detected by a formalization. Our Isabelle/HOL formalization is, in the sense of object-orientation, an extension of our formalization of the core DOM and enjoys the same basic properties, i.e., it is extensible, i.e., can be extended without the need of re-proving already proven properties and executable, i.e., we can generate executable code from our specification. We exploit the executability to show that our formalization complies with the official standard of the W3C, respectively, the WHATWG. [SC_DOM_Components] title = A Formalization of Safely Composable Web Components author = Achim D. Brucker , Michael Herzberg topic = Computer science/Data structures date = 2020-09-28 notify = adbrucker@0x5f.org, mail@michael-herzberg.de abstract = While the (safely composable) DOM with shadow trees provides the technical basis for defining web components, it neither defines the concept of web components nor specifies the safety properties that web components should guarantee. Consequently, the standard also does not discuss how or even if the methods for modifying the DOM respect component boundaries. In this AFP entry, we present a formally verified model of safely composable web components and define safety properties which ensure that different web components can only interact with each other using well-defined interfaces. Moreover, our verification of the application programming interface (API) of the DOM revealed numerous invariants that implementations of the DOM API need to preserve to ensure the integrity of components. In comparison to the strict standard compliance formalization of Web Components in the AFP entry "DOM_Components", the notion of components in this entry (based on "SC_DOM" and "Shadow_SC_DOM") provides much stronger safety guarantees. [Store_Buffer_Reduction] title = A Reduction Theorem for Store Buffers author = Ernie Cohen , Norbert Schirmer topic = Computer science/Concurrency date = 2019-01-07 notify = norbert.schirmer@web.de abstract = When verifying a concurrent program, it is usual to assume that memory is sequentially consistent. However, most modern multiprocessors depend on store buffering for efficiency, and provide native sequential consistency only at a substantial performance penalty. To regain sequential consistency, a programmer has to follow an appropriate programming discipline. However, naïve disciplines, such as protecting all shared accesses with locks, are not flexible enough for building high-performance multiprocessor software. We present a new discipline for concurrent programming under TSO (total store order, with store buffer forwarding). It does not depend on concurrency primitives, such as locks. Instead, threads use ghost operations to acquire and release ownership of memory addresses.
A thread can write to an address only if no other thread owns it, and can read from an address only if it owns it or it is shared and the thread has flushed its store buffer since it last wrote to an address it did not own. This discipline covers both coarse-grained concurrency (where data is protected by locks) as well as fine-grained concurrency (where atomic operations race to memory). We formalize this discipline in Isabelle/HOL, and prove that if every execution of a program in a system without store buffers follows the discipline, then every execution of the program with store buffers is sequentially consistent. Thus, we can show sequential consistency under TSO by ordinary assertional reasoning about the program, without having to consider store buffers at all. [IMP2] title = IMP2 – Simple Program Verification in Isabelle/HOL author = Peter Lammich , Simon Wimmer topic = Computer science/Programming languages/Logics, Computer science/Algorithms date = 2019-01-15 notify = lammich@in.tum.de abstract = IMP2 is a simple imperative language together with Isabelle tooling to create a program verification environment in Isabelle/HOL. The tools include a C-like syntax, a verification condition generator, and Isabelle commands for the specification of programs. The framework is modular, i.e., it allows easy reuse of already proved programs within larger programs. This entry comes with a quickstart guide and a large collection of examples, spanning basic algorithms with simple proofs to more advanced algorithms and proof techniques like data refinement. Some highlights from the examples are:
  • Bisection Square Root,
  • Extended Euclid,
  • Exponentiation by Squaring,
  • Binary Search,
  • Insertion Sort,
  • Quicksort,
  • Depth First Search.
The abstract syntax and semantics are very simple and well-documented. They are suitable to be used in a course, as extension to the IMP language which comes with the Isabelle distribution. While this entry is limited to a simple imperative language, the ideas could be extended to more sophisticated languages. [Farkas] title = Farkas' Lemma and Motzkin's Transposition Theorem author = Ralph Bottesch , Max W. Haslbeck , René Thiemann topic = Mathematics/Algebra date = 2019-01-17 notify = rene.thiemann@uibk.ac.at abstract = We formalize a proof of Motzkin's transposition theorem and Farkas' lemma in Isabelle/HOL. Our proof is based on the formalization of the simplex algorithm which, given a set of linear constraints, either returns a satisfying assignment to the problem or detects unsatisfiability. By reusing facts about the simplex algorithm we show that a set of linear constraints is unsatisfiable if and only if there is a linear combination of the constraints which evaluates to a trivially unsatisfiable inequality. [Auto2_Imperative_HOL] title = Verifying Imperative Programs using Auto2 author = Bohua Zhan topic = Computer science/Algorithms, Computer science/Data structures date = 2018-12-21 notify = bzhan@ios.ac.cn abstract = This entry contains the application of auto2 to verifying functional and imperative programs. Algorithms and data structures that are verified include linked lists, binary search trees, red-black trees, interval trees, priority queue, quicksort, union-find, Dijkstra's algorithm, and a sweep-line algorithm for detecting rectangle intersection. The imperative verification is based on Imperative HOL and its separation logic framework. A major goal of this work is to set up automation in order to reduce the length of proof that the user needs to provide, both for verifying functional programs and for working with separation logic. [UTP] title = Isabelle/UTP: Mechanised Theory Engineering for Unifying Theories of Programming author = Simon Foster , Frank Zeyda<>, Yakoub Nemouchi , Pedro Ribeiro<>, Burkhart Wolff topic = Computer science/Programming languages/Logics date = 2019-02-01 notify = simon.foster@york.ac.uk abstract = Isabelle/UTP is a mechanised theory engineering toolkit based on Hoare and He’s Unifying Theories of Programming (UTP). UTP enables the creation of denotational, algebraic, and operational semantics for different programming languages using an alphabetised relational calculus. We provide a semantic embedding of the alphabetised relational calculus in Isabelle/HOL, including new type definitions, relational constructors, automated proof tactics, and accompanying algebraic laws. Isabelle/UTP can be used to both capture laws of programming for different languages, and put these fundamental theorems to work in the creation of associated verification tools, using calculi like Hoare logics. This document describes the relational core of the UTP in Isabelle/HOL. [HOL-CSP] title = HOL-CSP Version 2.0 author = Safouan Taha , Lina Ye , Burkhart Wolff topic = Computer science/Concurrency/Process calculi, Computer science/Semantics date = 2019-04-26 notify = wolff@lri.fr abstract = This is a complete formalization of the work of Hoare and Roscoe on the denotational semantics of the Failure/Divergence Model of CSP. It follows essentially the presentation of CSP in Roscoe’s Book ”Theory and Practice of Concurrency” [8] and the semantic details in a joint Paper of Roscoe and Brooks ”An improved failures model for communicating processes". 
The present work is based on a prior formalization attempt, called HOL-CSP 1.0, done in 1997 by H. Tej and B. Wolff with the Isabelle proof technology available at that time. This work revealed minor, but omnipresent foundational errors in key concepts like the process invariant. The present version HOL-CSP profits from substantially improved libraries (notably HOLCF), improved automated proof techniques, and structured proof techniques in Isar and is substantially shorter but more complete. [Probabilistic_Prime_Tests] title = Probabilistic Primality Testing author = Daniel Stüwe<>, Manuel Eberl topic = Mathematics/Number theory date = 2019-02-11 notify = eberlm@in.tum.de abstract =

The most efficient known primality tests are probabilistic in the sense that they use randomness and may, with some probability, mistakenly classify a composite number as prime – but never a prime number as composite. Examples of this are the Miller–Rabin test, the Solovay–Strassen test, and (in most cases) Fermat's test.

This entry defines these three tests and proves their correctness. It also develops some of the number-theoretic foundations, such as Carmichael numbers and the Jacobi symbol with an efficient executable algorithm to compute it.
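As a small illustration of the general shape of such tests, here is an (unverified) Python sketch of Fermat's test; the Miller–Rabin and Solovay–Strassen tests refine the per-round check but follow the same pattern:

```python
import random

def fermat_test(n, rounds=20):
    # May accept a composite (e.g. a Carmichael number) with some probability,
    # but never rejects a prime.
    if n < 4:
        return n in (2, 3)
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        if pow(a, n - 1, n) != 1:
            return False       # witness found: n is certainly composite
    return True                # probably prime
```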

[Kruskal] title = Kruskal's Algorithm for Minimum Spanning Forest author = Maximilian P.L. Haslbeck , Peter Lammich , Julian Biendarra<> topic = Computer science/Algorithms/Graph date = 2019-02-14 notify = haslbema@in.tum.de, lammich@in.tum.de abstract = This Isabelle/HOL formalization defines a greedy algorithm for finding a minimum weight basis on a weighted matroid and proves its correctness. This algorithm is an abstract version of Kruskal's algorithm. We interpret the abstract algorithm for the cycle matroid (i.e. forests in a graph) and refine it to imperative executable code using an efficient union-find data structure. Our formalization can be instantiated for different graph representations. We provide instantiations for undirected graphs and symmetric directed graphs. [List_Inversions] title = The Inversions of a List author = Manuel Eberl topic = Computer science/Algorithms date = 2019-02-01 notify = eberlm@in.tum.de abstract =

This entry defines the set of inversions of a list, i.e. the pairs of indices that violate sortedness. It also proves the correctness of the well-known O(n log n) divide-and-conquer algorithm to compute the number of inversions.
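A compact Python sketch of that divide-and-conquer idea (illustrative only; the verified algorithm is the Isabelle one in the entry) counts the inversions while merging:

```python
def count_inversions(xs):
    # Returns (sorted copy of xs, number of pairs i < j with xs[i] > xs[j]).
    if len(xs) <= 1:
        return list(xs), 0
    mid = len(xs) // 2
    left, a = count_inversions(xs[:mid])
    right, b = count_inversions(xs[mid:])
    merged, cross, i, j = [], 0, 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
            cross += len(left) - i   # every remaining left element forms an inversion
    merged += left[i:] + right[j:]
    return merged, a + b + cross
```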

[Prime_Distribution_Elementary] title = Elementary Facts About the Distribution of Primes author = Manuel Eberl topic = Mathematics/Number theory date = 2019-02-21 notify = eberlm@in.tum.de abstract =

This entry is a formalisation of Chapter 4 (and parts of Chapter 3) of Apostol's Introduction to Analytic Number Theory. The main topics that are addressed are properties of the distribution of prime numbers that can be shown in an elementary way (i. e. without the Prime Number Theorem), the various equivalent forms of the PNT (which imply each other in elementary ways), and consequences that follow from the PNT in elementary ways. The latter include, most notably, asymptotic bounds for the number of distinct prime factors of n, the divisor function d(n), Euler's totient function φ(n), and lcm(1,…,n).
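Typical bounds of this elementary kind (given here in standard form as found in Apostol and similar references, not necessarily in the exact form proved in the entry) include, for instance:

```latex
\omega(n) = O\!\left(\frac{\log n}{\log \log n}\right), \qquad
\log d(n) = O\!\left(\frac{\log n}{\log \log n}\right), \qquad
\frac{n}{\varphi(n)} = O(\log \log n), \qquad
\log \operatorname{lcm}(1,\dots,n) \sim n.
```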

[Safe_OCL] title = Safe OCL author = Denis Nikiforov <> topic = Computer science/Programming languages/Language definitions license = LGPL date = 2019-03-09 notify = denis.nikif@gmail.com abstract =

The theory is a formalization of the OCL type system, its abstract syntax and expression typing rules. The theory does not define a concrete syntax or a semantics. In contrast to Featherweight OCL, it is based on a deep embedding approach. The type system is defined from scratch; it is not based on the Isabelle/HOL type system.

Safe OCL distinguishes nullable and non-nullable types. The theory also gives a formal definition of safe navigation operations. The Safe OCL typing rules are much stricter than the rules given in the OCL specification, which allows one to catch more errors during the type-checking phase.

The type theory presented is four-layered: classes, basic types, generic types, errorable types. We introduce the following new types: non-nullable types (T[1]), nullable types (T[?]), OclSuper. OclSuper is a supertype of all other types (basic types, collections, tuples). This type allows us to define a total supremum function, so types form an upper semilattice. It allows us to define rich expression typing rules in an elegant manner.

The Preliminaries chapter of the theory defines a number of helper lemmas for transitive closures and tuples. It also defines a generic object model independent of OCL. This allows one to use the theory as a reference for the formalization of analogous languages.

[QHLProver] title = Quantum Hoare Logic author = Junyi Liu<>, Bohua Zhan , Shuling Wang<>, Shenggang Ying<>, Tao Liu<>, Yangjia Li<>, Mingsheng Ying<>, Naijun Zhan<> topic = Computer science/Programming languages/Logics, Computer science/Semantics date = 2019-03-24 notify = bzhan@ios.ac.cn abstract = We formalize quantum Hoare logic as given in [1]. In particular, we specify the syntax and denotational semantics of a simple model of quantum programs. Then, we write down the rules of quantum Hoare logic for partial correctness, and show the soundness and completeness of the resulting proof system. As an application, we verify the correctness of Grover’s algorithm. [Transcendence_Series_Hancl_Rucki] title = The Transcendence of Certain Infinite Series author = Angeliki Koutsoukou-Argyraki , Wenda Li topic = Mathematics/Analysis, Mathematics/Number theory date = 2019-03-27 notify = wl302@cam.ac.uk, ak2110@cam.ac.uk abstract = We formalize the proofs of two transcendence criteria by J. Hančl and P. Rucki that assert the transcendence of the sums of certain infinite series built up by sequences that fulfil certain properties. Both proofs make use of Roth's celebrated theorem on diophantine approximations to algebraic numbers from 1955 which we implement as an assumption without having formalised its proof. [Binding_Syntax_Theory] title = A General Theory of Syntax with Bindings author = Lorenzo Gheri , Andrei Popescu topic = Computer science/Programming languages/Lambda calculi, Computer science/Functional programming, Logic/General logic/Mechanization of proofs date = 2019-04-06 notify = a.popescu@mdx.ac.uk, lor.gheri@gmail.com abstract = We formalize a theory of syntax with bindings that has been developed and refined over the last decade to support several large formalization efforts. Terms are defined for an arbitrary number of constructors of varying numbers of inputs, quotiented to alpha-equivalence and sorted according to a binding signature. The theory includes many properties of the standard operators on terms: substitution, swapping and freshness. It also includes bindings-aware induction and recursion principles and support for semantic interpretation. This work has been presented in the ITP 2017 paper “A Formalized General Theory of Syntax with Bindings”. [LTL_Master_Theorem] title = A Compositional and Unified Translation of LTL into ω-Automata author = Benedikt Seidl , Salomon Sickert topic = Computer science/Automata and formal languages date = 2019-04-16 notify = benedikt.seidl@tum.de, s.sickert@tum.de abstract = We present a formalisation of the unified translation approach of linear temporal logic (LTL) into ω-automata from [1]. This approach decomposes LTL formulas into ``simple'' languages and allows a clear separation of concerns: first, we formalise the purely logical result yielding this decomposition; second, we instantiate this generic theory to obtain a construction for deterministic (state-based) Rabin automata (DRA). We extract from this particular instantiation an executable tool translating LTL to DRAs. To the best of our knowledge this is the first verified translation from LTL to DRAs that is proven to be double exponential in the worst case which asymptotically matches the known lower bound.

[1] Javier Esparza, Jan Kretínský, Salomon Sickert. One Theorem to Rule Them All: A Unified Translation of LTL into ω-Automata. LICS 2018 [LambdaAuth] title = Formalization of Generic Authenticated Data Structures author = Matthias Brun<>, Dmitriy Traytel topic = Computer science/Security, Computer science/Programming languages/Lambda calculi date = 2019-05-14 notify = traytel@inf.ethz.ch abstract = Authenticated data structures are a technique for outsourcing data storage and maintenance to an untrusted server. The server is required to produce an efficiently checkable and cryptographically secure proof that it carried out precisely the requested computation. Miller et al. introduced λ• (pronounced lambda auth)—a functional programming language with a built-in primitive authentication construct, which supports a wide range of user-specified authenticated data structures while guaranteeing certain correctness and security properties for all well-typed programs. We formalize λ• and prove its correctness and security properties. With Isabelle's help, we uncover and repair several mistakes in the informal proofs and lemma statements. Our findings are summarized in an ITP'19 paper. [IMP2_Binary_Heap] title = Binary Heaps for IMP2 author = Simon Griebel<> topic = Computer science/Data structures, Computer science/Algorithms date = 2019-06-13 notify = s.griebel@tum.de abstract = In this submission, array-based binary minimum heaps are formalized. The correctness of the following heap operations is proved: insert, get-min, delete-min and make-heap. These are then used to verify an in-place heapsort. The formalization is based on IMP2, an imperative program verification framework implemented in Isabelle/HOL. The verified heap functions are iterative versions of the partly recursive functions found in "Algorithms and Data Structures – The Basic Toolbox" by K. Mehlhorn and P. Sanders and "Introduction to Algorithms" by T. H. Cormen, C. E. Leiserson, R. L. Rivest and C. Stein. [Groebner_Macaulay] title = Gröbner Bases, Macaulay Matrices and Dubé's Degree Bounds author = Alexander Maletzky topic = Mathematics/Algebra date = 2019-06-15 notify = alexander.maletzky@risc.jku.at abstract = This entry formalizes the connection between Gröbner bases and Macaulay matrices (sometimes also referred to as `generalized Sylvester matrices'). In particular, it contains a method for computing Gröbner bases, which proceeds by first constructing some Macaulay matrix of the initial set of polynomials, then row-reducing this matrix, and finally converting the result back into a set of polynomials. The output is shown to be a Gröbner basis if the Macaulay matrix constructed in the first step is sufficiently large. In order to obtain concrete upper bounds on the size of the matrix (and hence turn the method into an effectively executable algorithm), Dubé's degree bounds on Gröbner bases are utilized; consequently, they are also part of the formalization. [Linear_Inequalities] title = Linear Inequalities author = Ralph Bottesch , Alban Reynaud <>, René Thiemann topic = Mathematics/Algebra date = 2019-06-21 notify = rene.thiemann@uibk.ac.at abstract = We formalize results about linear inequalities, mainly from Schrijver's book. The main results are the proof of the fundamental theorem on linear inequalities, Farkas' lemma, Carathéodory's theorem, the Farkas-Minkowski-Weyl theorem, the decomposition theorem of polyhedra, and Meyer's result that the integer hull of a polyhedron is a polyhedron itself.
Several theorems include bounds on the appearing numbers, and in particular we provide an a-priori bound on mixed-integer solutions of linear inequalities. [Linear_Programming] title = Linear Programming author = Julian Parsert , Cezary Kaliszyk topic = Mathematics/Algebra date = 2019-08-06 notify = julian.parsert@gmail.com, cezary.kaliszyk@uibk.ac.at abstract = We use the previous formalization of the general simplex algorithm to formulate an algorithm for solving linear programs. We encode the linear programs using only linear constraints. Solving these constraints also solves the original linear program. This algorithm is proven to be sound by applying the weak duality theorem which is also part of this formalization. [Differential_Game_Logic] title = Differential Game Logic author = André Platzer topic = Computer science/Programming languages/Logics date = 2019-06-03 notify = aplatzer@cs.cmu.edu abstract = This formalization provides differential game logic (dGL), a logic for proving properties of hybrid game. In addition to the syntax and semantics, it formalizes a uniform substitution calculus for dGL. Church's uniform substitutions substitute a term or formula for a function or predicate symbol everywhere. The uniform substitutions for dGL also substitute hybrid games for a game symbol everywhere. We prove soundness of one-pass uniform substitutions and the axioms of differential game logic with respect to their denotational semantics. One-pass uniform substitutions are faster by postponing soundness-critical admissibility checks with a linear pass homomorphic application and regain soundness by a variable condition at the replacements. The formalization is based on prior non-mechanized soundness proofs for dGL. [Complete_Non_Orders] title = Complete Non-Orders and Fixed Points author = Akihisa Yamada , Jérémy Dubut topic = Mathematics/Order date = 2019-06-27 notify = akihisayamada@nii.ac.jp, dubut@nii.ac.jp abstract = We develop an Isabelle/HOL library of order-theoretic concepts, such as various completeness conditions and fixed-point theorems. We keep our formalization as general as possible: we reprove several well-known results about complete orders, often without any properties of ordering, thus complete non-orders. In particular, we generalize the Knaster–Tarski theorem so that we ensure the existence of a quasi-fixed point of monotone maps over complete non-orders, and show that the set of quasi-fixed points is complete under a mild condition—attractivity—which is implied by either antisymmetry or transitivity. This result generalizes and strengthens a result by Stauti and Maaden. Finally, we recover Kleene’s fixed-point theorem for omega-complete non-orders, again using attractivity to prove that Kleene’s fixed points are least quasi-fixed points. [Priority_Search_Trees] title = Priority Search Trees author = Peter Lammich , Tobias Nipkow topic = Computer science/Data structures date = 2019-06-25 notify = lammich@in.tum.de abstract = We present a new, purely functional, simple and efficient data structure combining a search tree and a priority queue, which we call a priority search tree. The salient feature of priority search trees is that they offer a decrease-key operation, something that is missing from other simple, purely functional priority queue implementations. Priority search trees can be implemented on top of any search tree. This entry does the implementation for red-black trees. 
This entry formalizes the first part of our ITP-2019 proof pearl Purely Functional, Simple and Efficient Priority Search Trees and Applications to Prim and Dijkstra. [Prim_Dijkstra_Simple] title = Purely Functional, Simple, and Efficient Implementation of Prim and Dijkstra author = Peter Lammich , Tobias Nipkow topic = Computer science/Algorithms/Graph date = 2019-06-25 notify = lammich@in.tum.de abstract = We verify purely functional, simple and efficient implementations of Prim's and Dijkstra's algorithms. This constitutes the first verification of an executable and even efficient version of Prim's algorithm. This entry formalizes the second part of our ITP-2019 proof pearl Purely Functional, Simple and Efficient Priority Search Trees and Applications to Prim and Dijkstra. [MFOTL_Monitor] title = Formalization of a Monitoring Algorithm for Metric First-Order Temporal Logic author = Joshua Schneider , Dmitriy Traytel topic = Computer science/Algorithms, Logic/General logic/Temporal logic, Computer science/Automata and formal languages date = 2019-07-04 notify = joshua.schneider@inf.ethz.ch, traytel@inf.ethz.ch abstract = A monitor is a runtime verification tool that solves the following problem: Given a stream of time-stamped events and a policy formulated in a specification language, decide whether the policy is satisfied at every point in the stream. We verify the correctness of an executable monitor for specifications given as formulas in metric first-order temporal logic (MFOTL), an expressive extension of linear temporal logic with real-time constraints and first-order quantification. The verified monitor implements a simplified variant of the algorithm used in the efficient MonPoly monitoring tool. The formalization is presented in a RV 2019 paper, which also compares the output of the verified monitor to that of other monitoring tools on randomly generated inputs. This case study revealed several errors in the optimized but unverified tools. extra-history = Change history: [2020-08-13]: added the formalization of the abstract slicing framework and joint data slicer (revision b1639ed541b7)
[FOL_Seq_Calc1] title = A Sequent Calculus for First-Order Logic author = Asta Halkjær From contributors = Alexander Birch Jensen , Anders Schlichtkrull , Jørgen Villadsen topic = Logic/Proof theory date = 2019-07-18 notify = ahfrom@dtu.dk abstract = This work formalizes soundness and completeness of a one-sided sequent calculus for first-order logic. The completeness is shown via a translation from a complete semantic tableau calculus, the proof of which is based on the First-Order Logic According to Fitting theory. The calculi and proof techniques are taken from Ben-Ari's Mathematical Logic for Computer Science. [Szpilrajn] title = Order Extension and Szpilrajn's Extension Theorem author = Peter Zeller , Lukas Stevens topic = Mathematics/Order date = 2019-07-27 notify = p_zeller@cs.uni-kl.de abstract = This entry is concerned with the principle of order extension, i.e. the extension of an order relation to a total order relation. To this end, we prove a more general version of Szpilrajn's extension theorem employing terminology from the book "Consistency, Choice, and Rationality" by Bossert and Suzumura. We also formalize theorem 2.7 of their book. extra-history = Change history: [2021-03-22]: (by Lukas Stevens) generalise Szpilrajn's extension theorem and add material from the book "Consistency, Choice, and Rationality" [TESL_Language] title = A Formal Development of a Polychronous Polytimed Coordination Language author = Hai Nguyen Van , Frédéric Boulanger , Burkhart Wolff topic = Computer science/System description languages, Computer science/Semantics, Computer science/Concurrency date = 2019-07-30 notify = frederic.boulanger@centralesupelec.fr, burkhart.wolff@lri.fr abstract = The design of complex systems involves different formalisms for modeling their different parts or aspects. The global model of a system may therefore consist of a coordination of concurrent sub-models that use different paradigms. We develop here a theory for a language used to specify the timed coordination of such heterogeneous subsystems by addressing the following issues:

  • the behavior of the sub-systems is observed only at a series of discrete instants,
  • events may occur in different sub-systems at unrelated times, leading to polychronous systems, which do not necessarily have a common base clock,
  • coordination between subsystems involves causality, so the occurrence of an event may enforce the occurrence of other events, possibly after a certain duration has elapsed or an event has occurred a given number of times,
  • the domain of time (discrete, rational, continuous...) may be different in the subsystems, leading to polytimed systems,
  • the time frames of different sub-systems may be related (for instance, time in a GPS satellite and in a GPS receiver on Earth are related although they are not the same).
Firstly, a denotational semantics of the language is defined. Then, in order to be able to incrementally check the behavior of systems, an operational semantics is given, with proofs of progress, soundness and completeness with regard to the denotational semantics. These proofs are made according to a setup that can scale up when new operators are added to the language. In order for specifications to be composed in a clean way, the language should be invariant by stuttering (i.e., adding observation instants at which nothing happens). The proof of this invariance is also given. [Stellar_Quorums] title = Stellar Quorum Systems author = Giuliano Losa topic = Computer science/Algorithms/Distributed date = 2019-08-01 notify = giuliano@galois.com abstract = We formalize the static properties of personal Byzantine quorum systems (PBQSs) and Stellar quorum systems, as described in the paper ``Stellar Consensus by Reduction'' (to appear at DISC 2019). [IMO2019] title = Selected Problems from the International Mathematical Olympiad 2019 author = Manuel Eberl topic = Mathematics/Misc date = 2019-08-05 notify = eberlm@in.tum.de abstract =

This entry contains formalisations of the answers to three of the six problems of the International Mathematical Olympiad 2019, namely Q1, Q4, and Q5.

These problems were chosen because they are particularly amenable to formalisation: they can be solved with minimal use of libraries. The remaining three concern geometry and graph theory, which, in the author's opinion, are more difficult to formalise and require a more complex library, respectively.

[Adaptive_State_Counting] title = Formalisation of an Adaptive State Counting Algorithm author = Robert Sachtleben topic = Computer science/Automata and formal languages, Computer science/Algorithms date = 2019-08-16 notify = rob_sac@uni-bremen.de abstract = This entry provides a formalisation of a refinement of an adaptive state counting algorithm, used to test for reduction between finite state machines. The algorithm was originally presented by Hierons in the paper Testing from a Non-Deterministic Finite State Machine Using Adaptive State Counting. Definitions for finite state machines and adaptive test cases are given and many useful theorems are derived from these. The algorithm is formalised using mutually recursive functions, for which it is proven that the generated test suite is sufficient to test for reduction against finite state machines of a certain fault domain. Additionally, the algorithm is specified in a simple WHILE-language and its correctness is shown using Hoare-logic. [Jacobson_Basic_Algebra] title = A Case Study in Basic Algebra author = Clemens Ballarin topic = Mathematics/Algebra date = 2019-08-30 notify = ballarin@in.tum.de abstract = The focus of this case study is re-use in abstract algebra. It contains locale-based formalisations of selected parts of set, group and ring theory from Jacobson's Basic Algebra leading to the respective fundamental homomorphism theorems. The study is not intended as a library base for abstract algebra. It rather explores an approach towards abstract algebra in Isabelle. [Hybrid_Systems_VCs] title = Verification Components for Hybrid Systems author = Jonathan Julian Huerta y Munive <> topic = Mathematics/Algebra, Mathematics/Analysis date = 2019-09-10 notify = jjhuertaymunive1@sheffield.ac.uk, jonjulian23@gmail.com abstract = These components formalise a semantic framework for the deductive verification of hybrid systems. They support reasoning about continuous evolutions of hybrid programs in the style of differential dynamic logic. Vector fields or flows model these evolutions, and their verification is done with invariants for the former or orbits for the latter. Laws of modal Kleene algebra or categorical predicate transformers implement the verification condition generation. Examples show the approach at work. extra-history = Change history: [2020-12-13]: added components based on Kleene algebras with tests. These implement differential Hoare logic (dH) and a Morgan-style differential refinement calculus (dR) for verification of hybrid programs. [Generic_Join] title = Formalization of Multiway-Join Algorithms author = Thibault Dardinier<> topic = Computer science/Algorithms date = 2019-09-16 notify = tdardini@student.ethz.ch, traytel@inf.ethz.ch abstract = Worst-case optimal multiway-join algorithms are a recent seminal achievement of the database community. These algorithms compute the natural join of multiple relational databases and improve in the worst case over traditional query plan optimizations of nested binary joins. In 2014, Ngo, Ré, and Rudra gave a unified presentation of different multi-way join algorithms. We formalized and proved correct their "Generic Join" algorithm and extended it to support negative joins.
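A standard illustration of why multiway joins can beat binary join plans (a textbook example, not taken from the entry itself) is the triangle query $Q(a,b,c) \leftarrow R(a,b), S(b,c), T(a,c)$ where each relation holds $N$ tuples: any plan of nested binary joins may materialise $\Theta(N^2)$ intermediate tuples, whereas the output itself has at most $O(N^{3/2})$ tuples by the AGM bound, and a worst-case optimal algorithm such as Generic Join runs within that bound.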
[Aristotles_Assertoric_Syllogistic] title = Aristotle's Assertoric Syllogistic author = Angeliki Koutsoukou-Argyraki topic = Logic/Philosophical aspects date = 2019-10-08 notify = ak2110@cam.ac.uk abstract = We formalise with Isabelle/HOL some basic elements of Aristotle's assertoric syllogistic following the article from the Stanford Encyclopedia of Philosophy by Robin Smith. To this end, we use a set theoretic formulation (covering both individual and general predication). In particular, we formalise the deductions in the Figures and after that we present Aristotle's metatheoretical observation that all deductions in the Figures can in fact be reduced to either Barbara or Celarent. As the formal proofs prove to be straightforward, the interest of this entry lies in illustrating the functionality of Isabelle and high efficiency of Sledgehammer for simple exercises in philosophy. [VerifyThis2019] title = VerifyThis 2019 -- Polished Isabelle Solutions author = Peter Lammich<>, Simon Wimmer topic = Computer science/Algorithms date = 2019-10-16 notify = lammich@in.tum.de, wimmers@in.tum.de abstract = VerifyThis 2019 (http://www.pm.inf.ethz.ch/research/verifythis.html) was a program verification competition associated with ETAPS 2019. It was the 8th event in the VerifyThis competition series. In this entry, we present polished and completed versions of our solutions that we created during the competition. [ZFC_in_HOL] title = Zermelo Fraenkel Set Theory in Higher-Order Logic author = Lawrence C. Paulson topic = Logic/Set theory date = 2019-10-24 notify = lp15@cam.ac.uk abstract =

This entry is a new formalisation of ZFC set theory in Isabelle/HOL. It is logically equivalent to Obua's HOLZF; the point is to have the closest possible integration with the rest of Isabelle/HOL, minimising the amount of new notations and exploiting type classes.

There is a type V of sets and a function elts :: V => V set mapping a set to its elements. Classes simply have type V set, and a predicate identifies the small classes: those that correspond to actual sets. Type classes connected with orders and lattices are used to minimise the amount of new notation for concepts such as the subset relation, union and intersection. Basic concepts — Cartesian products, disjoint sums, natural numbers, functions, etc. — are formalised.

More advanced set-theoretic concepts, such as transfinite induction, ordinals, cardinals and the transitive closure of a set, are also provided. The definition of addition and multiplication for general sets (not just ordinals) follows Kirby.

The theory provides two type classes with the aim of facilitating developments that combine V with other Isabelle/HOL types: embeddable, the class of types that can be injected into V (including V itself as well as V*V, etc.), and small, the class of types that correspond to some ZF set.
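For intuition (our paraphrase rather than the entry's exact definitions): a type is small when its elements are in bijective correspondence with the elements of some ZF set, so every small type is also embeddable; the converse fails, since V itself is embeddable but not small, the sets not forming a set.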

extra-history = Change history: [2020-01-28]: Generalisation of the "small" predicate and order types to arbitrary sets; ordinal exponentiation; introduction of the coercion ord_of_nat :: "nat => V"; numerous new lemmas. (revision 6081d5be8d08) [Interval_Arithmetic_Word32] title = Interval Arithmetic on 32-bit Words author = Brandon Bohrer topic = Computer science/Data structures date = 2019-11-27 notify = bjbohrer@gmail.com, bbohrer@cs.cmu.edu abstract = Interval_Arithmetic implements conservative interval arithmetic computations, then uses this interval arithmetic to implement a simple programming language where all terms have 32-bit signed word values, with explicit infinities for terms outside the representable bounds. Our target use case is interpreters for languages that must have a well-understood low-level behavior. We include a formalization of bounded-length strings which are used for the identifiers of our language. Bounded-length identifiers are useful in some applications, for example the Differential_Dynamic_Logic article, where a Euclidean space indexed by identifiers demands that identifiers are finitely many. [Generalized_Counting_Sort] title = An Efficient Generalization of Counting Sort for Large, possibly Infinite Key Ranges author = Pasquale Noce topic = Computer science/Algorithms, Computer science/Functional programming date = 2019-12-04 notify = pasquale.noce.lavoro@gmail.com abstract = Counting sort is a well-known algorithm that sorts objects of any kind mapped to integer keys, or else to keys in one-to-one correspondence with some subset of the integers (e.g. alphabet letters). However, it is suitable for direct use, viz. not just as a subroutine of another sorting algorithm (e.g. radix sort), only if the key range is not significantly larger than the number of the objects to be sorted. This paper describes a tail-recursive generalization of counting sort making use of a bounded number of counters, suitable for direct use in case of a large, or even infinite key range of any kind, subject to the only constraint of being a subset of an arbitrary linear order. After performing a pen-and-paper analysis of how such algorithm has to be designed to maximize its efficiency, this paper formalizes the resulting generalized counting sort (GCsort) algorithm and then formally proves its correctness properties, namely that (a) the counters' number is maximized never exceeding the fixed upper bound, (b) objects are conserved, (c) objects get sorted, and (d) the algorithm is stable. [Poincare_Bendixson] title = The Poincaré-Bendixson Theorem author = Fabian Immler , Yong Kiam Tan topic = Mathematics/Analysis date = 2019-12-18 notify = fimmler@cs.cmu.edu, yongkiat@cs.cmu.edu abstract = The Poincaré-Bendixson theorem is a classical result in the study of (continuous) dynamical systems. Colloquially, it restricts the possible behaviors of planar dynamical systems: such systems cannot be chaotic. In practice, it is a useful tool for proving the existence of (limiting) periodic behavior in planar systems. The theorem is an interesting and challenging benchmark for formalized mathematics because proofs in the literature rely on geometric sketches and only hint at symmetric cases. It also requires a substantial background of mathematical theories, e.g., the Jordan curve theorem, real analysis, ordinary differential equations, and limiting (long-term) behavior of dynamical systems. 
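In one common formulation, the theorem states that a trajectory of a continuously differentiable planar vector field that stays in a compact region containing no equilibrium point has a periodic orbit as its ω-limit set; in particular, such a region must contain a periodic orbit.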
[Isabelle_C] title = Isabelle/C author = Frédéric Tuong , Burkhart Wolff topic = Computer science/Programming languages/Language definitions, Computer science/Semantics, Tools date = 2019-10-22 notify = tuong@users.gforge.inria.fr, wolff@lri.fr abstract = We present a framework for C code in C11 syntax deeply integrated into the Isabelle/PIDE development environment. Our framework provides an abstract interface for verification back-ends to be plugged-in independently. Thus, various techniques such as deductive program verification or white-box testing can be applied to the same source, which is part of an integrated PIDE document model. Semantic back-ends are free to choose the supported C fragment and its semantics. In particular, they can differ on the chosen memory model or the specification mechanism for framing conditions. Our framework supports semantic annotations of C sources in the form of comments. Annotations serve to locally control back-end settings, and can express the term focus to which an annotation refers. Both the logical and the syntactic context are available when semantic annotations are evaluated. As a consequence, a formula in an annotation can refer both to HOL or C variables. Our approach demonstrates the degree of maturity and expressive power the Isabelle/PIDE sub-system has achieved in recent years. Our integration technique employs Lex and Yacc style grammars to ensure efficient deterministic parsing. This is the core-module of Isabelle/C; the AFP package for Clean and Clean_wrapper as well as AutoCorres and AutoCorres_wrapper (available via git) are applications of this front-end. [Zeta_3_Irrational] title = The Irrationality of ζ(3) author = Manuel Eberl topic = Mathematics/Number theory date = 2019-12-27 notify = manuel.eberl@tum.de abstract =

This article provides a formalisation of Beukers's straightforward analytic proof that ζ(3) is irrational. This was first proven by Apéry (which is why this result is also often called ‘Apéry's Theorem’) using a more algebraic approach. This formalisation follows Filaseta's presentation of Beukers's proof.
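The strategy is the classical irrationality argument: if ζ(3) were a rational number $p/q$, then any integers $a, b$ with $a + b\,\zeta(3) \neq 0$ would satisfy $|a + b\,\zeta(3)| \geq 1/q$; Beukers exhibits (after clearing denominators) integer sequences $a_n, b_n$ with $0 < |a_n + b_n\,\zeta(3)| \longrightarrow 0$, obtained from estimates of double and triple integrals involving shifted Legendre polynomials, which yields a contradiction.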

[Hybrid_Logic] title = Formalizing a Seligman-Style Tableau System for Hybrid Logic author = Asta Halkjær From topic = Logic/General logic/Modal logic date = 2019-12-20 notify = ahfrom@dtu.dk abstract = This work is a formalization of soundness and completeness proofs for a Seligman-style tableau system for hybrid logic. The completeness result is obtained via a synthetic approach using maximally consistent sets of tableau blocks. The formalization differs from previous work in a few ways. First, to avoid the need to backtrack in the construction of a tableau, the formalized system has no unnamed initial segment, and therefore no Name rule. Second, I show that the full Bridge rule is admissible in the system. Third, I start from rules restricted to only extend the branch with new formulas, including only witnessing diamonds that are not already witnessed, and show that the unrestricted rules are admissible. Similarly, I start from simpler versions of the @-rules and show that these are sufficient. The GoTo rule is restricted using a notion of potential such that each application consumes potential and potential is earned through applications of the remaining rules. I show that if a branch can be closed then it can be closed starting from a single unit. Finally, Nom is restricted by a fixed set of allowed nominals. The resulting system should be terminating. extra-history = Change history: [2020-06-03]: The fully restricted system has been shown complete by updating the synthetic completeness proof. [Bicategory] title = Bicategories author = Eugene W. Stark topic = Mathematics/Category theory date = 2020-01-06 notify = stark@cs.stonybrook.edu abstract =

Taking as a starting point the author's previous work on developing aspects of category theory in Isabelle/HOL, this article gives a compatible formalization of the notion of "bicategory" and develops a framework within which formal proofs of facts about bicategories can be given. The framework includes a number of basic results, including the Coherence Theorem, the Strictness Theorem, pseudofunctors and biequivalence, and facts about internal equivalences and adjunctions in a bicategory. As a driving application and demonstration of the utility of the framework, it is used to give a formal proof of a theorem, due to Carboni, Kasangian, and Street, that characterizes up to biequivalence the bicategories of spans in a category with pullbacks. The formalization effort necessitated the filling-in of many details that were not evident from the brief presentation in the original paper, as well as identifying a few minor corrections along the way.

Revisions made subsequent to the first version of this article added additional material on pseudofunctors, pseudonatural transformations, modifications, and equivalence of bicategories; the main thrust being to give a proof that a pseudofunctor is a biequivalence if and only if it can be extended to an equivalence of bicategories.

extra-history = Change history: [2020-02-15]: Move ConcreteCategory.thy from Bicategory to Category3 and use it systematically. Make other minor improvements throughout. (revision a51840d36867)
[2020-11-04]: Added new material on equivalence of bicategories, with associated changes. (revision 472cb2268826)
[Subset_Boolean_Algebras] title = A Hierarchy of Algebras for Boolean Subsets author = Walter Guttmann , Bernhard Möller topic = Mathematics/Algebra date = 2020-01-31 notify = walter.guttmann@canterbury.ac.nz abstract = We present a collection of axiom systems for the construction of Boolean subalgebras of larger overall algebras. The subalgebras are defined as the range of a complement-like operation on a semilattice. This technique has been used, for example, with the antidomain operation, dynamic negation and Stone algebras. We present a common ground for these constructions based on a new equational axiomatisation of Boolean algebras. [Goodstein_Lambda] title = Implementing the Goodstein Function in λ-Calculus author = Bertram Felgenhauer topic = Logic/Rewriting date = 2020-02-21 notify = int-e@gmx.de abstract = In this formalization, we develop an implementation of the Goodstein function G in plain λ-calculus, linked to a concise, self-contained specification. The implementation works on a Church-encoded representation of countable ordinals. The initial conversion to hereditary base 2 is not covered, but the material is sufficient to compute the particular value G(16), and easily extends to other fixed arguments. [VeriComp] title = A Generic Framework for Verified Compilers author = Martin Desharnais topic = Computer science/Programming languages/Compiling date = 2020-02-10 notify = martin.desharnais@unibw.de abstract = This is a generic framework for formalizing compiler transformations. It leverages Isabelle/HOL’s locales to abstract over concrete languages and transformations. It states common definitions for language semantics, program behaviours, forward and backward simulations, and compilers. We provide generic operations, such as simulation and compiler composition, and prove general (partial) correctness theorems, resulting in reusable proof components. [Hello_World] title = Hello World author = Cornelius Diekmann , Lars Hupel topic = Computer science/Functional programming date = 2020-03-07 notify = diekmann@net.in.tum.de abstract = In this article, we present a formalization of the well-known "Hello, World!" code, including a formal framework for reasoning about IO. Our model is inspired by the handling of IO in Haskell. We start by formalizing the 🌍 and embrace the IO monad afterwards. Then we present a sample main :: IO (), followed by its proof of correctness. [WOOT_Strong_Eventual_Consistency] title = Strong Eventual Consistency of the Collaborative Editing Framework WOOT author = Emin Karayel , Edgar Gonzàlez topic = Computer science/Algorithms/Distributed date = 2020-03-25 notify = eminkarayel@google.com, edgargip@google.com, me@eminkarayel.de abstract = Commutative Replicated Data Types (CRDTs) are a promising new class of data structures for large-scale shared mutable content in applications that only require eventual consistency. The WithOut Operational Transforms (WOOT) framework is a CRDT for collaborative text editing introduced by Oster et al. (CSCW 2006) for which the eventual consistency property was verified only for a bounded model to date. We contribute a formal proof for WOOTs strong eventual consistency. [Furstenberg_Topology] title = Furstenberg's topology and his proof of the infinitude of primes author = Manuel Eberl topic = Mathematics/Number theory date = 2020-03-22 notify = manuel.eberl@tum.de abstract =

This article gives a formal version of Furstenberg's topological proof of the infinitude of primes. He defines a topology on the integers based on arithmetic progressions (or, equivalently, residue classes). Using some fairly obvious properties of this topology, the infinitude of primes is then easily obtained.

Apart from this, this topology is also fairly ‘nice’ in general: it is second countable, metrizable, and perfect. All of these (well-known) facts are formally proven, including an explicit metric for the topology given by Zulfeqarr.
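Concretely, a basis for the topology is given by the two-sided arithmetic progressions $a + b\mathbb{Z}$ with $b \neq 0$; every such progression is clopen, and if there were only finitely many primes, then $\mathbb{Z} \setminus \{-1, 1\} = \bigcup_p p\,\mathbb{Z}$ would be a finite union of closed sets, so the finite set $\{-1, 1\}$ would be open, which is impossible because every non-empty open set is infinite.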

[Saturation_Framework] title = A Comprehensive Framework for Saturation Theorem Proving author = Sophie Tourret topic = Logic/General logic/Mechanization of proofs date = 2020-04-09 notify = stourret@mpi-inf.mpg.de abstract = This Isabelle/HOL formalization is the companion of the technical report “A comprehensive framework for saturation theorem proving”, itself the companion of the eponymous IJCAR 2020 paper, written by Uwe Waldmann, Sophie Tourret, Simon Robillard and Jasmin Blanchette. It verifies a framework for formal refutational completeness proofs of abstract provers that implement saturation calculi, such as ordered resolution or superposition, and makes it possible to model entire prover architectures in such a way that the static refutational completeness of a calculus immediately implies the dynamic refutational completeness of a prover implementing the calculus using a variant of the given clause loop. The technical report “A comprehensive framework for saturation theorem proving” is available on the Matryoshka website. The names of the Isabelle lemmas and theorems corresponding to the results in the report are indicated in the margin of the report. [Saturation_Framework_Extensions] title = Extensions to the Comprehensive Framework for Saturation Theorem Proving author = Jasmin Blanchette , Sophie Tourret topic = Logic/General logic/Mechanization of proofs date = 2020-08-25 notify = jasmin.blanchette@gmail.com abstract = This Isabelle/HOL formalization extends the AFP entry Saturation_Framework with the following contributions:
  • an application of the framework to prove Bachmair and Ganzinger's resolution prover RP refutationally complete, which was formalized in a more ad hoc fashion by Schlichtkrull et al. in the AFP entry Ordered_Resolution_Prover;
  • generalizations of various basic concepts formalized by Schlichtkrull et al., which were needed to verify RP and could be useful to formalize other calculi, such as superposition;
  • alternative proofs of fairness (and hence saturation and ultimately refutational completeness) for the given clause procedures GC and LGC, based on invariance.
[MFODL_Monitor_Optimized] title = Formalization of an Optimized Monitoring Algorithm for Metric First-Order Dynamic Logic with Aggregations author = Thibault Dardinier<>, Lukas Heimes<>, Martin Raszyk , Joshua Schneider , Dmitriy Traytel topic = Computer science/Algorithms, Logic/General logic/Modal logic, Computer science/Automata and formal languages date = 2020-04-09 notify = martin.raszyk@inf.ethz.ch, joshua.schneider@inf.ethz.ch, traytel@inf.ethz.ch abstract = A monitor is a runtime verification tool that solves the following problem: Given a stream of time-stamped events and a policy formulated in a specification language, decide whether the policy is satisfied at every point in the stream. We verify the correctness of an executable monitor for specifications given as formulas in metric first-order dynamic logic (MFODL), which combines the features of metric first-order temporal logic (MFOTL) and metric dynamic logic. Thus, MFODL supports real-time constraints, first-order parameters, and regular expressions. Additionally, the monitor supports aggregation operations such as count and sum. This formalization, which is described in a forthcoming paper at IJCAR 2020, significantly extends previous work on a verified monitor for MFOTL. Apart from the addition of regular expressions and aggregations, we implemented multi-way joins and a specialized sliding window algorithm to further optimize the monitor. [Sliding_Window_Algorithm] title = Formalization of an Algorithm for Greedily Computing Associative Aggregations on Sliding Windows author = Lukas Heimes<>, Dmitriy Traytel , Joshua Schneider<> topic = Computer science/Algorithms date = 2020-04-10 notify = heimesl@student.ethz.ch, traytel@inf.ethz.ch, joshua.schneider@inf.ethz.ch abstract = Basin et al.'s sliding window algorithm (SWA) is an algorithm for combining the elements of subsequences of a sequence with an associative operator. It is greedy and minimizes the number of operator applications. We formalize the algorithm and verify its functional correctness. We extend the algorithm with additional operations and provide an alternative interface to the slide operation that does not require the entire input sequence. [Lucas_Theorem] title = Lucas's Theorem author = Chelsea Edmonds topic = Mathematics/Number theory date = 2020-04-07 notify = cle47@cam.ac.uk abstract = This work presents a formalisation of a generating function proof for Lucas's theorem. We first outline extensions to the existing Formal Power Series (FPS) library, including an equivalence relation for coefficients modulo n, an alternate binomial theorem statement, and a formalised proof of the Freshman's dream (mod p) lemma. The second part of the work presents the formal proof of Lucas's Theorem. Working backwards, the formalisation first proves a well known corollary of the theorem which is easier to formalise, and then applies induction to prove the original theorem statement. The proof of the corollary aims to provide a good example of a formalised generating function equivalence proof using the FPS library. The final theorem statement is intended to be integrated into the formalised proof of Hilbert's 10th Problem. 
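For reference, Lucas's theorem states that for a prime $p$ and non-negative integers $m$ and $n$ with base-$p$ expansions $m = \sum_i m_i p^i$ and $n = \sum_i n_i p^i$, one has \[\binom{m}{n} \;\equiv\; \prod_i \binom{m_i}{n_i} \pmod{p},\] and the Freshman's dream lemma mentioned above is the coefficient-wise congruence $(X + Y)^p \equiv X^p + Y^p \pmod{p}$.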
[ADS_Functor] title = Authenticated Data Structures As Functors author = Andreas Lochbihler , Ognjen Marić topic = Computer science/Data structures date = 2020-04-16 notify = andreas.lochbihler@digitalasset.com, mail@andreas-lochbihler.de abstract = Authenticated data structures allow several systems to convince each other that they are referring to the same data structure, even if each of them knows only a part of the data structure. Using inclusion proofs, knowledgeable systems can selectively share their knowledge with other systems and the latter can verify the authenticity of what is being shared. In this article, we show how to modularly define authenticated data structures, their inclusion proofs, and operations thereon as datatypes in Isabelle/HOL, using a shallow embedding. Modularity allows us to construct complicated trees from reusable building blocks, which we call Merkle functors. Merkle functors include sums, products, and function spaces and are closed under composition and least fixpoints. As a practical application, we model the hierarchical transactions of Canton, a practical interoperability protocol for distributed ledgers, as authenticated data structures. This is a first step towards formalizing the Canton protocol and verifying its integrity and security guarantees. [Power_Sum_Polynomials] title = Power Sum Polynomials author = Manuel Eberl topic = Mathematics/Algebra date = 2020-04-24 notify = eberlm@in.tum.de abstract =

This article provides a formalisation of the symmetric multivariate polynomials known as power sum polynomials. These are of the form $p_n(X_1, \ldots, X_k) = X_1^n + \ldots + X_k^n$. A formal proof of the Girard–Newton Theorem is also given. This theorem relates the power sum polynomials to the elementary symmetric polynomials $s_k$ in the form of a recurrence relation \[(-1)^k\, k\, s_k = \sum_{i \in [0,k)} (-1)^i\, s_i\, p_{k-i}\,.\]

As an application, this is then used to solve a generalised form of a puzzle given as an exercise in Dummit and Foote's Abstract Algebra: For $k$ complex unknowns $x_1, \ldots, x_k$, define $p_j := x_1^j + \ldots + x_k^j$. Then for each vector $a \in \mathbb{C}^k$, show that there is exactly one solution to the system $p_1 = a_1, \ldots, p_k = a_k$ up to permutation of the $x_i$, and determine the value of $p_i$ for $i > k$.

[Formal_Puiseux_Series] title = Formal Puiseux Series author = Manuel Eberl topic = Mathematics/Algebra date = 2021-02-17 notify = eberlm@in.tum.de abstract =

Formal Puiseux series are generalisations of formal power series and formal Laurent series that also allow for fractional exponents. They have the following general form: \[\sum_{i=N}^\infty a_{i/d} X^{i/d}\] where N is an integer and d is a positive integer.

This entry defines these series including their basic algebraic properties. Furthermore, it proves the Newton–Puiseux Theorem, namely that the Puiseux series over an algebraically closed field of characteristic 0 are also algebraically closed.
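For instance, the two solutions of $y^2 = x$ in such a field are the Puiseux series $\pm x^{1/2}$; the Newton–Puiseux Theorem guarantees that every non-constant polynomial equation over the Puiseux series (with an algebraically closed coefficient field of characteristic 0) is solved by series of this fractional-exponent form.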

[Gaussian_Integers] title = Gaussian Integers author = Manuel Eberl topic = Mathematics/Number theory date = 2020-04-24 notify = eberlm@in.tum.de abstract =

The Gaussian integers are the subring ℤ[i] of the complex numbers, i. e. the ring of all complex numbers with integral real and imaginary part. This article provides a definition of this ring as well as proofs of various basic properties, such as that they form a Euclidean ring and a full classification of their primes. An executable (albeit not very efficient) factorisation algorithm is also provided.

Lastly, this Gaussian integer formalisation is used in two short applications:

  1. The characterisation of all positive integers that can be written as sums of two squares
  2. Euclid's formula for primitive Pythagorean triples

While elementary proofs for both of these are already available in the AFP, the theory of Gaussian integers provides more concise proofs and a more high-level view.
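For reference, the two classical statements in question are that a positive integer is a sum of two squares if and only if every prime $p \equiv 3 \pmod 4$ occurs to an even power in its prime factorisation, and that the primitive Pythagorean triples are, up to the order of the two legs, exactly the triples $(m^2 - n^2,\; 2mn,\; m^2 + n^2)$ with coprime integers $m > n > 0$ of opposite parity.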

[Forcing] title = Formalization of Forcing in Isabelle/ZF author = Emmanuel Gunther , Miguel Pagano , Pedro Sánchez Terraf topic = Logic/Set theory date = 2020-05-06 notify = gunther@famaf.unc.edu.ar, pagano@famaf.unc.edu.ar, sterraf@famaf.unc.edu.ar abstract = We formalize the theory of forcing in the set theory framework of Isabelle/ZF. Under the assumption of the existence of a countable transitive model of ZFC, we construct a proper generic extension and show that the latter also satisfies ZFC. [Delta_System_Lemma] title = Cofinality and the Delta System Lemma author = Pedro Sánchez Terraf topic = Mathematics/Combinatorics, Logic/Set theory date = 2020-12-27 notify = sterraf@famaf.unc.edu.ar abstract = We formalize the basic results on cofinality of linearly ordered sets and ordinals and Šanin’s Lemma for uncountable families of finite sets. This last result is used to prove the countable chain condition for Cohen posets. We work in the set theory framework of Isabelle/ZF, using the Axiom of Choice as needed. [Recursion-Addition] title = Recursion Theorem in ZF author = Georgy Dunaev topic = Logic/Set theory date = 2020-05-11 notify = georgedunaev@gmail.com abstract = This document contains a proof of the recursion theorem. This is a mechanization of the proof of the recursion theorem from the text Introduction to Set Theory, by Karel Hrbacek and Thomas Jech. This implementation may be used as the basis for a model of Peano arithmetic in ZF. While recursion and the natural numbers are already available in Isabelle/ZF, this clean development is much easier to follow. [LTL_Normal_Form] title = An Efficient Normalisation Procedure for Linear Temporal Logic: Isabelle/HOL Formalisation author = Salomon Sickert topic = Computer science/Automata and formal languages, Logic/General logic/Temporal logic date = 2020-05-08 notify = s.sickert@tum.de abstract = In the mid 80s, Lichtenstein, Pnueli, and Zuck proved a classical theorem stating that every formula of Past LTL (the extension of LTL with past operators) is equivalent to a formula of the form $\bigwedge_{i=1}^n \mathbf{G}\mathbf{F} \varphi_i \vee \mathbf{F}\mathbf{G} \psi_i$, where $\varphi_i$ and $\psi_i$ contain only past operators. Some years later, Chang, Manna, and Pnueli built on this result to derive a similar normal form for LTL. Both normalisation procedures have a non-elementary worst-case blow-up, and follow an involved path from formulas to counter-free automata to star-free regular expressions and back to formulas. We improve on both points. We present an executable formalisation of a direct and purely syntactic normalisation procedure for LTL yielding a normal form, comparable to the one by Chang, Manna, and Pnueli, that has only a single exponential blow-up. [Matrices_for_ODEs] title = Matrices for ODEs author = Jonathan Julian Huerta y Munive topic = Mathematics/Analysis, Mathematics/Algebra date = 2020-04-19 notify = jonjulian23@gmail.com abstract = Our theories formalise various matrix properties that serve to establish existence, uniqueness and characterisation of the solution to affine systems of ordinary differential equations (ODEs). In particular, we formalise the operator and maximum norm of matrices. Then we use them to prove that square matrices form a Banach space, and in this setting, we show an instance of Picard-Lindelöf’s theorem for affine systems of ODEs. Finally, we use this formalisation to verify three simple hybrid programs. 
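As standard background (not a claim about the entry's exact lemma statements), the flow of an affine system $x'(t) = A\,x(t) + b$ from the initial state $x(0) = x_0$ is \[x(t) = e^{tA} x_0 + \int_0^t e^{(t-s)A}\, b \,\mathrm{d}s,\] and it is in establishing the existence, uniqueness and this characterisation of solutions that the matrix norms, the Banach-space structure and the Picard-Lindelöf argument are used.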
[Irrational_Series_Erdos_Straus] title = Irrationality Criteria for Series by Erdős and Straus author = Angeliki Koutsoukou-Argyraki , Wenda Li topic = Mathematics/Number theory, Mathematics/Analysis date = 2020-05-12 notify = ak2110@cam.ac.uk, wl302@cam.ac.uk, liwenda1990@hotmail.com abstract = We formalise certain irrationality criteria for infinite series of the form: \[\sum_{n=1}^\infty \frac{b_n}{\prod_{i=1}^n a_i} \] where $\{b_n\}$ is a sequence of integers and $\{a_n\}$ a sequence of positive integers with $a_n >1$ for all large n. The results are due to P. Erdős and E. G. Straus [1]. In particular, we formalise Theorem 2.1, Corollary 2.10 and Theorem 3.1. The latter is an application of Theorem 2.1 involving the prime numbers. [Knuth_Bendix_Order] title = A Formalization of Knuth–Bendix Orders author = Christian Sternagel , René Thiemann topic = Logic/Rewriting date = 2020-05-13 notify = c.sternagel@gmail.com, rene.thiemann@uibk.ac.at abstract = We define a generalized version of Knuth–Bendix orders, including subterm coefficient functions. For these orders we formalize several properties such as strong normalization, the subterm property, closure properties under substitutions and contexts, as well as ground totality. [Stateful_Protocol_Composition_and_Typing] title = Stateful Protocol Composition and Typing author = Andreas V. Hess , Sebastian Mödersheim , Achim D. Brucker topic = Computer science/Security date = 2020-04-08 notify = avhe@dtu.dk, andreasvhess@gmail.com, samo@dtu.dk, brucker@spamfence.net, andschl@dtu.dk abstract = We provide in this AFP entry several relative soundness results for security protocols. In particular, we prove typing and compositionality results for stateful protocols (i.e., protocols with mutable state that may span several sessions), and that focuses on reachability properties. Such results are useful to simplify protocol verification by reducing it to a simpler problem: Typing results give conditions under which it is safe to verify a protocol in a typed model where only "well-typed" attacks can occur whereas compositionality results allow us to verify a composed protocol by only verifying the component protocols in isolation. The conditions on the protocols under which the results hold are furthermore syntactic in nature allowing for full automation. The foundation presented here is used in another entry to provide fully automated and formalized security proofs of stateful protocols. [Automated_Stateful_Protocol_Verification] title = Automated Stateful Protocol Verification author = Andreas V. Hess , Sebastian Mödersheim , Achim D. Brucker , Anders Schlichtkrull topic = Computer science/Security, Tools date = 2020-04-08 notify = avhe@dtu.dk, andreasvhess@gmail.com, samo@dtu.dk, brucker@spamfence.net, andschl@dtu.dk abstract = In protocol verification we observe a wide spectrum from fully automated methods to interactive theorem proving with proof assistants like Isabelle/HOL. In this AFP entry, we present a fully-automated approach for verifying stateful security protocols, i.e., protocols with mutable state that may span several sessions. The approach supports reachability goals like secrecy and authentication. We also include a simple user-friendly transaction-based protocol specification language that is embedded into Isabelle. 
[Smith_Normal_Form] title = A verified algorithm for computing the Smith normal form of a matrix author = Jose Divasón topic = Mathematics/Algebra, Computer science/Algorithms/Mathematical date = 2020-05-23 notify = jose.divason@unirioja.es abstract = This work presents a formal proof in Isabelle/HOL of an algorithm to transform a matrix into its Smith normal form, a canonical matrix form, in a general setting: the algorithm is parameterized by operations to prove its existence over elementary divisor rings, while execution is guaranteed over Euclidean domains. We also provide a formal proof on some results about the generality of this algorithm as well as the uniqueness of the Smith normal form. Since Isabelle/HOL does not feature dependent types, the development is carried out switching conveniently between two different existing libraries: the Hermite normal form (based on HOL Analysis) and the Jordan normal form AFP entries. This permits to reuse results from both developments and it is done by means of the lifting and transfer package together with the use of local type definitions. [Nash_Williams] title = The Nash-Williams Partition Theorem author = Lawrence C. Paulson topic = Mathematics/Combinatorics date = 2020-05-16 notify = lp15@cam.ac.uk abstract = In 1965, Nash-Williams discovered a generalisation of the infinite form of Ramsey's theorem. Where the latter concerns infinite sets of n-element sets for some fixed n, the Nash-Williams theorem concerns infinite sets of finite sets (or lists) subject to a “no initial segment” condition. The present formalisation follows a monograph on Ramsey Spaces by Todorčević. [Safe_Distance] title = A Formally Verified Checker of the Safe Distance Traffic Rules for Autonomous Vehicles author = Albert Rizaldi , Fabian Immler topic = Computer science/Algorithms/Mathematical, Mathematics/Physics date = 2020-06-01 notify = albert.rizaldi@ntu.edu.sg, fimmler@andrew.cmu.edu, martin.rau@tum.de abstract = The Vienna Convention on Road Traffic defines the safe distance traffic rules informally. This could make autonomous vehicle liable for safe-distance-related accidents because there is no clear definition of how large a safe distance is. We provide a formally proven prescriptive definition of a safe distance, and checkers which can decide whether an autonomous vehicle is obeying the safe distance rule. Not only does our work apply to the domain of law, but it also serves as a specification for autonomous vehicle manufacturers and for online verification of path planners. [Relational_Paths] title = Relational Characterisations of Paths author = Walter Guttmann , Peter Höfner topic = Mathematics/Graph theory date = 2020-07-13 notify = walter.guttmann@canterbury.ac.nz, peter@hoefner-online.de abstract = Binary relations are one of the standard ways to encode, characterise and reason about graphs. Relation algebras provide equational axioms for a large fragment of the calculus of binary relations. Although relations are standard tools in many areas of mathematics and computing, researchers usually fall back to point-wise reasoning when it comes to arguments about paths in a graph. We present a purely algebraic way to specify different kinds of paths in Kleene relation algebras, which are relation algebras equipped with an operation for reflexive transitive closure. We study the relationship between paths with a designated root vertex and paths without such a vertex. Since we stay in first-order logic this development helps with mechanising proofs. 
To demonstrate the applicability of the algebraic framework we verify the correctness of three basic graph algorithms. [Amicable_Numbers] title = Amicable Numbers author = Angeliki Koutsoukou-Argyraki topic = Mathematics/Number theory date = 2020-08-04 notify = ak2110@cam.ac.uk abstract = This is a formalisation of Amicable Numbers, involving some relevant material including Euler's sigma function, some relevant definitions, results and examples as well as rules such as Thābit ibn Qurra's Rule, Euler's Rule, te Riele's Rule and Borho's Rule with breeders. [Ordinal_Partitions] title = Ordinal Partitions author = Lawrence C. Paulson topic = Mathematics/Combinatorics, Logic/Set theory date = 2020-08-03 notify = lp15@cam.ac.uk abstract = The theory of partition relations concerns generalisations of Ramsey's theorem. For any ordinal $\alpha$, write $\alpha \to (\alpha, m)^2$ if for each function $f$ from unordered pairs of elements of $\alpha$ into $\{0,1\}$, either there is a subset $X\subseteq \alpha$ order-isomorphic to $\alpha$ such that $f\{x,y\}=0$ for all $\{x,y\}\subseteq X$, or there is an $m$ element set $Y\subseteq \alpha$ such that $f\{x,y\}=1$ for all $\{x,y\}\subseteq Y$. (In both cases, with $\{x,y\}$ we require $x\not=y$.) In particular, the infinite Ramsey theorem can be written in this notation as $\omega \to (\omega, \omega)^2$, or if we restrict $m$ to the positive integers as above, then $\omega \to (\omega, m)^2$ for all $m$. This entry formalises Larson's proof of $\omega^\omega \to (\omega^\omega, m)^2$ along with a similar proof of a result due to Specker: $\omega^2 \to (\omega^2, m)^2$. Also proved is a necessary result by Erdős and Milner: $\omega^{1+\alpha\cdot n} \to (\omega^{1+\alpha}, 2^n)^2$. [Relational_Disjoint_Set_Forests] title = Relational Disjoint-Set Forests author = Walter Guttmann topic = Computer science/Data structures date = 2020-08-26 notify = walter.guttmann@canterbury.ac.nz abstract = We give a simple relation-algebraic semantics of read and write operations on associative arrays. The array operations seamlessly integrate with assignments in the Hoare-logic library. Using relation algebras and Kleene algebras we verify the correctness of an array-based implementation of disjoint-set forests with a naive union operation and a find operation with path compression. [PAC_Checker] title = Practical Algebraic Calculus Checker author = Mathias Fleury , Daniela Kaufmann topic = Computer science/Algorithms date = 2020-08-31 notify = mathias.fleury@jku.at abstract = Generating and checking proof certificates is important to increase the trust in automated reasoning tools. In recent years formal verification using computer algebra became more important and is heavily used in automated circuit verification. An existing proof format which covers algebraic reasoning and allows efficient proof checking is the practical algebraic calculus (PAC). In this development, we present the verified checker Pastèque that is obtained by synthesis via the Refinement Framework. This is the formalization going with our FMCAD'20 tool presentation. [BirdKMP] title = Putting the `K' into Bird's derivation of Knuth-Morris-Pratt string matching author = Peter Gammie topic = Computer science/Functional programming date = 2020-08-25 notify = peteg42@gmail.com abstract = Richard Bird and collaborators have proposed a derivation of an intricate cyclic program that implements the Morris-Pratt string matching algorithm. 
Here we provide a proof of total correctness for Bird's derivation and complete it by adding Knuth's optimisation. [Extended_Finite_State_Machines] title = A Formal Model of Extended Finite State Machines author = Michael Foster , Achim D. Brucker , Ramsay G. Taylor , John Derrick topic = Computer science/Automata and formal languages date = 2020-09-07 notify = jmafoster1@sheffield.ac.uk, adbrucker@0x5f.org abstract = In this AFP entry, we provide a formalisation of extended finite state machines (EFSMs) where models are represented as finite sets of transitions between states. EFSMs execute traces to produce observable outputs. We also define various simulation and equality metrics for EFSMs in terms of traces and prove their strengths in relation to each other. Another key contribution is a framework of function definitions such that LTL properties can be phrased over EFSMs. Finally, we provide a simple example case study in the form of a drinks machine. [Extended_Finite_State_Machine_Inference] title = Inference of Extended Finite State Machines author = Michael Foster , Achim D. Brucker , Ramsay G. Taylor , John Derrick topic = Computer science/Automata and formal languages date = 2020-09-07 notify = jmafoster1@sheffield.ac.uk, adbrucker@0x5f.org abstract = In this AFP entry, we provide a formal implementation of a state-merging technique to infer extended finite state machines (EFSMs), complete with output and update functions, from black-box traces. In particular, we define the subsumption in context relation as a means of determining whether one transition is able to account for the behaviour of another. Building on this, we define the direct subsumption relation, which lifts the subsumption in context relation to EFSM level such that we can use it to determine whether it is safe to merge a given pair of transitions. Key proofs include the conditions necessary for subsumption to occur and that subsumption and direct subsumption are preorder relations. We also provide a number of different heuristics which can be used to abstract away concrete values into registers so that more states and transitions can be merged and provide proofs of the various conditions which must hold for these abstractions to subsume their ungeneralised counterparts. A Code Generator setup to create executable Scala code is also defined. [Physical_Quantities] title = A Sound Type System for Physical Quantities, Units, and Measurements author = Simon Foster , Burkhart Wolff topic = Mathematics/Physics, Computer science/Programming languages/Type systems date = 2020-10-20 notify = simon.foster@york.ac.uk, wolff@lri.fr abstract = The present Isabelle theory builds a formal model for both the International System of Quantities (ISQ) and the International System of Units (SI), which are both fundamental for physics and engineering. Both the ISQ and the SI are deeply integrated into Isabelle's type system. Quantities are parameterised by dimension types, which correspond to base vectors, and thus only quantities of the same dimension can be equated. Since the underlying "algebra of quantities" induces congruences on quantity and SI types, specific tactic support is developed to capture these. Our construction is validated by a test-set of known equivalences between both quantities and SI units. Moreover, the presented theory can be used for type-safe conversions between the SI system and others, like the British Imperial System (BIS). 
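To convey the general idea of dimension-indexed quantities in a mainstream functional language, the following minimal sketch (hypothetical names, not the entry's Isabelle/HOL encoding) shows how a phantom type parameter already lets the type checker reject the addition of quantities with different dimensions:

  {-# LANGUAGE GeneralizedNewtypeDeriving #-}
  -- Minimal sketch: a numeric quantity tagged with a phantom dimension type.
  module UnitsSketch where

  newtype Quantity dim = Quantity Double
    deriving (Show, Num)

  -- Marker types for dimensions (hypothetical names).
  data Metre
  data Second

  distance :: Quantity Metre
  distance = Quantity 100.0

  time :: Quantity Second
  time = Quantity 9.58

  -- Summing two lengths type-checks; 'distance + time' is rejected,
  -- since Quantity Metre and Quantity Second are different types.
  totalDistance :: Quantity Metre
  totalDistance = distance + distance

In the entry itself the dimension parameter corresponds to a vector over the base dimensions rather than a bare marker type, which is what allows derived quantities and unit conversions to be handled as well.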
[Shadow_DOM] title = A Formal Model of the Document Object Model with Shadow Roots author = Achim D. Brucker , Michael Herzberg topic = Computer science/Data structures date = 2020-09-28 notify = adbrucker@0x5f.org, mail@michael-herzberg.de abstract = In this AFP entry, we extend our formalization of the core DOM with Shadow Roots. Shadow roots are a recent proposal of the web community to support a component-based development approach for client-side web applications. Shadow roots are a significant extension to the DOM standard and, as web standards are condemned to be backward compatible, such extensions often result in complex specifications that may contain unwanted subtleties that can be detected by a formalization. Our Isabelle/HOL formalization is, in the sense of object-orientation, an extension of our formalization of the core DOM and enjoys the same basic properties, i.e., it is extensible, i.e., it can be extended without the need to re-prove already proven properties, and executable, i.e., we can generate executable code from our specification. We exploit the executability to show that our formalization complies with the official standard of the W3C, respectively, the WHATWG. [DOM_Components] title = A Formalization of Web Components author = Achim D. Brucker , Michael Herzberg topic = Computer science/Data structures date = 2020-09-28 notify = adbrucker@0x5f.org, mail@michael-herzberg.de abstract = While the DOM with shadow trees provides the technical basis for defining web components, the DOM standard neither defines the concept of web components nor specifies the safety properties that web components should guarantee. Consequently, the standard also does not discuss how or even if the methods for modifying the DOM respect component boundaries. In this AFP entry, we present a formally verified model of web components and define safety properties which ensure that different web components can only interact with each other using well-defined interfaces. Moreover, our verification of the application programming interface (API) of the DOM revealed numerous invariants that implementations of the DOM API need to preserve to ensure the integrity of components. [Interpreter_Optimizations] title = Inline Caching and Unboxing Optimization for Interpreters author = Martin Desharnais topic = Computer science/Programming languages/Misc date = 2020-12-07 notify = martin.desharnais@unibw.de abstract = This Isabelle/HOL formalization builds on the VeriComp entry of the Archive of Formal Proofs to provide the following contributions:
  • an operational semantics for a realistic virtual machine (Std) for dynamically typed programming languages;
  • the formalization of an inline caching optimization (Inca), a proof of bisimulation with (Std), and a compilation function;
  • the formalization of an unboxing optimization (Ubx), a proof of bisimulation with (Inca), and a simple compilation function.
This formalization was described in the CPP 2021 paper Towards Efficient and Verified Virtual Machines for Dynamic Languages [Isabelle_Marries_Dirac] title = Isabelle Marries Dirac: a Library for Quantum Computation and Quantum Information author = Anthony Bordg , Hanna Lachnitt, Yijun He topic = Computer science/Algorithms/Quantum computing, Mathematics/Physics/Quantum information date = 2020-11-22 notify = apdb3@cam.ac.uk, lachnitt@stanford.edu abstract = This work is an effort to formalise some quantum algorithms and results in quantum information theory. Formal methods being critical for the safety and security of algorithms and protocols, we foresee their widespread use for quantum computing in the future. We have developed a large library for quantum computing in Isabelle based on a matrix representation for quantum circuits, successfully formalising the no-cloning theorem, quantum teleportation, Deutsch's algorithm, the Deutsch-Jozsa algorithm and the quantum Prisoner's Dilemma. [Projective_Measurements] title = Quantum projective measurements and the CHSH inequality author = Mnacho Echenim topic = Computer science/Algorithms/Quantum computing, Mathematics/Physics/Quantum information date = 2021-03-03 notify = mnacho.echenim@univ-grenoble-alpes.fr abstract = This work contains a formalization of quantum projective measurements, also known as von Neumann measurements, which are based on elements of spectral theory. We also formalized the CHSH inequality, an inequality involving expectations in a probability space that is violated by quantum measurements, thus proving that quantum mechanics cannot be modeled with an underlying local hidden-variable theory. [Finite-Map-Extras] title = Finite Map Extras author = Javier Díaz topic = Computer science/Data structures date = 2020-10-12 notify = javier.diaz.manzi@gmail.com abstract = This entry includes useful syntactic sugar, new operators and functions, and their associated lemmas for finite maps which currently are not present in the standard Finite_Map theory. [Relational_Minimum_Spanning_Trees] title = Relational Minimum Spanning Tree Algorithms author = Walter Guttmann , Nicolas Robinson-O'Brien<> topic = Computer science/Algorithms/Graph date = 2020-12-08 notify = walter.guttmann@canterbury.ac.nz abstract = We verify the correctness of Prim's, Kruskal's and Borůvka's minimum spanning tree algorithms based on algebras for aggregation and minimisation. [Topological_Semantics] title = Topological semantics for paraconsistent and paracomplete logics author = David Fuenmayor topic = Logic/General logic date = 2020-12-17 notify = davfuenmayor@gmail.com abstract = We introduce a generalized topological semantics for paraconsistent and paracomplete logics by drawing upon early works on topological Boolean algebras (cf. works by Kuratowski, Zarycki, McKinsey & Tarski, etc.). In particular, this work exemplarily illustrates the shallow semantical embeddings approach (SSE) employing the proof assistant Isabelle/HOL. By means of the SSE technique we can effectively harness theorem provers, model finders and 'hammers' for reasoning with quantified non-classical logics. [CSP_RefTK] title = The HOL-CSP Refinement Toolkit author = Safouan Taha , Burkhart Wolff , Lina Ye topic = Computer science/Concurrency/Process calculi, Computer science/Semantics date = 2020-11-19 notify = wolff@lri.fr abstract = We use a formal development for CSP, called HOL-CSP2.0, to analyse a family of refinement notions, comprising classic and new ones. 
This analysis enables us to derive a number of properties that deepen the understanding of these notions, in particular with respect to specification decomposition principles for the case of infinite sets of events. The established relations between the refinement relations help to clarify some obscure points in the CSP literature, but also provide a weapon for shorter refinement proofs. Furthermore, we provide a framework for state-normalisation allowing us to reason formally about parameterised process architectures. As a result, we have a modern environment for formal proofs of concurrent systems that allows for the combination of general infinite processes with locally finite ones in a logically safe way. We demonstrate these verification techniques on classical, generalised examples: the CopyBuffer for arbitrary data and Dijkstra's Dining Philosophers Problem of arbitrary size. [Hood_Melville_Queue] title = Hood-Melville Queue author = Alejandro Gómez-Londoño topic = Computer science/Data structures date = 2021-01-18 notify = nipkow@in.tum.de abstract = This is a verified implementation of a constant-time queue. The original design is due to Hood and Melville. This formalization follows the presentation in Purely Functional Data Structures by Okasaki. [JinjaDCI] title = JinjaDCI: a Java semantics with dynamic class initialization author = Susannah Mansky topic = Computer science/Programming languages/Language definitions date = 2021-01-11 notify = sjohnsn2@illinois.edu, susannahej@gmail.com abstract = We extend Jinja to include static fields, methods, and instructions, and dynamic class initialization, based on the Java SE 8 specification. This includes extension of definitions and proofs. This work is partially described in Mansky and Gunter's paper at CPP 2019 and Mansky's doctoral thesis (UIUC, 2020). [Blue_Eyes] title = Solution to the xkcd Blue Eyes puzzle author = Jakub Kądziołka topic = Logic/General logic/Logics of knowledge and belief date = 2021-01-30 notify = kuba@kadziolka.net abstract = In a puzzle published by Randall Munroe, perfect logicians forbidden from communicating are stranded on an island, and may only leave once they have figured out their own eye color. We present a method of modeling the behavior of perfect logicians and formalize a solution of the puzzle. [Laws_of_Large_Numbers] title = The Laws of Large Numbers author = Manuel Eberl topic = Mathematics/Probability theory date = 2021-02-10 notify = eberlm@in.tum.de abstract =

The Law of Large Numbers states that, informally, if one performs a random experiment $X$ many times and takes the average of the results, that average will be very close to the expected value $E[X]$.

More formally, let $(X_i)_{i\in\mathbb{N}}$ be a sequence of independent and identically distributed random variables whose expected value $E[X_1]$ exists. Denote the running average of $X_1, \ldots, X_n$ by $\overline{X}_n$. Then:

  • The Weak Law of Large Numbers states that $\overline{X}_{n} \longrightarrow E[X_1]$ in probability for $n\to\infty$, i.e. $\mathcal{P}(|\overline{X}_{n} - E[X_1]| > \varepsilon) \longrightarrow 0$ as $n\to\infty$ for any $\varepsilon > 0$.
  • The Strong Law of Large Numbers states that $\overline{X}_{n} \longrightarrow E[X_1]$ almost surely for $n\to\infty$, i.e. $\mathcal{P}(\overline{X}_{n} \longrightarrow E[X_1]) = 1$.

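As a concrete (informal) illustration, not itself part of the formalisation: for i.i.d. fair coin flips, i.e. $\mathcal{P}(X_i = 1) = \mathcal{P}(X_i = 0) = \tfrac{1}{2}$, both laws assert that the observed fraction of heads $\overline{X}_n$ converges to $E[X_1] = \tfrac{1}{2}$ (in probability for the weak law, and with probability one for the strong law).
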
In this entry, I formally prove the strong law and from it the weak law. The approach used for the proof of the strong law is a particularly quick and slick one based on ergodic theory, which was formalised by Gouëzel in another AFP entry.

[BTree] title = A Verified Imperative Implementation of B-Trees author = Niels Mündler topic = Computer science/Data structures date = 2021-02-24 notify = n.muendler@tum.de abstract = In this work, we use the interactive theorem prover Isabelle/HOL to verify an imperative implementation of the classical B-tree data structure invented by Bayer and McCreight [ACM 1970]. The - implementation supports set membership and insertion queries with + implementation supports set membership, insertion and deletion queries with efficient binary search for intra-node navigation. This is accomplished by first specifying the structure abstractly in the functional modeling language HOL and proving functional correctness. Using manual refinement, we derive an imperative implementation in Imperative/HOL. We show the validity of this refinement using the separation logic utilities from the Isabelle Refinement Framework . The code can be exported to - the programming languages SML and Scala. We examine the runtime of all + the programming languages SML, OCaml and Scala. We examine the runtime of all operations indirectly by reproducing results of the logarithmic relationship between height and the number of nodes. The results are discussed in greater detail in the corresponding Bachelor's Thesis. +extra-history = + Change history: + [2021-05-02]: + Add implementation and proof of correctness of imperative deletion operations. + Further add the option to export code to OCaml. +
[Sunflowers] title = The Sunflower Lemma of Erdős and Rado author = René Thiemann topic = Mathematics/Combinatorics date = 2021-02-25 notify = rene.thiemann@uibk.ac.at abstract = We formally define sunflowers and provide a formalization of the sunflower lemma of Erdős and Rado: whenever a set of size-k-sets has a larger cardinality than (r - 1)^k · k!, then it contains a sunflower of cardinality r. [Mereology] title = Mereology author = Ben Blumson topic = Logic/Philosophical aspects date = 2021-03-01 notify = benblumson@gmail.com abstract = We use Isabelle/HOL to verify elementary theorems and alternative axiomatizations of classical extensional mereology. [Modular_arithmetic_LLL_and_HNF_algorithms] title = Two algorithms based on modular arithmetic: lattice basis reduction and Hermite normal form computation author = Ralph Bottesch <>, Jose Divasón , René Thiemann topic = Computer science/Algorithms/Mathematical date = 2021-03-12 notify = rene.thiemann@uibk.ac.at abstract = We verify two algorithms for which modular arithmetic plays an essential role: Storjohann's variant of the LLL lattice basis reduction algorithm and Kopparty's algorithm for computing the Hermite normal form of a matrix. To do this, we also formalize some facts about the modulo operation with symmetric range. Our implementations are based on the original papers, but are otherwise efficient. For basis reduction we formalize two versions: one that includes all of the optimizations/heuristics from Storjohann's paper, and one excluding a heuristic that we observed to often decrease efficiency. We also provide a fast, self-contained certifier for basis reduction, based on the efficient Hermite normal form algorithm. [Constructive_Cryptography_CM] title = Constructive Cryptography in HOL: the Communication Modeling Aspect author = Andreas Lochbihler , S. Reza Sefidgar <> topic = Computer science/Security/Cryptography, Mathematics/Probability theory date = 2021-03-17 notify = mail@andreas-lochbihler.de, reza.sefidgar@inf.ethz.ch abstract = Constructive Cryptography (CC) [ICS 2011, TOSCA 2011, TCC 2016] introduces an abstract approach to composable security statements that allows one to focus on a particular aspect of security proofs at a time. Instead of proving the properties of concrete systems, CC studies system classes, i.e., the shared behavior of similar systems, and their transformations. Modeling of systems communication plays a crucial role in composability and reusability of security statements; yet, this aspect has not been studied in any of the existing CC results. We extend our previous CC formalization [Constructive_Cryptography, CSF 2019] with a new semantic domain called Fused Resource Templates (FRT) that abstracts over the systems communication patterns in CC proofs. This widens the scope of cryptography proof formalizations in the CryptHOL library [CryptHOL, ESOP 2016, J Cryptol 2020]. This formalization is described in Abstract Modeling of Systems Communication in Constructive Cryptography using CryptHOL. diff --git a/thys/BTree/Array_SBlit.thy b/thys/BTree/Array_SBlit.thy --- a/thys/BTree/Array_SBlit.thy +++ b/thys/BTree/Array_SBlit.thy @@ -1,269 +1,305 @@ theory Array_SBlit imports "Separation_Logic_Imperative_HOL.Array_Blit" begin +(* Resolves TODO by Peter Lammich *) +(* OCaml handles the case of len=0 correctly (i.e.
+as specified by the Hoare Triple in Array_Blit +not generating an exception if si+len \ array length and such) *) +code_printing code_module "array_blit" \ (OCaml) + \ + let array_blit src si dst di len = ( + if src=dst then + raise (Invalid_argument "array_blit: Same arrays") + else + Array.blit src (Z.to_int si) dst (Z.to_int di) (Z.to_int len) + ) +\ + +code_printing constant blit' \ + (OCaml) "(fun () -> /array'_blit _ _ _ _ _)" + + +export_code blit checking OCaml + section "Same array Blit" text "The standard framework already provides a function to copy array elements." term blit thm blit_rule thm blit.simps (* Same array BLIT *) definition "sblit a s d l \ blit a s a d l" text "When copying values within arrays, blit only works for moving elements to the left." lemma sblit_rule[sep_heap_rules]: assumes LEN: "si+len \ length lsrc" and DST_SM: "di \ si" shows "< src \\<^sub>a lsrc > sblit src si di len <\_. src \\<^sub>a (take di lsrc @ take len (drop si lsrc) @ drop (di+len) lsrc) >" unfolding sblit_def using LEN DST_SM proof (induction len arbitrary: lsrc si di) case 0 thus ?case by sep_auto next case (Suc len) note [sep_heap_rules] = Suc.IH have [simp]: "\x. lsrc ! si # take len (drop (Suc si) lsrc) @ x = take (Suc len) (drop si lsrc) @ x" apply simp by (metis Suc.prems(1) add_Suc_right Cons_nth_drop_Suc less_Suc_eq_le add.commute not_less_eq take_Suc_Cons Nat.trans_le_add2) from Suc.prems show ?case by (sep_auto simp: take_update_last drop_upd_irrelevant) qed subsection "A reverse blit" text "The function rblit may be used to copy elements a defined offset to the right" (* Right BLIT or Reverse BLIT *) primrec rblit :: "_ array \ nat \ _ array \ nat \ nat \ unit Heap" where "rblit _ _ _ _ 0 = return ()" | "rblit src si dst di (Suc l) = do { x \ Array.nth src (si+l); Array.upd (di+l) x dst; rblit src si dst di l }" text "For separated arrays it is equivalent to normal blit. The proof follows similarly to the corresponding proof for blit." lemma rblit_rule[sep_heap_rules]: assumes LEN: "si+len \ length lsrc" "di+len \ length ldst" shows "< src \\<^sub>a lsrc * dst \\<^sub>a ldst > rblit src si dst di len <\_. src \\<^sub>a lsrc * dst \\<^sub>a (take di ldst @ take len (drop si lsrc) @ drop (di+len) ldst) >" using LEN proof (induction len arbitrary: ldst) case 0 thus ?case by sep_auto next case (Suc len) note [sep_heap_rules] = Suc.IH have [simp]: "drop (di + len) (ldst[di + len := lsrc ! (si + len)]) = lsrc ! (si + len) # drop (Suc (di + len)) ldst" by (metis Cons_nth_drop_Suc Suc.prems(2) Suc_le_eq add_Suc_right drop_upd_irrelevant length_list_update lessI nth_list_update_eq) have "take len (drop si lsrc) @ [lsrc ! (si + len)] = take (Suc len) (drop si lsrc)" proof - have "len < length (drop si lsrc)" using Suc.prems(1) by force then show "take len (drop si lsrc) @ [lsrc ! (si + len)] = take (Suc len) (drop si lsrc)" by (metis (no_types) Suc.prems(1) add_leD1 nth_drop take_Suc_conv_app_nth) qed then have [simp]: "\x. take len (drop si lsrc) @ lsrc ! (si + len) # x = take (Suc len) (drop si lsrc) @ x" by simp from Suc.prems show ?case by (sep_auto simp: take_update_last drop_upd_irrelevant) qed definition "srblit a s d l \ rblit a s a d l" text "However, within arrays we can now copy to the right." lemma srblit_rule[sep_heap_rules]: assumes LEN: "di+len \ length lsrc" and DST_GR: "di \ si" shows "< src \\<^sub>a lsrc > srblit src si di len <\_. 
src \\<^sub>a (take di lsrc @ take len (drop si lsrc) @ drop (di+len) lsrc) >" unfolding srblit_def using LEN DST_GR proof (induction len arbitrary: lsrc si di) case 0 thus ?case by sep_auto next case (Suc len) note [sep_heap_rules] = Suc.IH have[simp]: "take len (drop si (lsrc[di + len := lsrc ! (si + len)])) = take len (drop si lsrc)" - sledgehammer by (metis Suc.prems(2) ab_semigroup_add_class.add.commute add_le_cancel_right take_drop take_update_cancel) have [simp]: "drop (di + len) (lsrc[di + len := lsrc ! (si + len)]) = lsrc ! (si+len) # drop (Suc di + len) lsrc" by (metis Suc.prems(1) add_Suc_right add_Suc_shift add_less_cancel_left append_take_drop_id le_imp_less_Suc le_refl plus_1_eq_Suc same_append_eq take_update_cancel upd_conv_take_nth_drop) have "take len (drop si lsrc) @ [lsrc ! (si + len)] = take (Suc len) (drop si lsrc)" proof - have "len < length lsrc - si" using Suc.prems(1) Suc.prems(2) by linarith then show ?thesis by (metis (no_types) Suc.prems(1) Suc.prems(2) add_leD1 le_add_diff_inverse length_drop nth_drop take_Suc_conv_app_nth) qed then have [simp]: "\x. take len (drop si lsrc) @ lsrc ! (si + len) # x = take (Suc len) (drop si lsrc) @ x" by simp from Suc.prems show ?case by (sep_auto simp: take_update_last drop_upd_irrelevant) qed subsection "Modeling target language blit" text "For convenience, a function that is oblivious to the direction of the shift is defined." definition "safe_sblit a s d l \ if s > d then sblit a s d l else srblit a s d l " text "We obtain a heap rule similar to the one of blit, but for copying within one array." lemma safe_sblit_rule[sep_heap_rules]: assumes LEN: "len > 0 \ di+len \ length lsrc \ si+len \ length lsrc" shows "< src \\<^sub>a lsrc > safe_sblit src si di len <\_. src \\<^sub>a (take di lsrc @ take len (drop si lsrc) @ drop (di+len) lsrc) >" unfolding safe_sblit_def using LEN apply(cases len) apply(sep_auto simp add: sblit_def srblit_def)[] apply sep_auto done (* Compare this to blit_rule *) thm blit_rule thm safe_sblit_rule subsection "Code Generator Setup" text "Note that the requirement for correctness - is even weaker here than in SML. - We therefore manually handle the case where length is 0 (in which case nothing happens at all)." + is even weaker here than in SML/OCaml. + In particular, if the length of the slice to copy is equal to 0, + we will never throw an exception. + We therefore manually handle this case, where nothing happens at all." 
code_printing code_module "array_sblit" \ (SML) \ fun array_sblit src si di len = ( if len > 0 then ArraySlice.copy { di = IntInf.toInt di, src = ArraySlice.slice (src,IntInf.toInt si,SOME (IntInf.toInt len)), dst = src} else () ) \ +code_printing code_module "array_sblit" \ (OCaml) + \ + let array_sblit src si di len = ( + if len > Z.zero then + (Array.blit src (Z.to_int si) src (Z.to_int di) (Z.to_int len)) + else () + ) +\ + definition safe_sblit' where [code del]: "safe_sblit' src si di len = safe_sblit src (nat_of_integer si) (nat_of_integer di) (nat_of_integer len)" lemma [code]: "safe_sblit src si di len = safe_sblit' src (integer_of_nat si) (integer_of_nat di) (integer_of_nat len)" by (simp add: safe_sblit'_def) -(* TODO: Export to other languages: OCaml, Haskell *) +(* TODO: Export to other languages: Haskell *) code_printing constant safe_sblit' \ (SML) "(fn/ ()/ => /array'_sblit _ _ _ _)" and (Scala) "{ ('_: Unit)/=>/ def safescopy(src: Array['_], srci: Int, dsti: Int, len: Int) = { if (len > 0) System.arraycopy(src, srci, src, dsti, len) else () } safescopy(_.array,_.toInt,_.toInt,_.toInt) }" -export_code safe_sblit checking SML Scala +code_printing constant safe_sblit' \ + (OCaml) "(fun () -> /array'_sblit _ _ _ _)" + + +export_code safe_sblit checking SML Scala OCaml + + subsection "Derived operations" definition array_shr where "array_shr a i k \ do { l \ Array.len a; safe_sblit a i (i+k) (l-(i+k)) }" find_theorems "Array.len" lemma array_shr_rule[sep_heap_rules]: "< src \\<^sub>a lsrc > array_shr src i k <\_. src \\<^sub>a (take (i+k) lsrc @ take (length lsrc - (i+k)) (drop i lsrc)) >" unfolding array_shr_def by sep_auto lemma array_shr_rule_alt: "< src \\<^sub>a lsrc > array_shr src i k <\_. src \\<^sub>a (take (length lsrc) (take (i+k) lsrc @ (drop i lsrc))) >" by (sep_auto simp add: min_def) definition array_shl where "array_shl a i k \ do { l \ Array.len a; safe_sblit a i (i-k) (l-i) } " lemma array_shl_rule[sep_heap_rules]: " < src \\<^sub>a lsrc > array_shl src i k <\_. src \\<^sub>a (take (i-k) lsrc @ (drop i lsrc) @ drop (i - k + (length lsrc - i)) lsrc) >" unfolding array_shl_def by sep_auto lemma array_shl_rule_alt: " \i \ length lsrc; k \ i\ \ < src \\<^sub>a lsrc > array_shl src i k <\_. src \\<^sub>a (take (i-k) lsrc @ (drop i lsrc) @ drop (length lsrc - k) lsrc) >" by sep_auto end \ No newline at end of file diff --git a/thys/BTree/BTree_ImpSet.thy b/thys/BTree/BTree_ImpSet.thy --- a/thys/BTree/BTree_ImpSet.thy +++ b/thys/BTree/BTree_ImpSet.thy @@ -1,902 +1,1701 @@ theory BTree_ImpSet imports BTree_Imp BTree_Set begin section "Imperative Set operations" subsection "Auxiliary operations" definition "split_relation xs \ \(as,bs) i. i \ length xs \ as = take i xs \ bs = drop i xs" lemma split_relation_alt: "split_relation as (ls,rs) i = (as = ls@rs \ i = length ls)" by (auto simp add: split_relation_def) lemma split_relation_length: "split_relation xs (ls,rs) (length xs) = (ls = xs \ rs = [])" by (simp add: split_relation_def) (* auxiliary lemmas on assns *) (* simp? 
not sure if it always makes things more easy *) lemma list_assn_prod_map: "list_assn (A \\<^sub>a B) xs ys = list_assn B (map snd xs) (map snd ys) * list_assn A (map fst xs) (map fst ys)" apply(induct "(A \\<^sub>a B)" xs ys rule: list_assn.induct) apply(auto simp add: ab_semigroup_mult_class.mult.left_commute ent_star_mono star_aci(2) star_assoc) done (* concrete *) lemma id_assn_list: "h \ list_assn id_assn (xs::'a list) ys \ xs = ys" apply(induction "id_assn::('a \ 'a \ assn)" xs ys rule: list_assn.induct) apply(auto simp add: less_Suc_eq_0_disj pure_def) done lemma snd_map_help: "x \ length tsi \ (\j snd (tsi!x) = ((map snd tsi)!x)" by auto lemma split_ismeq: "((a::nat) \ b \ X) = ((a < b \ X) \ (a = b \ X))" by auto lemma split_relation_map: "split_relation as (ls,rs) i \ split_relation (map f as) (map f ls, map f rs) i" apply(induction as arbitrary: ls rs i) apply(auto simp add: split_relation_def take_map drop_Cons') apply(metis list.simps(9) take_map) done lemma split_relation_access: "\split_relation as (ls,rs) i; rs = r#rrs\ \ as!i = r" by (simp add: split_relation_alt) lemma index_to_elem_all: "(\jx \ set xs. P x)" by (simp add: all_set_conv_nth) lemma index_to_elem: "n < length xs \ (\jx \ set (take n xs). P x)" by (simp add: all_set_conv_nth) (* ----------------- *) definition split_half :: "('a::heap \ 'b::{heap}) pfarray \ nat Heap" where "split_half a \ do { l \ pfa_length a; return (l div 2) }" lemma split_half_rule[sep_heap_rules]: "< is_pfa c tsi a * list_assn R ts tsi> split_half a <\i. is_pfa c tsi a * list_assn R ts tsi * \(i = length ts div 2 \ split_relation ts (BTree_Set.split_half ts) i)>" unfolding split_half_def split_relation_def apply(rule hoare_triple_preI) apply(sep_auto dest!: list_assn_len mod_starD) done subsection "The imperative split locale" text "This locale extends the abstract split locale, assuming that we are provided with an imperative program that refines the abstract split function." locale imp_split = abs_split: BTree_Set.split split for split:: "('a btree \ 'a::{heap,default,linorder}) list \ 'a \ ('a btree \ 'a) list \ ('a btree \ 'a) list" + fixes imp_split:: "('a btnode ref option \ 'a::{heap,default,linorder}) pfarray \ 'a \ nat Heap" assumes imp_split_rule [sep_heap_rules]:"sorted_less (separators ts) \ imp_split (a,n) p <\i. 
is_pfa c tsi (a,n) * blist_assn k ts tsi * \(split_relation ts (split ts p) i)>\<^sub>t" begin subsection "Membership" partial_function (heap) isin :: "'a btnode ref option \ 'a \ bool Heap" where "isin p x = (case p of None \ return False | (Some a) \ do { node \ !a; i \ imp_split (kvs node) x; tsl \ pfa_length (kvs node); if i < tsl then do { s \ pfa_get (kvs node) i; let (sub,sep) = s in if x = sep then return True else isin sub x } else isin (last node) x } )" subsection "Insertion" datatype 'b btupi = T\<^sub>i "'b btnode ref option" | Up\<^sub>i "'b btnode ref option" "'b" "'b btnode ref option" fun btupi_assn where "btupi_assn k (abs_split.T\<^sub>i l) (T\<^sub>i li) = btree_assn k l li" | "btupi_assn k (abs_split.Up\<^sub>i l a r) (Up\<^sub>i li ai ri) = btree_assn k l li * id_assn a ai * btree_assn k r ri" | "btupi_assn _ _ _ = false" definition node\<^sub>i :: "nat \ ('a btnode ref option \ 'a) pfarray \ 'a btnode ref option \ 'a btupi Heap" where "node\<^sub>i k a ti \ do { n \ pfa_length a; if n \ 2*k then do { a' \ pfa_shrink_cap (2*k) a; l \ ref (Btnode a' ti); return (T\<^sub>i (Some l)) } else do { b \ (pfa_empty (2*k) :: ('a btnode ref option \ 'a) pfarray Heap); i \ split_half a; m \ pfa_get a i; b' \ pfa_drop a (i+1) b; a' \ pfa_shrink i a; a'' \ pfa_shrink_cap (2*k) a'; let (sub,sep) = m in do { l \ ref (Btnode a'' sub); r \ ref (Btnode b' ti); return (Up\<^sub>i (Some l) sep (Some r)) } } }" partial_function (heap) ins :: "nat \ 'a \ 'a btnode ref option \ 'a btupi Heap" where "ins k x apo = (case apo of None \ return (Up\<^sub>i None x None) | (Some ap) \ do { a \ !ap; i \ imp_split (kvs a) x; tsl \ pfa_length (kvs a); if i < tsl then do { s \ pfa_get (kvs a) i; let (sub,sep) = s in if sep = x then return (T\<^sub>i apo) else do { r \ ins k x sub; case r of (T\<^sub>i lp) \ do { pfa_set (kvs a) i (lp,sep); return (T\<^sub>i apo) } | (Up\<^sub>i lp x' rp) \ do { pfa_set (kvs a) i (rp,sep); if tsl < 2*k then do { kvs' \ pfa_insert (kvs a) i (lp,x'); ap := (Btnode kvs' (last a)); return (T\<^sub>i apo) } else do { kvs' \ pfa_insert_grow (kvs a) i (lp,x'); node\<^sub>i k kvs' (last a) } } } } else do { r \ ins k x (last a); case r of (T\<^sub>i lp) \ do { ap := (Btnode (kvs a) lp); return (T\<^sub>i apo) } | (Up\<^sub>i lp x' rp) \ if tsl < 2*k then do { kvs' \ pfa_append (kvs a) (lp,x'); ap := (Btnode kvs' rp); return (T\<^sub>i apo) } else do { kvs' \ pfa_append_grow' (kvs a) (lp,x'); node\<^sub>i k kvs' rp } } } )" (*fun tree\<^sub>i::"'a up\<^sub>i \ 'a btree" where "tree\<^sub>i (T\<^sub>i sub) = sub" | "tree\<^sub>i (Up\<^sub>i l a r) = (Node [(l,a)] r)" fun insert::"nat \ 'a \ 'a btree \ 'a btree" where "insert k x t = tree\<^sub>i (ins k x t)" *) definition insert :: "nat \ ('a::{heap,default,linorder}) \ 'a btnode ref option \ 'a btnode ref option Heap" where "insert \ \k x ti. do { ti' \ ins k x ti; case ti' of T\<^sub>i sub \ return sub | Up\<^sub>i l a r \ do { kvs \ pfa_init (2*k) (l,a) 1; t' \ ref (Btnode kvs r); return (Some t') } }" subsection "Deletion" text "Note that the below operations have not been verified to refine the abstract set operations." (* rebalance middle tree gets a list of trees, an index pointing to the position of sub/sep and a last tree *) definition rebalance_middle_tree:: "nat \ (('a::{default,heap,linorder}) btnode ref option \ 'a) pfarray \ nat \ 'a btnode ref option \ 'a btnode Heap" where "rebalance_middle_tree \ \ k tsi i r_ti. 
( case r_ti of None \ do { - return (Btnode tsi r_ti) + (r_sub,sep) \ pfa_get tsi i; + case r_sub of None \ return (Btnode tsi r_ti) } | Some p_t \ do { - ti \ !p_t; (r_sub,sep) \ pfa_get tsi i; case r_sub of (Some p_sub) \ do { + ti \ !p_t; sub \ !p_sub; l_sub \ pfa_length (kvs sub); l_tts \ pfa_length (kvs ti); if l_sub \ k \ l_tts \ k then do { return (Btnode tsi r_ti) } else do { l_tsi \ pfa_length tsi; - if l_tsi = i then do { + if i+1 = l_tsi then do { mts' \ pfa_append_extend_grow (kvs sub) (last sub,sep) (kvs ti); res_node\<^sub>i \ node\<^sub>i k mts' (last ti); case res_node\<^sub>i of - T\<^sub>i u \ return (Btnode tsi u) | + T\<^sub>i u \ do { + tsi' \ pfa_shrink i tsi; + return (Btnode tsi' u) + } | Up\<^sub>i l a r \ do { - tsi' \ pfa_append tsi (l,a); + tsi' \ pfa_set tsi i (l,a); return (Btnode tsi' r) } } else do { (r_rsub,rsep) \ pfa_get tsi (i+1); case r_rsub of Some p_rsub \ do { rsub \ !p_rsub; mts' \ pfa_append_extend_grow (kvs sub) (last sub,sep) (kvs rsub); res_node\<^sub>i \ node\<^sub>i k mts' (last rsub); case res_node\<^sub>i of T\<^sub>i u \ do { - tsi' \ pfa_set tsi (i+1) (u,rsep); - tsi'' \ pfa_delete tsi' i; + tsi' \ pfa_set tsi i (u,rsep); + tsi'' \ pfa_delete tsi' (i+1); return (Btnode tsi'' r_ti) } | Up\<^sub>i l a r \ do { tsi' \ pfa_set tsi i (l,a); tsi'' \ pfa_set tsi' (i+1) (r,rsep); return (Btnode tsi'' r_ti) } } } } } }) " definition rebalance_last_tree:: "nat \ (('a::{default,heap,linorder}) btnode ref option \ 'a) pfarray \ 'a btnode ref option \ 'a btnode Heap" where "rebalance_last_tree \ \k tsi ti. do { l_tsi \ pfa_length tsi; rebalance_middle_tree k tsi (l_tsi-1) ti }" -partial_function (heap) split_max ::"nat \ ('a::{default,heap,linorder}) btnode ref option \ ('a btnode ref option \ 'a) Heap" - where - "split_max k r_t = (case r_t of Some p_t \ do { - t \ !p_t; - (case t of Btnode tsi r_ti \ - case r_ti of None \ do { - (sub,sep) \ pfa_last tsi; - tsi' \ pfa_butlast tsi; - p_t := Btnode tsi' sub; - return (Some p_t, sep) - } | - _ \ do { - (sub,sep) \ split_max k r_ti; - p_t' \ rebalance_last_tree k tsi sub; - p_t := p_t'; - return (Some p_t, sep) - }) -}) -" - -partial_function (heap) del ::"nat \ 'a \ ('a::{default,heap,linorder}) btnode ref option \ 'a btnode ref option Heap" - where - "del k x ti = (case ti of None \ return None | - Some p \ do { - node \ !p; - i \ imp_split (kvs node) x; - tsl \ pfa_length (kvs node); - if i < tsl then do { - s \ pfa_get (kvs node) i; - let (sub,sep) = s in - if x \ sep then do { - sub' \ del k x sub; - kvs' \ pfa_set (kvs node) i (sub',sep); - node' \ rebalance_middle_tree k kvs' i (last node); - ti' \ ref node'; - return (Some ti') - } - else if sub = None then do{ - pfa_delete (kvs node) i; - return ti - } - else do { - sm \ split_max k sub; - kvs' \ pfa_set (kvs node) i sm; - node' \ rebalance_middle_tree k kvs' i (last node); - ti' \ ref node'; - return (Some ti') - } - } else do { - t' \ del k x (last node); - node' \ rebalance_last_tree k (kvs node) t'; - ti' \ ref node'; - return (Some ti') - } -}) -" - -partial_function (heap) reduce_root ::"('a::{default,heap,linorder}) btnode ref option \ 'a btnode ref option Heap" - where - "reduce_root ti = (case ti of - None \ return None | - Some p_t \ do { - node \ !p_t; - tsl \ pfa_length (kvs node); - case tsl of 0 \ return (last node) | - _ \ return ti -})" - -partial_function (heap) delete ::"nat \ 'a \ ('a::{default,heap,linorder}) btnode ref option \ 'a btnode ref option Heap" - where - "delete k x ti = do { - ti' \ del k x ti; - reduce_root ti' 
-}" subsection "Refinement of the abstract B-tree operations" definition empty ::"('a::{default,heap,linorder}) btnode ref option Heap" where "empty = return None" lemma P_imp_Q_implies_P: "P \ (Q \ P)" by simp lemma "sorted_less (inorder t) \ isin ti x <\r. btree_assn k t ti * \(abs_split.isin t x = r)>\<^sub>t" proof(induction t x arbitrary: ti rule: abs_split.isin.induct) case (1 x) then show ?case apply(subst isin.simps) apply (cases ti) apply (auto simp add: return_cons_rule) done next case (2 ts t x) then obtain ls rs where list_split[simp]: "split ts x = (ls,rs)" by (cases "split ts x") then show ?case proof (cases rs) (* NOTE: induction condition trivial here *) case [simp]: Nil show ?thesis apply(subst isin.simps) apply(sep_auto) using "2.prems" sorted_inorder_separators apply blast apply(auto simp add: split_relation_def dest!: sym[of "[]"] mod_starD list_assn_len)[] apply(rule hoare_triple_preI) apply(auto simp add: split_relation_def dest!: sym[of "[]"] mod_starD list_assn_len)[] using 2(3) apply(sep_auto heap: "2.IH"(1)[of ls "[]"] simp add: sorted_wrt_append) done next case [simp]: (Cons h rrs) obtain sub sep where h_split[simp]: "h = (sub,sep)" by (cases h) show ?thesis proof (cases "sep = x") (* NOTE: no induction required here, only vacuous counter cases generated *) case [simp]: True then show ?thesis apply(simp split: list.splits prod.splits) apply(subst isin.simps) using "2.prems" sorted_inorder_separators apply(sep_auto) apply(rule hoare_triple_preI) apply(auto simp add: split_relation_alt list_assn_append_Cons_left dest!: mod_starD list_assn_len)[] apply(rule hoare_triple_preI) apply(auto simp add: split_relation_def dest!: sym[of "[]"] mod_starD list_assn_len)[] done next case [simp]: False show ?thesis apply(simp split: list.splits prod.splits) apply safe using False apply simp apply(subst isin.simps) using "2.prems" sorted_inorder_separators apply(sep_auto) (*eliminate vacuous case*) apply(auto simp add: split_relation_alt list_assn_append_Cons_left dest!: mod_starD list_assn_len)[] (* simplify towards induction step *) apply(auto simp add: split_relation_alt list_assn_append_Cons_left dest!: mod_starD list_assn_len)[] (* NOTE show that z = (suba, sepa) *) apply(rule norm_pre_ex_rule)+ apply(rule hoare_triple_preI) subgoal for p tsi n ti xsi suba sepa zs1 z zs2 _ apply(subgoal_tac "z = (suba, sepa)", simp) using 2(3) apply(sep_auto heap:"2.IH"(2)[of ls rs h rrs sub sep] simp add: sorted_wrt_append) using list_split Cons h_split apply simp_all (* proof that previous assumptions hold later *) apply(rule P_imp_Q_implies_P) apply(rule ent_ex_postI[where x="(tsi,n)"]) apply(rule ent_ex_postI[where x="ti"]) apply(rule ent_ex_postI[where x="(zs1 @ (suba, sepa) # zs2)"]) apply(rule ent_ex_postI[where x="zs1"]) apply(rule ent_ex_postI[where x="z"]) apply(rule ent_ex_postI[where x="zs2"]) apply sep_auto (* prove subgoal_tac assumption *) apply (metis (no_types, lifting) list_assn_aux_ineq_len list_assn_len nth_append_length star_false_left star_false_right) done (* eliminate last vacuous case *) apply(rule hoare_triple_preI) apply(auto simp add: split_relation_def dest!: mod_starD list_assn_len)[] done qed qed qed declare abs_split.node\<^sub>i.simps [simp add] lemma node\<^sub>i_rule: assumes c_cap: "2*k \ c" "c \ 4*k+1" shows "\<^sub>a id_assn) ts tsi * btree_assn k t ti> node\<^sub>i k (a,n) ti <\r. 
btupi_assn k (abs_split.node\<^sub>i k ts t) r >\<^sub>t" proof (cases "length ts \ 2*k") case [simp]: True then show ?thesis apply(subst node\<^sub>i_def) apply(rule hoare_triple_preI) apply(sep_auto dest!: mod_starD list_assn_len) apply(sep_auto simp add: is_pfa_def)[] using c_cap apply(sep_auto simp add: is_pfa_def)[] apply(sep_auto dest!: mod_starD list_assn_len)[] using True apply(sep_auto dest!: mod_starD list_assn_len) done next case [simp]: False then obtain ls sub sep rs where split_half_eq: "BTree_Set.split_half ts = (ls,(sub,sep)#rs)" using abs_split.node\<^sub>i_cases by blast then show ?thesis apply(subst node\<^sub>i_def) apply(rule hoare_triple_preI) apply(sep_auto dest!: mod_starD list_assn_len) apply(sep_auto simp add: split_relation_alt split_relation_length is_pfa_def dest!: mod_starD list_assn_len) using False apply(sep_auto simp add: split_relation_alt ) using False apply(sep_auto simp add: is_pfa_def)[] apply(sep_auto)[] apply(sep_auto simp add: is_pfa_def split_relation_alt)[] using c_cap apply(sep_auto simp add: is_pfa_def)[] apply(sep_auto)[] using c_cap apply(sep_auto simp add: is_pfa_def)[] using c_cap apply(simp) apply(vcg) apply(simp) apply(rule impI) subgoal for _ _ _ _ rsa subi ba rn lsi al ar _ thm ent_ex_postI thm ent_ex_postI[where x="take (length tsi div 2) tsi"] (* instantiate right hand side *) apply(rule ent_ex_postI[where x="(rsa,rn)"]) apply(rule ent_ex_postI[where x="ti"]) apply(rule ent_ex_postI[where x="(drop (Suc (length tsi div 2)) tsi)"]) apply(rule ent_ex_postI[where x="lsi"]) apply(rule ent_ex_postI[where x="subi"]) apply(rule ent_ex_postI[where x="take (length tsi div 2) tsi"]) (* introduce equality between equality of split tsi/ts and original lists *) apply(simp add: split_relation_alt) apply(subgoal_tac "tsi = take (length tsi div 2) tsi @ (subi, ba) # drop (Suc (length tsi div 2)) tsi") apply(rule back_subst[where a="blist_assn k ts (take (length tsi div 2) tsi @ (subi, ba) # (drop (Suc (length tsi div 2)) tsi))" and b="blist_assn k ts tsi"]) apply(rule back_subst[where a="blist_assn k (take (length tsi div 2) ts @ (sub, sep) # rs)" and b="blist_assn k ts"]) apply(subst list_assn_aux_append_Cons) apply sep_auto apply sep_auto apply simp apply simp apply(rule back_subst[where a="tsi ! (length tsi div 2)" and b="(subi, ba)"]) apply(rule id_take_nth_drop) apply simp apply simp done done qed declare abs_split.node\<^sub>i.simps [simp del] lemma node\<^sub>i_no_split: "length ts \ 2*k \ abs_split.node\<^sub>i k ts t = abs_split.T\<^sub>i (Node ts t)" by (simp add: abs_split.node\<^sub>i.simps) lemma node\<^sub>i_rule_app: "\2*k \ c; c \ 4*k+1\ \ node\<^sub>i k (aa, al) ri i k (ls @ [(l, a)]) r)>\<^sub>t" proof - note node\<^sub>i_rule[of k c "(tsi' @ [(li, ai)])" aa al "(ls @ [(l, a)])" r ri] moreover assume "2*k \ c" "c \ 4*k+1" ultimately show ?thesis by (simp add: mult.left_assoc) qed lemma node\<^sub>i_rule_ins2: "\2*k \ c; c \ 4*k+1; length ls = length lsi\ \ node\<^sub>i k (aa, al) ti i k (ls @ (l, a) # (r,a') # rs) t)>\<^sub>t" proof - assume [simp]: "2*k \ c" "c \ 4*k+1" "length ls = length lsi" moreover note node\<^sub>i_rule[of k c "(lsi @ (li, ai) # (ri,a'i) # rsi)" aa al "(ls @ (l, a) # (r,a') # rs)" t ti] ultimately show ?thesis by (simp add: mult.left_assoc list_assn_aux_append_Cons) qed lemma ins_rule: "sorted_less (inorder t) \ ins k x ti <\r. 
btupi_assn k (abs_split.ins k x t) r>\<^sub>t" proof (induction k x t arbitrary: ti rule: abs_split.ins.induct) case (1 k x) then show ?case apply(subst ins.simps) apply (sep_auto simp add: pure_app_eq) done next case (2 k x ts t) obtain ls rrs where list_split: "split ts x = (ls,rrs)" by (cases "split ts x") have [simp]: "sorted_less (separators ts)" using "2.prems" sorted_inorder_separators by simp have [simp]: "sorted_less (inorder t)" using "2.prems" sorted_inorder_induct_last by simp show ?case proof (cases rrs) case Nil then show ?thesis proof (cases "abs_split.ins k x t") case (T\<^sub>i a) then show ?thesis apply(subst ins.simps) apply(sep_auto) subgoal for p tsil tsin tti using Nil list_split by (simp add: list_assn_aux_ineq_len split_relation_alt) subgoal for p tsil tsin tti tsi' i tsin' _ sub sep apply(rule hoare_triple_preI) using Nil list_split by (simp add: list_assn_aux_ineq_len split_relation_alt) subgoal for p tsil tsin tti tsi' thm "2.IH"(1)[of ls rrs tti] using Nil list_split T\<^sub>i apply(sep_auto split!: list.splits simp add: split_relation_alt heap add: "2.IH"(1)[of ls rrs tti]) subgoal for ai apply(cases ai) apply sep_auto apply sep_auto done done done next case (Up\<^sub>i l a r) then show ?thesis apply(subst ins.simps) apply(sep_auto) subgoal for p tsil tsin tti using Nil list_split by (simp add: list_assn_aux_ineq_len split_relation_alt) subgoal for p tsil tsin tti tsi' i tsin' _ sub sep using Nil list_split by (simp add: list_assn_aux_ineq_len split_relation_alt) subgoal for p tsil tsin tti tsi' i tsin' thm "2.IH"(1)[of ls rrs tti] using Nil list_split Up\<^sub>i apply(sep_auto split!: list.splits simp add: split_relation_alt heap add: "2.IH"(1)[of ls rrs tti]) subgoal for ai apply(cases ai) apply sep_auto apply(rule hoare_triple_preI) apply(sep_auto) apply(auto dest!: mod_starD simp add: is_pfa_def)[] apply (sep_auto) subgoal for li ai ri (* no split case *) apply(subgoal_tac "length (ls @ [(l,a)]) \ 2*k") apply(simp add: node\<^sub>i_no_split) apply(rule ent_ex_postI[where x="(tsil,Suc tsin)"]) apply(rule ent_ex_postI[where x="ri"]) apply(rule ent_ex_postI[where x="tsi' @ [(li, ai)]"]) apply(sep_auto) apply (sep_auto dest!: mod_starD list_assn_len simp add: is_pfa_def) done (* split case*) apply(sep_auto heap add: node\<^sub>i_rule_app) done done done qed next case (Cons a rs) obtain sub sep where a_split: "a = (sub,sep)" by (cases a) then have [simp]: "sorted_less (inorder sub)" using "2.prems" abs_split.split_axioms list_split Cons sorted_inorder_induct_subtree split_def by fastforce then show ?thesis proof(cases "x = sep") case True show ?thesis apply(subst ins.simps) apply(sep_auto) subgoal for p tsil tsin tti tsi j subi using Cons list_split a_split True by sep_auto subgoal for p tsil tsin tti tsi j _ _ subi sepi apply(rule hoare_triple_preI) using Cons list_split a_split True apply(subgoal_tac "sepi = sep") apply (sep_auto simp add: split_relation_alt) apply(sep_auto simp add: list_assn_prod_map dest!: mod_starD id_assn_list) by (metis length_map snd_conv snd_map_help(2) split_relation_access) subgoal for p tsil tsin tti tsi j apply(rule hoare_triple_preI) using Cons list_split by (sep_auto simp add: split_relation_alt dest!: mod_starD list_assn_len) done next case False then show ?thesis proof (cases "abs_split.ins k x sub") case (T\<^sub>i a') then show ?thesis apply(auto simp add: Cons list_split a_split False) using False apply simp apply(subst ins.simps) apply vcg apply auto apply(rule norm_pre_ex_rule)+ (* at this point, we want to introduce the 
split, and after that tease the hoare triple assumptions out of the bracket, s.t. we don't split twice *) apply vcg apply sep_auto using list_split Cons apply(simp add: split_relation_alt list_assn_append_Cons_left) apply (rule impI) apply(rule norm_pre_ex_rule)+ apply(rule hoare_triple_preI) apply sep_auto (* discard wrong branch *) subgoal for p tsil tsin ti zs1 subi sepi zs2 _ _ suba apply(subgoal_tac "sepi = x") using list_split Cons a_split apply(auto dest!: mod_starD )[] apply(auto dest!: mod_starD list_assn_len)[] done (* actual induction branch *) subgoal for p tsil tsin ti zs1 subi sepi zs2 _ _ n z suba sepa apply (cases a, simp) apply(subgoal_tac "subi = suba", simp) using list_split a_split T\<^sub>i False apply (vcg heap: 2) apply(auto split!: btupi.splits) (* careful progression for manual value insertion *) apply vcg apply simp apply vcg apply simp subgoal for a'i q r apply(rule impI) apply(simp add: list_assn_append_Cons_left) apply(rule ent_ex_postI[where x="(tsil,tsin)"]) apply(rule ent_ex_postI[where x="ti"]) apply(rule ent_ex_postI[where x="(zs1 @ (a'i, sepi) # zs2)"]) apply(rule ent_ex_postI[where x="zs1"]) apply(rule ent_ex_postI[where x="(a'i,sep)"]) apply(rule ent_ex_postI[where x="zs2"]) apply sep_auto apply (simp add: pure_app_eq) apply(sep_auto dest!: mod_starD list_assn_len)[] done apply (metis list_assn_aux_ineq_len Pair_inject list_assn_len nth_append_length star_false_left star_false_right) done subgoal for p tsil tsin ti zs1 subi sepi zs2 _ _ suba apply(auto dest!: mod_starD list_assn_len)[] done done next case (Up\<^sub>i l w r) then show ?thesis apply(auto simp add: Cons list_split a_split False) using False apply simp apply(subst ins.simps) apply vcg apply auto apply(rule norm_pre_ex_rule)+ (* at this point, we want to introduce the split, and after that tease the hoare triple assumptions out of the bracket, s.t. 
we don't split twice *) apply vcg apply sep_auto using list_split Cons apply(simp add: split_relation_alt list_assn_append_Cons_left) apply (rule impI) apply(rule norm_pre_ex_rule)+ apply(rule hoare_triple_preI) apply sep_auto (* discard wrong branch *) subgoal for p tsil tsin ti zs1 subi sepi zs2 _ _ suba apply(subgoal_tac "sepi = x") using list_split Cons a_split apply(auto dest!: mod_starD )[] apply(auto dest!: mod_starD list_assn_len)[] done (* actual induction branch *) subgoal for p tsil tsin ti zs1 subi sepi zs2 _ _ n z suba sepa apply(subgoal_tac "subi = suba", simp) thm 2(2)[of ls rrs a rs sub sep] using list_split a_split Cons Up\<^sub>i False apply (sep_auto heap: 2(2)) apply(auto split!: btupi.splits) (* careful progression for manual value insertion *) apply vcg apply simp subgoal for li wi ri u (* no split case *) apply (cases u,simp) apply (sep_auto dest!: mod_starD list_assn_len heap: pfa_insert_grow_rule) apply (simp add: is_pfa_def)[] apply (metis le_less_linear length_append length_take less_not_refl min.absorb2 trans_less_add1) apply(simp add: is_pfa_def) apply (metis add_Suc_right length_Cons length_append length_take min.absorb2) apply(sep_auto split: prod.splits dest!: mod_starD list_assn_len)[] (* no split case *) apply(subgoal_tac "length (ls @ [(l,w)]) \ 2*k") apply(simp add: node\<^sub>i_no_split) apply(rule ent_ex_postI[where x="(tsil,Suc tsin)"]) apply(rule ent_ex_postI[where x="ti"]) apply(rule ent_ex_postI[where x="(zs1 @ (li, wi) # (ri, sep) # zs2)"]) apply(sep_auto dest!: mod_starD list_assn_len) apply (sep_auto dest!: mod_starD list_assn_len simp add: is_pfa_def) done apply vcg apply simp subgoal for x21 x22 x23 u (* split case *) apply (cases u,simp) thm pfa_insert_grow_rule[where ?l="((zs1 @ (suba, sepi) # zs2)[length ls := (x23, sepa)])"] apply (sep_auto dest!: mod_starD list_assn_len heap: pfa_insert_grow_rule) apply (simp add: is_pfa_def)[] apply (metis le_less_linear length_append length_take less_not_refl min.absorb2 trans_less_add1) apply(auto split: prod.splits dest!: mod_starD list_assn_len)[] apply (vcg heap: node\<^sub>i_rule_ins2) apply simp apply simp apply simp apply sep_auto done apply(auto dest!: mod_starD list_assn_len)[] done subgoal for p tsil tsin ti zs1 subi sepi zs2 _ _ suba apply(auto dest!: mod_starD list_assn_len)[] done done qed qed qed qed text "The imperative insert refines the abstract insert." lemma insert_rule: assumes "k > 0" "sorted_less (inorder t)" shows " insert k x ti <\r. btree_assn k (abs_split.insert k x t) r>\<^sub>t" unfolding insert_def apply(cases "abs_split.ins k x t") apply(sep_auto split!: btupi.splits heap: ins_rule[OF assms(2)]) using assms apply(vcg heap: ins_rule[OF assms(2)]) apply(simp split!: btupi.splits) apply(vcg) apply auto[] apply vcg apply auto[] subgoal for l a r li ai ri tsa tsn ti apply(rule ent_ex_postI[where x="(tsa,tsn)"]) apply(rule ent_ex_postI[where x="ri"]) apply(rule ent_ex_postI[where x="[(li, ai)]"]) apply sep_auto done done text "The \"pure\" resulting rule follows automatically." lemma insert_rule': shows "(abs_split.invar_inorder (Suc k) t \ sorted_less (inorder t))> insert (Suc k) x ti <\ri.\\<^sub>Ar. 
btree_assn (Suc k) r ri * \(abs_split.invar_inorder (Suc k) r \ sorted_less (inorder r) \ inorder r = (ins_list x (inorder t)))>\<^sub>t" using abs_split.insert_bal abs_split.insert_order abs_split.insert_inorder by (sep_auto heap: insert_rule simp add: sorted_ins_list) +lemma list_update_length2 [simp]: + "(xs @ x # y # ys)[Suc (length xs) := z] = (xs @ x # z # ys)" + by (induct xs, auto) + lemma node\<^sub>i_rule_ins: "\2*k \ c; c \ 4*k+1; length ls = length lsi\ \ node\<^sub>i k (aa, al) - ti i k (ls @ (l, a) # rs) t)>\<^sub>t" + btree_assn k t ti> + node\<^sub>i k (aa, al) ti + i k (ls @ (l, a) # rs) t)>\<^sub>t" proof - assume [simp]: "2*k \ c" "c \ 4*k+1" "length ls = length lsi" moreover note node\<^sub>i_rule[of k c "(lsi @ (li, ai) # rsi)" aa al "(ls @ (l, a) # rs)" t ti] ultimately show ?thesis by (simp add: mult.left_assoc list_assn_aux_append_Cons) qed +lemma btupi_assn_T: "h \ btupi_assn k (abs_split.node\<^sub>i k ts t) (T\<^sub>i x) \ abs_split.node\<^sub>i k ts t = abs_split.T\<^sub>i (Node ts t)" + apply(auto simp add: abs_split.node\<^sub>i.simps dest!: mod_starD split!: list.splits) + done + +lemma btupi_assn_Up: "h \ btupi_assn k (abs_split.node\<^sub>i k ts t) (Up\<^sub>i l a r) \ + abs_split.node\<^sub>i k ts t = ( + case BTree_Set.split_half ts of (ls, (sub,sep)#rs) \ + abs_split.Up\<^sub>i (Node ls sub) sep (Node rs t))" + apply(auto simp add: abs_split.node\<^sub>i.simps dest!: mod_starD split!: list.splits) + done + +lemma second_last_access:"(xs@a#b#ys) ! Suc(length xs) = b" + by (simp add: nth_via_drop) + +lemma pfa_assn_len:"h \ is_pfa k ls (a,n) \ n \ k \ length ls = n" + by (auto simp add: is_pfa_def) + +(*declare "impCE"[rule del]*) +lemma rebalance_middle_tree_rule: + assumes "height t = height sub" + and "case rs of (rsub,rsep) # list \ height rsub = height t | [] \ True" + and "i = length ls" + shows " + rebalance_middle_tree k (a,n) i ti + <\r. 
btnode_assn k (abs_split.rebalance_middle_tree k ls sub sep rs t) r >\<^sub>t" + apply(simp add: list_assn_append_Cons_left) + apply(rule norm_pre_ex_rule)+ +proof(goal_cases) + case (1 lsi z rsi) + then show ?case + proof(cases z) + case z_split: (Pair subi sepi) + then show ?thesis + proof(cases sub) + case sub_leaf[simp]: Leaf + then have t_leaf[simp]: "t = Leaf" using assms + by (cases t) auto + show ?thesis + apply (subst rebalance_middle_tree_def) + apply (rule hoare_triple_preI) + apply (vcg) + using assms apply (auto dest!: mod_starD list_assn_len split!: option.splits) + apply (vcg) + apply (auto dest!: mod_starD list_assn_len split!: option.splits) + apply (rule ent_ex_postI[where x=tsi]) + apply sep_auto + done + next + case sub_node[simp]: (Node mts mt) + then obtain tts tt where t_node[simp]: "t = Node tts tt" using assms + by (cases t) auto + then show ?thesis + proof(cases "length mts \ k \ length tts \ k") + case True + then show ?thesis + apply(subst rebalance_middle_tree_def) + apply(rule hoare_triple_preI) + apply(sep_auto dest!: mod_starD) + using assms apply (auto dest!: list_assn_len)[] + + using assms apply(sep_auto split!: prod.splits) + using assms apply (auto simp del: height_btree.simps dest!: mod_starD list_assn_len)[] + using z_split apply(auto)[] + subgoal for _ _ _ _ _ _ _ _ tp tsia' tsin' _ _ _ _ _ _ _ _ _ _ tsia tsin tti ttsi sepi subp + apply(auto dest!: mod_starD list_assn_len simp: prod_assn_def)[] + apply(vcg) + apply(auto)[] + apply(rule ent_ex_postI[where x="lsi@(Some subp, sepi)#rsi"]) + apply(rule ent_ex_postI[where x="(tsia, tsin)"]) + apply(rule ent_ex_postI[where x="tti"]) + apply(rule ent_ex_postI[where x=ttsi]) + apply(sep_auto)[] + apply(rule hoare_triple_preI) + using True apply(auto dest!: mod_starD list_assn_len) + done + done + next + case False + then show ?thesis + proof(cases rs) + case Nil + then show ?thesis + apply(subst rebalance_middle_tree_def) + apply(rule hoare_triple_preI) + apply(sep_auto dest!: mod_starD) + using assms apply (auto dest!: list_assn_len)[] + + apply(sep_auto split!: prod.splits) + using assms apply (auto simp del: height_btree.simps dest!: mod_starD list_assn_len)[] + using z_split apply(auto)[] + subgoal for _ _ _ _ _ _ _ _ tp tsia' tsin' _ _ _ _ _ _ _ _ _ _ tsia tsin tti ttsi + apply(auto dest!: mod_starD list_assn_len simp: prod_assn_def)[] + apply(vcg) + using False apply(auto dest!: mod_starD list_assn_len) + done + apply(sep_auto dest!: mod_starD) + using assms apply (auto dest!: list_assn_len)[] + using assms apply (auto dest!: list_assn_len)[] + apply(sep_auto) + using assms apply (auto dest!: list_assn_len mod_starD)[] + using assms apply (auto dest!: list_assn_len mod_starD)[] + (* Issue: we do not know yet what 'subp is pointing at *) + subgoal for _ _ _ _ _ _ tp tsia tsin tti ttsi _ _ _ _ _ _ _ _ tsia' tsin' tti' tsi' subi sepi subp + apply(subgoal_tac "z = (subi, sepi)") + prefer 2 + apply (metis assms(3) list_assn_len nth_append_length) + apply simp + apply(vcg) + subgoal + (* still the "IF" branch *) + apply(rule entailsI) + (* solves impossible case*) + using False apply (auto dest!: list_assn_len mod_starD)[] + done + apply (auto del: impCE) + apply(thin_tac "_ \ _")+ + apply(rule hoare_triple_preI) + (* for each possible combination of \ and \\, a subgoal is created *) + apply(sep_auto heap add: node\<^sub>i_rule_ins dest!: mod_starD del: impCE) + apply (auto dest!: pfa_assn_len)[] + apply (auto dest!: pfa_assn_len list_assn_len)[] + subgoal + apply(thin_tac "_ \ _")+ + apply(rule 
hoare_triple_preI) + apply(sep_auto split!: btupi.splits del: impCE) + apply(auto dest!: btupi_assn_T mod_starD del: impCE)[] + apply(rule ent_ex_postI[where x="lsi"]) + apply sep_auto + apply (sep_auto del: impCE) + apply(auto dest!: btupi_assn_Up mod_starD split!: list.splits del: impCE)[] + subgoal for li ai ri + apply(rule ent_ex_postI[where x="lsi @ [(li, ai)]"]) + apply sep_auto + done + done + apply (sep_auto del: impCE) + using assms apply(auto dest!: pfa_assn_len list_assn_len mod_starD)[] + using assms apply(auto dest!: pfa_assn_len list_assn_len mod_starD)[] + done + done + next + case (Cons rss rrs) + then show ?thesis + apply(subst rebalance_middle_tree_def) + apply(rule hoare_triple_preI) + apply(sep_auto dest!: mod_starD) + using assms apply (auto dest!: list_assn_len)[] + + apply(sep_auto split!: prod.splits) + using assms apply (auto simp del: height_btree.simps dest!: mod_starD list_assn_len)[] + apply(auto)[] + subgoal for _ _ _ _ _ _ _ _ tp tsia' tsin' _ _ _ _ _ _ _ _ _ _ tsia tsin tti ttsi + apply(auto dest!: mod_starD list_assn_len simp: prod_assn_def)[] + apply(vcg) + using False apply(auto dest!: mod_starD list_assn_len) + done + apply(sep_auto dest!: mod_starD del: impCE) + using assms apply (auto dest!: list_assn_len)[] + apply(sep_auto del: impCE) + using assms apply (auto dest!: list_assn_len mod_starD)[] + (* Issue: we do not know yet what 'xa' is pointing at *) + subgoal for list_heap1 list_heap2 _ _ _ _ _ _ tp ttsia' ttsin' tti' ttsi' _ _ _ _ _ _ _ _ ttsia ttsin tti ttsi subi sepi subp + apply(subgoal_tac "z = (subi, sepi)") + prefer 2 + apply (metis assms(3) list_assn_len nth_append_length) + apply simp + apply(vcg) + subgoal + (* still the "IF" branch *) + apply(rule entailsI) + (* solves impossible case*) + using False apply (auto dest!: list_assn_len mod_starD)[] + done + apply simp + subgoal for subtsi subti subts ti subi subtsl ttsl + (* TODO different nodei rule here *) + supply R = node\<^sub>i_rule_ins[where k=k and c="(max (2 * k) (Suc (_ + ttsin)))" and lsi=subts] + thm R + apply(cases subtsi) + apply(sep_auto heap add: R pfa_append_extend_grow_rule dest!: mod_starD del: impCE) + (* all of these cases are vacuous *) + using assms apply (auto dest!: list_assn_len pfa_assn_len)[] + using assms apply (auto dest!: list_assn_len pfa_assn_len)[] + using assms apply (auto dest!: list_assn_len pfa_assn_len)[] + apply(sep_auto split!: btupi.splits del: impCE) + using assms apply (auto dest!: list_assn_len pfa_assn_len)[] + apply(thin_tac "_ \ _")+ + apply(rule hoare_triple_preI) + apply (cases rsi) + apply(auto dest!: list_assn_len mod_starD)[] + (* TODO avoid creating subgoals here but still split the heap? 
do we need to do that anyways *) + subgoal for subtsa subtsn mtsa mtsn mtt mtsi _ _ _ _ _ _ _ _ rsubsep _ rrsi rssi + (* ensuring that the tree to the right is not none *) + apply (cases rsubsep) + apply(subgoal_tac "rsubsep = rrsi") + prefer 2 + using assms apply(auto dest!: list_assn_len mod_starD del: impCE simp add: second_last_access)[] + apply (simp add: prod_assn_def) + apply(cases rss) + apply simp + subgoal for rsubi rsepi rsub rsep + apply(subgoal_tac "height rsub \ 0") + prefer 2 + using assms apply(auto)[] + apply(cases rsubi; cases rsub) + apply simp+ + (* now we may proceed *) + apply (vcg (ss)) + apply (vcg (ss)) + apply (vcg (ss)) + apply (vcg (ss)) + apply (vcg (ss)) + subgoal for rsubi rsubts rsubt rsubtsi' rsubti rsubtsi subnode + apply(cases "kvs subnode") + apply (vcg (ss)) + apply (vcg (ss)) + apply (vcg (ss)) + apply (vcg (ss)) + apply (vcg (ss)) + subgoal for _ rsubtsn subtsmergedi + apply (cases subtsmergedi) + apply simp + apply (vcg (ss)) + subgoal for subtsmergeda _ + supply R = node\<^sub>i_rule_ins[where + k=k and + c="max (2*k) (Suc (subtsn + rsubtsn))" and + ls="mts" and + al="Suc (subtsn+rsubtsn)" and + aa=subtsmergeda and + ti=rsubti and + rsi=rsubtsi and + li=subti and a=sep and ai=sep + ] + thm R + apply(rule P_imp_Q_implies_P) + apply(auto del: impCE dest!: mod_starD list_assn_len)[] + apply(rule hoare_triple_preI) + apply(subgoal_tac "subtsn \ 2*k \ rsubtsn \ 2*k") + prefer 2 + apply (auto simp add: is_pfa_def)[] + apply (sep_auto heap add: R del: impCE) + apply(sep_auto split!: btupi.splits del: impCE) + using assms apply(auto dest!: mod_starD list_assn_len)[] + apply(sep_auto del: impCE) + using assms apply(auto dest!: mod_starD list_assn_len pfa_assn_len del: impCE)[] + apply(thin_tac "_ \ _")+ + apply(rule hoare_triple_preI) + apply (drule btupi_assn_T mod_starD | erule conjE exE)+ + apply vcg + apply simp + subgoal for rsubtsi ai tsian + apply(cases tsian) + apply simp + apply(rule P_imp_Q_implies_P) + apply(rule ent_ex_postI[where x="lsi @ (ai, rsep) # rssi"]) + apply(rule ent_ex_postI[where x="(ttsia, ttsin)"]) + apply(rule ent_ex_postI[where x="tti"]) + apply(rule ent_ex_postI[where x="ttsi"]) + using assms apply (sep_auto dest!: list_assn_len) + done + subgoal for _ _ rsubp rsubtsa _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ li ai ri + apply(sep_auto del: impCE) + using assms apply(auto dest!: list_assn_len)[] + apply(sep_auto del: impCE) + using assms apply(auto dest!: list_assn_len)[] + apply(thin_tac "_ \ _")+ + apply(rule hoare_triple_preI) + apply (drule btupi_assn_Up mod_starD | erule conjE exE)+ + apply vcg + (* generates two identical subgoals ? *) + apply(simp split!: list.split) + apply(rule ent_ex_postI[where x="(lsi @ (li, ai) # (ri, rsepi) # rssi)"]) + apply(rule ent_ex_postI[where x="(ttsia, ttsin)"]) + apply(rule ent_ex_postI[where x="tti"]) + apply(rule ent_ex_postI[where x="ttsi"]) + using assms apply (sep_auto dest!: list_assn_len) + apply(rule ent_ex_postI[where x="(lsi @ (li, ai) # (ri, rsepi) # rssi)"]) + apply(rule ent_ex_postI[where x="(ttsia, ttsin)"]) + apply(rule ent_ex_postI[where x="tti"]) + apply(rule ent_ex_postI[where x="ttsi"]) + using assms apply (sep_auto dest!: list_assn_len) + done + done + done + done + done + done + done + done + done + qed + qed + qed + qed +qed + +lemma rebalance_last_tree_rule: + assumes "height t = height sub" + and "ts = list@[(sub,sep)]" + shows " + rebalance_last_tree k tsia ti + <\r. 
btnode_assn k (abs_split.rebalance_last_tree k ts t) r >\<^sub>t" + apply(subst rebalance_last_tree_def) + apply(rule hoare_triple_preI) + using assms apply(auto dest!: mod_starD) + apply(subgoal_tac "length tsi - Suc 0 = length list") + prefer 2 + apply(auto dest!: list_assn_len)[] + using assms apply(sep_auto) + supply R = rebalance_middle_tree_rule[where + ls="list" and + rs="[]" and + i="length tsi - 1", simplified] + apply(cases tsia) + using R by blast + +partial_function (heap) split_max ::"nat \ ('a::{default,heap,linorder}) btnode ref option \ ('a btnode ref option \ 'a) Heap" + where + "split_max k r_t = (case r_t of Some p_t \ do { + t \ !p_t; + (case (last t) of None \ do { + (sub,sep) \ pfa_last (kvs t); + tsi' \ pfa_butlast (kvs t); + p_t := Btnode tsi' sub; + return (Some p_t, sep) + } | + Some x \ do { + (sub,sep) \ split_max k (Some x); + p_t' \ rebalance_last_tree k (kvs t) sub; + p_t := p_t'; + return (Some p_t, sep) + }) +}) +" + + +declare abs_split.split_max.simps [simp del] abs_split.rebalance_last_tree.simps [simp del] height_btree.simps [simp del] + +lemma split_max_rule: + assumes "abs_split.nonempty_lasttreebal t" + and "t \ Leaf" + shows " + split_max k ti + <((btree_assn k) \\<^sub>a id_assn) (abs_split.split_max k t)>\<^sub>t" + using assms +proof(induction k t arbitrary: ti rule: abs_split.split_max.induct) + case (2 Leaf) + then show ?case by auto +next + case (1 k ts tt) + then show ?case + proof(cases tt) + case Leaf + then show ?thesis + apply(subst split_max.simps) + apply (vcg) + using assms apply auto[] + apply (vcg (ss)) + apply simp + apply (vcg (ss)) + apply (vcg (ss)) + apply (vcg (ss)) + apply (vcg (ss)) + apply (vcg (ss)) + apply (vcg (ss)) + apply (vcg (ss)) + apply (vcg (ss)) + apply (vcg (ss)) + apply (vcg (ss)) + apply(rule hoare_triple_preI) + apply (vcg (ss)) + using 1 apply(auto dest!: mod_starD)[] + apply (vcg (ss)) + apply (vcg (ss)) + apply (vcg (ss)) + apply (vcg (ss)) + apply (vcg (ss)) + subgoal for tp tsi tti tsi' tnode subsep sub sep + apply(cases tsi) + apply(rule hoare_triple_preI) + apply (vcg) + apply(auto simp add: prod_assn_def abs_split.split_max.simps split!: prod.splits) + subgoal for tsia tsin _ _ tsin' lastsep lastsub + apply(rule ent_ex_postI[where x="(tsia, tsin')"]) + apply(rule ent_ex_postI[where x="sub"]) + apply(rule ent_ex_postI[where x="(butlast tsi')"]) + using 1 apply (auto dest!: mod_starD simp add: list_assn_append_Cons_left) + apply sep_auto + done + done + apply(sep_auto) + done + next + case (Node tts ttt) + have IH_help: "abs_split.nonempty_lasttreebal tt \ +tt \ Leaf \ + split_max k (Some ttp) <(btree_assn k \\<^sub>a id_assn) (abs_split.split_max k tt)>\<^sub>t" + for ttp + using "1.IH" Node by blast + obtain butlasttts l_sub l_sep where ts_split:"tts = butlasttts@[(l_sub, l_sep)]" + using 1 Node by auto + from Node show ?thesis + apply(subst split_max.simps) + apply (vcg) + using 1 apply auto[] + apply (vcg (ss)) + apply simp + apply (vcg (ss)) + apply (vcg (ss)) + apply (vcg (ss)) + apply (vcg (ss)) + apply (vcg (ss)) + apply (vcg (ss)) + apply (vcg (ss)) + apply (vcg (ss)) + using 1 apply(auto dest!: mod_starD)[] + apply (vcg (ss)) + subgoal for tp tsi tti tsi' tnode ttp + using "1.prems" apply (vcg heap add: IH_help) + apply simp + apply simp + apply(subst prod_assn_def) + apply(cases "abs_split.split_max k tt") + apply (auto simp del: abs_split.split_max.simps abs_split.rebalance_last_tree.simps height_btree.simps)[] + subgoal for ttsubi ttmaxi ttsub ttmax butlasttsi' lasttssubi butlastts lasttssub 
lasttssepi lasttssep + apply(rule hoare_triple_preI) + supply R = rebalance_last_tree_rule[where k=k and tsia=tsi and ti=ttsubi and t=ttsub and tsi=tsi' and ts=" (butlasttsi' @ [(lasttssubi, lasttssepi)])" + and list=butlasttsi' and sub=lasttssubi and sep=lasttssepi] + thm R + using ts_split + (*TODO weird post conditions... *) + apply (sep_auto heap add: R + simp del: abs_split.split_max.simps abs_split.rebalance_last_tree.simps height_btree.simps + dest!: mod_starD) + apply (metis abs_split.nonempty_lasttreebal.simps(2) abs_split.split_max_height btree.distinct(1)) + apply simp + apply(rule hoare_triple_preI) + apply (simp add: prod_assn_def) + apply vcg + apply(subst abs_split.split_max.simps) + using "1.prems" apply(auto dest!: mod_starD split!: prod.splits btree.splits) + subgoal for _ _ _ _ _ _ _ _ _ _ tp' + apply(cases "abs_split.rebalance_last_tree k (butlasttsi' @ [(lasttssubi, lasttssepi)]) ttsub"; cases tp') + apply auto + apply(rule ent_ex_preI) + subgoal for _ _ tsia' tsin' tt' _ tsi' + apply(rule ent_ex_postI[where x="(tsia', tsin')"]) + apply(rule ent_ex_postI[where x="tt'"]) + apply(rule ent_ex_postI[where x="tsi'"]) + apply sep_auto + done + done + done + done + done + qed +qed + +partial_function (heap) del ::"nat \ 'a \ ('a::{default,heap,linorder}) btnode ref option \ 'a btnode ref option Heap" + where + "del k x ti = (case ti of None \ return None | + Some p \ do { + node \ !p; + i \ imp_split (kvs node) x; + tsl \ pfa_length (kvs node); + if i < tsl then do { + (sub,sep) \ pfa_get (kvs node) i; + if sep \ x then do { + sub' \ del k x sub; + kvs' \ pfa_set (kvs node) i (sub',sep); + node' \ rebalance_middle_tree k kvs' i (last node); + p := node'; + return (Some p) + } + else if sub = None then do{ + kvs' \ pfa_delete (kvs node) i; + p := (Btnode kvs' (last node)); + return (Some p) + } + else do { + sm \ split_max k sub; + kvs' \ pfa_set (kvs node) i sm; + node' \ rebalance_middle_tree k kvs' i (last node); + p := node'; + return (Some p) + } + } else do { + t' \ del k x (last node); + node' \ rebalance_last_tree k (kvs node) t'; + p := node'; + return (Some p) + } +})" + +lemma rebalance_middle_tree_update_rule: + assumes "height tt = height sub" + and "case rs of (rsub,rsep) # list \ height rsub = height tt | [] \ True" + and "i = length ls" + shows " + rebalance_middle_tree k a i ti + \<^sub>t" +proof (cases a) + case [simp]: (Pair a n) + note R=rebalance_middle_tree_rule[of tt sub rs i ls k "zs1@(x', sep)#zs2" a n sep ti] + show ?thesis + apply(rule hoare_triple_preI) + using R assms apply (sep_auto dest!: mod_starD list_assn_len simp add: prod_assn_def) + using assn_times_assoc star_aci(3) by auto +qed + +lemma del_rule: + assumes "bal t" and "sorted (inorder t)" and "root_order k t" and "k > 0" + shows " + del k x ti + \<^sub>t" + using assms +proof (induction k x t arbitrary: ti rule: abs_split.del.induct) + case (1 k x) + then show ?case + apply(subst del.simps) + apply sep_auto + done +next + case (2 k x ts tt ti) + obtain ls rs where split_ts[simp]: "split ts x = (ls, rs)" + by (cases "split ts x") + obtain tss lastts_sub lastts_sep where last_ts: "ts = tss@[(lastts_sub, lastts_sep)]" + using "2.prems" apply auto + by (metis abs_split.isin.cases neq_Nil_rev_conv) + show ?case + proof(cases "rs") + case Nil + then show ?thesis + apply(subst del.simps) + apply sep_auto + using "2.prems"(2) sorted_inorder_separators apply blast + apply(rule hoare_triple_preI) + apply (sep_auto) + using Nil apply (auto simp add: split_relation_alt dest!: mod_starD 
list_assn_len)[] + using Nil apply (auto simp add: split_relation_alt dest!: mod_starD list_assn_len)[] + using Nil apply (auto simp add: split_relation_alt dest!: mod_starD list_assn_len)[] + apply (sep_auto heap add: "2.IH"(1)) + using "2.prems" apply (auto dest!: mod_starD)[] + using "2.prems" apply (auto dest!: mod_starD simp add: sorted_wrt_append)[] + using "2.prems" order_impl_root_order apply (auto dest!: mod_starD)[] + using "2.prems" apply (auto)[] + subgoal for tp tsia tsin tti tsi i _ _ tti' + apply(rule hoare_triple_preI) + supply R = rebalance_last_tree_rule[where t="(abs_split.del k x tt)" and ti=tti' and ts=ts and sub=lastts_sub + and list=tss and sep=lastts_sep] + thm R + using last_ts apply(sep_auto heap add: R) + using "2.prems" abs_split.del_height[of k tt x] order_impl_root_order[of k tt] apply (auto dest!: mod_starD)[] + apply simp + apply(rule hoare_triple_preI) + apply (sep_auto) + apply(cases "abs_split.rebalance_last_tree k ts (abs_split.del k x tt)") + apply(auto simp add: split_relation_alt dest!: mod_starD list_assn_len) + subgoal for tnode + apply (cases tnode; sep_auto) + done + done + done + next + case [simp]: (Cons rrs rss) + then obtain sub sep where [simp]: "rrs = (sub, sep)" + by (cases rrs) + consider (sep_n_x) "sep \ x" | + (sep_x_Leaf) "sep = x \ sub = Leaf" | + (sep_x_Node) "sep = x \ (\ts t. sub = Node ts t)" + using btree.exhaust by blast + then show ?thesis + proof(cases) + case sep_n_x + then show ?thesis + apply(subst del.simps) + apply sep_auto + using "2.prems"(2) sorted_inorder_separators apply blast + apply(vcg (ss)) + apply(vcg (ss)) + apply(vcg (ss)) + apply(vcg (ss)) + apply(vcg (ss)) + apply(vcg (ss)) + apply(vcg (ss)) + apply(vcg (ss)) + apply(vcg (ss)) + apply(vcg (ss)) + apply(vcg (ss)) + apply(vcg (ss)) + apply simp + apply(vcg (ss)) + apply(vcg (ss)) + apply(vcg (ss)) + subgoal for tp tsi ti' tsi' tnode i tsi'l subsep subi sepi + (* TODO this causes 4 subgoals *) + apply(auto simp add: split_relation_alt list_assn_append_Cons_left; + rule norm_pre_ex_rule; rule norm_pre_ex_rule; rule norm_pre_ex_rule; + rule hoare_triple_preI; + auto dest!: mod_starD)[] + apply (auto simp add: split_relation_alt dest!: list_assn_len)[] + subgoal for lsi subi rsi + apply(subgoal_tac "subi = None") + prefer 2 + apply(auto dest!: list_assn_len)[] + supply R = "2.IH"(2)[of ls rs rrs rss sub sep] + thm R + using split_ts apply(sep_auto heap add: R) + using "2.prems" apply auto[] + apply (metis "2.prems"(2) sorted_inorder_induct_subtree) + using "2.prems" apply auto[] + apply (meson "2.prems"(4) order_impl_root_order) + using "2.prems"(4) apply fastforce + apply(vcg (ss)) + apply(vcg (ss)) + apply(vcg (ss)) + apply (auto simp add: split_relation_alt dest!: list_assn_len)[] + apply(vcg (ss)) + apply(vcg (ss); simp) + apply(cases tsi; simp) + subgoal for subi' _ tsia' tsin' + supply R = rebalance_middle_tree_update_rule + thm R + (* TODO create a new heap rule, in the node_i style *) + apply(auto dest!: list_assn_len)[] + apply(rule hoare_triple_preI) + apply (sep_auto heap add: R dest!: mod_starD) + using "2.prems" abs_split.del_height[of k sub x] order_impl_root_order[of k sub] apply (auto)[] + using "2.prems" apply (auto split!: list.splits)[] + apply auto[] + apply sep_auto + subgoal for _ _ _ _ _ _ _ _ _ _ _ _ _ _ tnode'' + apply (cases "(abs_split.rebalance_middle_tree k ls (abs_split.del k x sub) sepi rss tt)"; cases tnode'') + apply sep_auto + apply sep_auto + done + done + done + apply (auto simp add: split_relation_alt dest!: mod_starD 
list_assn_len)[] + (* copy pasta of "none" branch *) + subgoal for subnode lsi subi rsi + apply(subgoal_tac "subi = Some subnode") + prefer 2 + apply(auto dest!: list_assn_len)[] + supply R = "2.IH"(2)[of ls rs rrs rss sub sep] + thm R + using split_ts apply(sep_auto heap add: R) + using "2.prems" apply auto[] + apply (metis "2.prems"(2) sorted_inorder_induct_subtree) + using "2.prems" apply auto[] + apply (meson "2.prems"(4) order_impl_root_order) + using "2.prems"(4) apply fastforce + apply(vcg (ss)) + apply(vcg (ss)) + apply(vcg (ss)) + apply (auto simp add: split_relation_alt dest!: list_assn_len)[] + apply(vcg (ss)) + apply(vcg (ss); simp) + apply(cases tsi; simp) + subgoal for x' xab a n + supply R = rebalance_middle_tree_update_rule + thm R + (* TODO create a new heap rule, in the node_i style *) + apply(auto dest!: list_assn_len)[] + apply(rule hoare_triple_preI) + apply (sep_auto heap add: R dest!: mod_starD) + using "2.prems" abs_split.del_height[of k sub x] order_impl_root_order[of k sub] apply (auto)[] + using "2.prems" apply (auto split!: list.splits)[] + apply auto[] + apply sep_auto + subgoal for _ _ _ _ _ _ _ _ _ _ _ _ _ _ tnode' + apply (cases "(abs_split.rebalance_middle_tree k ls (abs_split.del k x sub) sepi rss tt)"; cases tnode') + apply sep_auto + apply sep_auto + done + done + done + done + apply(rule hoare_triple_preI) + using Cons apply (auto simp add: split_relation_alt dest!: mod_starD list_assn_len)[] + done + next + case sep_x_Leaf + then show ?thesis + apply(subst del.simps) + apply sep_auto + using "2.prems"(2) sorted_inorder_separators apply blast + apply(vcg (ss)) + apply(vcg (ss)) + apply(vcg (ss)) + apply(vcg (ss)) + apply(vcg (ss)) + apply(vcg (ss)) + apply(vcg (ss)) + apply(vcg (ss)) + apply(vcg (ss)) + apply(vcg (ss)) + apply(vcg (ss)) + apply(vcg (ss)) + apply simp + apply(vcg (ss)) + apply(vcg (ss)) + apply(vcg (ss)) + subgoal for tp tsi ti' tsi' tnode i tsi'l subsep subi sepi + (* TODO this causes 4 subgoals *) + apply(auto simp add: split_relation_alt list_assn_append_Cons_left; + rule norm_pre_ex_rule; rule norm_pre_ex_rule; rule norm_pre_ex_rule; + rule hoare_triple_preI; + auto dest!: mod_starD)[] + (* the correct subbranch *) + subgoal for lsi subi rsi + apply(cases tsi) + apply (sep_auto) + apply(auto simp add: is_pfa_def dest!: list_assn_len)[] + apply (metis add_Suc_right le_imp_less_Suc length_append length_take less_add_Suc1 less_trans_Suc list.size(4) min.cobounded2 not_less_eq) + apply vcg + apply auto + subgoal for tsin tsia + apply(rule ent_ex_postI[where x="(tsia, tsin-1)"]) + apply(rule ent_ex_postI[where x="ti'"]) + apply(rule ent_ex_postI[where x="lsi@rsi"]) + apply (sep_auto dest!: list_assn_len) + done + done + apply (auto simp add: split_relation_alt dest!: list_assn_len)[] + apply (auto simp add: split_relation_alt dest!: list_assn_len)[] + apply (auto simp add: split_relation_alt dest!: list_assn_len)[] + done + apply(rule hoare_triple_preI) + using Cons apply (auto simp add: split_relation_alt dest!: mod_starD list_assn_len)[] + done + next + case sep_x_Node + then show ?thesis + apply(subst del.simps) + apply sep_auto + using "2.prems"(2) sorted_inorder_separators apply blast + apply(vcg (ss)) + apply(vcg (ss)) + apply(vcg (ss)) + apply(vcg (ss)) + apply(vcg (ss)) + apply(vcg (ss)) + apply(vcg (ss)) + apply(vcg (ss)) + apply(vcg (ss)) + apply(vcg (ss)) + apply(vcg (ss)) + apply(vcg (ss)) + apply simp + apply(vcg (ss)) + apply(vcg (ss)) + apply(vcg (ss)) + subgoal for subts subt tp tsi ti tsi' tnode i tsi'l subsep subi sepi + 
(* TODO this causes 4 subgoals *) + apply(auto simp add: split_relation_alt list_assn_append_Cons_left; + rule norm_pre_ex_rule; rule norm_pre_ex_rule; rule norm_pre_ex_rule; + rule hoare_triple_preI; + auto dest!: mod_starD)[] + apply (auto simp add: split_relation_alt dest!: list_assn_len)[] + apply (auto simp add: split_relation_alt dest!: list_assn_len)[] + (* the correct sub branch *) + subgoal for subnode lsi subi rsi + apply(subgoal_tac "subi = Some subnode") + apply (simp del: btree_assn.simps) + supply R = split_max_rule[of "Node subts subt" k "Some subnode"] + thm R + apply(sep_auto heap add: R simp del: btree_assn.simps) + using "2.prems" apply(auto dest!: list_assn_len mod_starD simp del: bal.simps order.simps)[] + subgoal + proof(goal_cases) + case 1 + then have "order k (Node subts subt)" + by blast + moreover have "k > 0" + by (simp add: "2.prems"(4)) + ultimately obtain sub_ls lsub lsep where sub_ts_split: "subts = sub_ls@[(lsub,lsep)]" + by (metis abs_split.isin.cases le_0_eq list.size(3) order.simps(2) rev_exhaust zero_less_iff_neq_zero) + from 1 have "bal (Node subts subt)" + by auto + then have "height lsub = height subt" + by (simp add: sub_ts_split) + then show ?thesis using sub_ts_split by blast + qed + using "2.prems" abs_split.order_bal_nonempty_lasttreebal[of k subt] order_impl_root_order[of k subt] + apply(auto)[] + apply (auto simp add: split_relation_alt dest!: list_assn_len)[] + apply vcg + apply auto[] + apply(cases "abs_split.split_max k (Node subts subt)"; simp) + subgoal for split_res _ split_sub split_sep + apply(cases split_res; simp) + subgoal for split_subi split_sepi + supply R = rebalance_middle_tree_update_rule[ + of tt split_sub rss "length lsi" ls k lsi split_subi split_sep rsi tsi ti + ] + thm R + (* id_assn split_sepi doesnt match yet... 
*) + apply(auto simp add: prod_assn_def dest!: list_assn_len) + apply (sep_auto) + apply(rule hoare_triple_preI) + apply(auto dest!: mod_starD)[] + apply (sep_auto heap add: R) + using "2.prems" abs_split.split_max_height[of k sub] order_impl_root_order[of k sub] + abs_split.order_bal_nonempty_lasttreebal[of k sub] apply (auto)[] + using "2.prems" abs_split.split_max_bal[of sub k] order_impl_root_order[of k sub] + apply (auto split!: list.splits)[] + apply auto[] + apply(rule hoare_triple_preI) + apply(auto dest!: mod_starD)[] + subgoal for subtsi''a subtsi''n ti subtsi'' tnode' + apply(cases "(abs_split.rebalance_middle_tree k ls split_sub split_sep rss tt)"; cases "tnode'") + apply auto + apply sep_auto + done + done + done + apply (auto simp add: split_relation_alt dest!: list_assn_len)[] + done + apply (auto simp add: split_relation_alt dest!: list_assn_len)[] + done + apply(rule hoare_triple_preI) + using Cons apply (auto simp add: split_relation_alt dest!: mod_starD list_assn_len)[] + done + qed + qed +qed + +definition reduce_root ::"('a::{default,heap,linorder}) btnode ref option \ 'a btnode ref option Heap" + where + "reduce_root ti = (case ti of + None \ return None | + Some p_t \ do { + node \ !p_t; + tsl \ pfa_length (kvs node); + case tsl of 0 \ return (last node) | + _ \ return ti +})" + +lemma reduce_root_rule: + " reduce_root ti \<^sub>t" + apply(subst reduce_root_def) + apply(cases t; cases ti) + apply (sep_auto split!: nat.splits list.splits)+ + done + +definition delete ::"nat \ 'a \ ('a::{default,heap,linorder}) btnode ref option \ 'a btnode ref option Heap" + where + "delete k x ti = do { + ti' \ del k x ti; + reduce_root ti' +}" + +lemma delete_rule: + assumes "bal t" and "root_order k t" and "k > 0" and "sorted (inorder t)" + shows " delete k x ti \<^sub>t" + apply(subst delete_def) + using assms apply (sep_auto heap add: del_rule reduce_root_rule) + done lemma empty_rule: shows " empty <\r. btree_assn k (abs_split.empty_btree) r>" apply(subst empty_def) apply(sep_auto simp add: abs_split.empty_btree_def) done end end diff --git a/thys/BTree/BTree_ImpSplit.thy b/thys/BTree/BTree_ImpSplit.thy --- a/thys/BTree/BTree_ImpSplit.thy +++ b/thys/BTree/BTree_ImpSplit.thy @@ -1,419 +1,424 @@ theory BTree_ImpSplit imports BTree_ImpSet BTree_Split Imperative_Loops begin section "Imperative split operations" text "So far, we have only given a functional specification of a possible split. We will now provide imperative split functions that refine the functional specification. However, rather than tracing the execution of the abstract specification, the imperative versions are implemented using while-loops." subsection "Linear split" text "The linear split is the most simple split function for binary trees. It serves a good example on how to use while-loops in Imperative/HOL and how to prove Hoare-Triples about its application using loop invariants." definition lin_split :: "('a::heap \ 'b::{heap,linorder}) pfarray \ 'b \ nat Heap" where "lin_split \ \ (a,n) p. do { i \ heap_WHILET (\i. if i Array.nth a i; return (si. return (i+1)) 0; return i }" lemma lin_split_rule: " < is_pfa c xs (a,n)> lin_split (a,n) p <\i. is_pfa c xs (a,n) * \(i\n \ (\j (i snd (xs!i)\p))>\<^sub>t" unfolding lin_split_def supply R = heap_WHILET_rule''[where R = "measure (\i. n - i)" and I = "\i. is_pfa c xs (a,n) * \(i\n \ (\j 'b \ nat Heap" where "bin'_split \ \(a,n) p. do { (low',high') \ heap_WHILET (\(low,high). return (low < high)) (\(low,high). 
let mid = ((low + high) div 2) in do { s \ Array.nth a mid; if p < s then return (low, mid) else if p > s then return (mid+1, high) else return (mid,mid) }) (0::nat,n); return low' }" thm sorted_wrt_nth_less (* alternative: replace (\j 0 \ xs!(l-1) < p)*) lemma bin'_split_rule: " sorted_less xs \ < is_pfa c xs (a,n)> bin'_split (a,n) p <\l. is_pfa c xs (a,n) * \(l \ n \ (\j (l xs!l\p)) >\<^sub>t" unfolding bin'_split_def supply R = heap_WHILET_rule''[where R = "measure (\(l,h). h-l)" and I = "\(l,h). is_pfa c xs (a,n) * \(l\h \ h \ n \ (\j (h xs!h\p))" and b = "\(l,h). l length l'a" assume a: "l'a ! ((aa + n) div 2) < p" moreover assume "aa < n" ultimately have b: "((aa+n)div 2) < n" by linarith then have "(take n l'a) ! ((aa + n) div 2) < p" using a by auto moreover assume "sorted_less (take n l'a)" ultimately have "\j. j < (aa+n)div 2 \ (take n l'a) ! j < (take n l'a) ! ((aa + n) div 2)" using sorted_wrt_nth_less[where ?P="(<)" and xs="(take n l'a)" and ?j="((aa + n) div 2)"] a b "0" by auto moreover fix j assume "j < (aa+n) div 2" ultimately show "l'a ! j < p" using "0" b using \take n l'a ! ((aa + n) div 2) < p\ dual_order.strict_trans by auto qed subgoal for l' aa b l'a aaa ba j proof - assume t0: "n \ length l'a" assume t1: "aa < b" assume a: "l'a ! ((aa + b) div 2) < p" moreover assume "b \ n" ultimately have b: "((aa+b)div 2) < n" using t1 by linarith then have "(take n l'a) ! ((aa + b) div 2) < p" using a by auto moreover assume "sorted_less (take n l'a)" ultimately have "\j. j < (aa+b)div 2 \ (take n l'a) ! j < (take n l'a) ! ((aa + b) div 2)" using sorted_wrt_nth_less[where ?P="(<)" and xs="(take n l'a)" and ?j="((aa + b) div 2)"] a b t0 by auto moreover fix j assume "j < (aa+b) div 2" ultimately show "l'a ! j < p" using t0 b using \take n l'a ! ((aa + b) div 2) < p\ dual_order.strict_trans by auto qed apply sep_auto apply (metis le_less nth_take) apply (metis le_less nth_take) apply sep_auto subgoal for l' aa l'a aaa ba j proof - assume t0: "aa < n" assume t1: " n \ length l'a" assume t4: "sorted_less (take n l'a)" assume t5: "j < (aa + n) div 2" have "(aa+n) div 2 < n" using t0 by linarith then have "(take n l'a) ! j < (take n l'a) ! ((aa + n) div 2)" using t0 sorted_wrt_nth_less[where xs="take n l'a" and ?j="((aa + n) div 2)"] t1 t4 t5 by auto then show ?thesis using \(aa + n) div 2 < n\ t5 by auto qed subgoal for l' aa b l'a aaa ba j proof - assume t0: "aa < b" assume t1: " n \ length l'a" assume t3: "b \ n" assume t4: "sorted_less (take n l'a)" assume t5: "j < (aa + b) div 2" have "(aa+b) div 2 < n" using t3 t0 by linarith then have "(take n l'a) ! j < (take n l'a) ! ((aa + b) div 2)" using t0 sorted_wrt_nth_less[where xs="take n l'a" and ?j="((aa + b) div 2)"] t1 t4 t5 by auto then show ?thesis using \(aa + b) div 2 < n\ t5 by auto qed apply (metis nth_take order_mono_setup.refl) apply sep_auto apply (sep_auto simp add: is_pfa_def) done text "Then, using the same loop invariant, a binary split for B-tree-like arrays is derived in a straightforward manner." definition bin_split :: "('a::heap \ 'b::{heap,linorder}) pfarray \ 'b \ nat Heap" where "bin_split \ \(a,n) p. do { (low',high') \ heap_WHILET (\(low,high). return (low < high)) (\(low,high). 
let mid = ((low + high) div 2) in do { (_,s) \ Array.nth a mid; if p < s then return (low, mid) else if p > s then return (mid+1, high) else return (mid,mid) }) (0::nat,n); return low' }" thm nth_take lemma nth_take_eq: "take n ls = take n ls' \ i < n \ ls!i = ls'!i" by (metis nth_take) lemma map_snd_sorted_less: "\sorted_less (map snd xs); i < j; j < length xs\ \ snd (xs ! i) < snd (xs ! j)" by (metis (mono_tags, hide_lams) length_map less_trans nth_map sorted_wrt_iff_nth_less) lemma map_snd_sorted_lesseq: "\sorted_less (map snd xs); i \ j; j < length xs\ \ snd (xs ! i) \ snd (xs ! j)" by (metis eq_iff less_imp_le map_snd_sorted_less order.not_eq_order_implies_strict) lemma bin_split_rule: " sorted_less (map snd xs) \ < is_pfa c xs (a,n)> bin_split (a,n) p <\l. is_pfa c xs (a,n) * \(l \ n \ (\j (l snd(xs!l)\p)) >\<^sub>t" (* this works in principle, as demonstrated above *) unfolding bin_split_def supply R = heap_WHILET_rule''[where R = "measure (\(l,h). h-l)" and I = "\(l,h). is_pfa c xs (a,n) * \(l\h \ h \ n \ (\j (h snd (xs!h)\p))" and b = "\(l,h). l(_,s). s(_,s). sAny function that yields the heap rule we have obtained for bin\_split and lin\_split also refines this abstract split.\ locale imp_split_smeq = fixes split_fun :: "('a::{heap,default,linorder} btnode ref option \ 'a) array \ nat \ 'a \ nat Heap" assumes split_rule: "sorted_less (separators xs) \ split_fun (a, n) (p::'a) <\r. is_pfa c xs (a, n) * \ (r \ n \ (\j (r < n \ p \ snd (xs ! r)))>\<^sub>t" begin lemma abs_split_full: "\(_,s) \ set xs. s < p \ abs_split xs p = (xs,[])" by (simp add: abs_split_def) lemma abs_split_split: assumes "n < length xs" and "(\(_,s) \ set (take n xs). s < p)" and " (case (xs!n) of (_,s) \ \(s < p))" shows "abs_split xs p = (take n xs, drop n xs)" using assms apply (auto simp add: abs_split_def) apply (metis (mono_tags, lifting) id_take_nth_drop old.prod.case takeWhile_eq_all_conv takeWhile_tail) by (metis (no_types, lifting) Cons_nth_drop_Suc case_prod_conv dropWhile.simps(2) dropWhile_append2 id_take_nth_drop) lemma split_rule_abs_split: shows "sorted_less (separators ts) \ < is_pfa c tsi (a,n) * list_assn (A \\<^sub>a id_assn) ts tsi> split_fun (a,n) p <\i. is_pfa c tsi (a,n) * list_assn (A \\<^sub>a id_assn) ts tsi * \(split_relation ts (abs_split ts p) i)>\<^sub>t" apply(rule hoare_triple_preI) apply (sep_auto heap: split_rule dest!: mod_starD id_assn_list simp add: list_assn_prod_map split_ismeq) apply(auto simp add: is_pfa_def) proof - fix h l' assume heap_init: "h \ a \\<^sub>a l'" "map snd ts = (map snd (take n l'))" "n \ length l'" show full_thm: "\j split_relation ts (abs_split ts p) n" proof - assume sm_list: "\jj < length (map snd (take n l')). ((map snd (take n l'))!j) < p" by simp then have "\j(_,s) \ set ts. s < p" by (metis case_prod_unfold in_set_conv_nth length_map nth_map) then have "abs_split ts p = (ts, [])" using abs_split_full[of ts p] by simp then show "split_relation ts (abs_split ts p) n" using split_relation_length by (metis heap_init(2) heap_init(3) length_map length_take min.absorb2) qed then show "\j p \ snd (take n l' ! n) \ split_relation ts (abs_split ts p) n" by simp show part_thm: "\x. x < n \ \j p \ snd (l' ! x) \ split_relation ts (abs_split ts p) x" proof - fix x assume x_sm_len: "x < n" moreover assume sm_list: "\jjj(_,x) \ set (take x ts). x < p" by (auto simp add: in_set_conv_nth min.absorb2)+ moreover assume "p \ snd (l' ! 
x)" then have "case l'!x of (_,s) \ \(s < p)" by (simp add: case_prod_unfold) then have "case ts!x of (_,s) \ \(s < p)" using heap_init x_sm_len x_sm_len_ts by (metis (mono_tags, lifting) case_prod_unfold length_map length_take min.absorb2 nth_take snd_map_help(2)) ultimately have "abs_split ts p = (take x ts, drop x ts)" using x_sm_len_ts abs_split_split[of x ts p] heap_init by (metis length_map length_take min.absorb2) then show "split_relation ts (abs_split ts p) x" using x_sm_len_ts by (metis append_take_drop_id heap_init(2) heap_init(3) length_map length_take less_imp_le_nat min.absorb2 split_relation_alt) qed qed sublocale imp_split abs_split split_fun apply(unfold_locales) apply(sep_auto heap: split_rule_abs_split) done end subsection "Obtaining executable code" text "In order to obtain fully defined functions, we need to plug our split function implementations into the locales we introduced previously." interpretation btree_imp_linear_split: imp_split_smeq lin_split apply unfold_locales apply(sep_auto heap: lin_split_rule) done text "Obtaining actual code turns out to be slightly more difficult due to the use of locales. However, we successfully obtain the B-tree insertion and membership query with binary search splitting." global_interpretation btree_imp_binary_split: imp_split_smeq bin_split defines btree_isin = btree_imp_binary_split.isin and btree_ins = btree_imp_binary_split.ins and btree_insert = btree_imp_binary_split.insert + and btree_del = btree_imp_binary_split.del + and btree_split_max = btree_imp_binary_split.split_max + and btree_delete = btree_imp_binary_split.delete and btree_empty = btree_imp_binary_split.empty apply unfold_locales apply(sep_auto heap: bin_split_rule) done -thm btree_imp_binary_split.ins.simps -declare btree_imp_binary_split.ins.simps[code] btree_imp_binary_split.isin.simps[code] +declare btree_imp_binary_split.ins.simps[code] +declare btree_imp_binary_split.isin.simps[code] +declare btree_imp_binary_split.del.simps[code] btree_imp_binary_split.split_max.simps[code] -export_code btree_empty btree_isin btree_insert checking SML Scala -export_code btree_empty btree_isin btree_insert in SML module_name BTreeInsert -export_code btree_empty btree_isin btree_insert in Scala module_name BTreeInsert +export_code btree_empty btree_isin btree_insert btree_delete checking OCaml SML Scala +export_code btree_empty btree_isin btree_insert btree_delete in OCaml module_name BTree +export_code btree_empty btree_isin btree_insert btree_delete in SML module_name BTree +export_code btree_empty btree_isin btree_insert btree_delete in Scala module_name BTree end diff --git a/thys/BTree/BTree_Map.thy b/thys/BTree/BTree_Map.thy new file mode 100644 --- /dev/null +++ b/thys/BTree/BTree_Map.thy @@ -0,0 +1,91 @@ +theory BTree_Map +imports BTree_Set "HOL-Data_Structures.Map_Specs" +begin + +term "(1,2)" +term int + +fun eq_kv where +"eq_kv (k1, v1) (k2,v2) = (k1 = k2)" + +datatype ('a, 'b) ukv = KV 'a 'b + +quotient_type ('a,'b) kv = "('a, 'b) prod" / eq_kv + apply(rule equivpI) + apply (auto simp add: reflp_def symp_def transp_def) + done + +type_notation (ASCII) + kv (infixr "\" 20) + +instantiation kv :: (linorder, type) linorder +begin + +fun less_eq_ukv::"'a \ 'b \ 'a \ 'b \ bool" + where "less_eq_ukv (k1, v1) (k2, v2) = (k1 \ k2)" + +lift_definition less_eq_kv :: "'a \ 'b \ 'a \ 'b \ bool" is less_eq_ukv + by auto + +definition less_kv:: "'a \ 'b \ 'a \ 'b \ bool" + where "less_kv a b = (a \ b \ \b \ a)" + + +instance +proof(standard, goal_cases) + case (1 x y) + then 
show ?case by (simp add: less_kv_def) +next + case (2 x) + then show ?case + by (transfer; clarsimp) +next + case (3 x y z) + then show ?case + by (transfer; auto) +next + case (4 x y) + then show ?case + by (transfer; auto) +next + case (5 x y) + then show ?case + by (transfer; auto) +qed +end + +locale split_default = abs_split: BTree_Set.split split + for split:: + "(('a \ 'b) btree \ ('a::{linorder} \ 'b::{default})) list \ ('a \ 'b) + \ (('a \ 'b) btree \ ('a \ 'b)) list \ (('a \ 'b) btree \ ('a \ 'b)) list" +begin + +lift_definition lift :: "'a \ ('a \ 'b)" is + "\a. (a, default)" . + +lift_definition val :: "('a \ 'b) \ 'b" is + "\(a,b). b" + apply auto + sorry + +fun find where + "find (Leaf) y = None" | + "find (Node ts t) y = ( + case split ts y of (_,(sub,sep)#rs) \ ( + if y = sep then + Some sep + else + find sub y + ) + | (_,[]) \ find t y + )" + + +fun lookup :: "('a \ 'b option) btree \ 'a \ 'a option" + where "lookup t x = (case find t (x,None) of Some (a,b) \ Some b | None \ None)" + +interpretation btree_map: Map_by_Ordered +empty_btree + +end +end \ No newline at end of file diff --git a/thys/BTree/BTree_Set.thy b/thys/BTree/BTree_Set.thy --- a/thys/BTree/BTree_Set.thy +++ b/thys/BTree/BTree_Set.thy @@ -1,1931 +1,1936 @@ theory BTree_Set imports BTree "HOL-Data_Structures.Set_Specs" begin section "Set interpretation" subsection "Auxiliary functions" fun split_half:: "('a btree\'a) list \ (('a btree\'a) list \ ('a btree\'a) list)" where "split_half xs = (take (length xs div 2) xs, drop (length xs div 2) xs)" lemma drop_not_empty: "xs \ [] \ drop (length xs div 2) xs \ []" apply(induction xs) apply(auto split!: list.splits) done lemma split_half_not_empty: "length xs \ 1 \ \ls sub sep rs. split_half xs = (ls,(sub,sep)#rs)" using drop_not_empty by (metis (no_types, hide_lams) drop0 drop_eq_Nil eq_snd_iff hd_Cons_tl le_trans not_one_le_zero split_half.simps) subsection "The split function locale" text "Here, we abstract away the inner workings of the split function for B-tree operations." (* TODO what if we define a function "list_split" that returns a split list for mapping arbitrary f (separators) and g (subtrees) s.th. f :: 'a \ ('b::linorder) and g :: 'a \ 'a btree this would allow for key,pointer pairs to be inserted into the tree *) (* TODO what if the keys are the pointers? *) locale split = fixes split :: "('a btree\'a::linorder) list \ 'a \ (('a btree\'a) list \ ('a btree\'a) list)" assumes split_req: "\split xs p = (ls,rs)\ \ xs = ls @ rs" "\split xs p = (ls@[(sub,sep)],rs); sorted_less (separators xs)\ \ sep < p" "\split xs p = (ls,(sub,sep)#rs); sorted_less (separators xs)\ \ p \ sep" begin lemmas split_conc = split_req(1) lemmas split_sorted = split_req(2,3) lemma [termination_simp]:"(ls, (sub, sep) # rs) = split ts y \ size sub < Suc (size_list (\x. Suc (size (fst x))) ts + size l)" using split_conc[of ts y ls "(sub,sep)#rs"] by auto fun invar_inorder where "invar_inorder k t = (bal t \ root_order k t)" definition "empty_btree = Leaf" subsection "Membership" fun isin:: "'a btree \ 'a \ bool" where "isin (Leaf) y = False" | "isin (Node ts t) y = ( case split ts y of (_,(sub,sep)#rs) \ ( if y = sep then True else isin sub y ) | (_,[]) \ isin t y )" subsection "Insertion" text "The insert function requires an auxiliary data structure and auxiliary invariant functions." 
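Before moving on to the insertion machinery, the interplay between the abstract split locale and the membership test defined above can be summarised outside of Isabelle. The following is a minimal, hypothetical Scala sketch; it is not the code produced by export_code, the names BTree, Leaf, Node, split and isin merely mirror the Isabelle definitions, and the linear span stands in for an arbitrary instantiation of the split locale.

sealed trait BTree[A]
case class Leaf[A]() extends BTree[A]
case class Node[A](kvs: List[(BTree[A], A)], last: BTree[A]) extends BTree[A]

// One admissible instance of the abstract split: separators strictly below
// the key stay on the left, the remainder (first separator >= key) on the right.
def split[A](ts: List[(BTree[A], A)], y: A)(implicit ord: Ordering[A])
    : (List[(BTree[A], A)], List[(BTree[A], A)]) =
  ts.span { case (_, sep) => ord.lt(sep, y) }

// Membership: succeed if the key equals the first separator on the right,
// otherwise descend into the corresponding subtree (or into the last subtree).
def isin[A](t: BTree[A], y: A)(implicit ord: Ordering[A]): Boolean = t match {
  case Leaf() => false
  case Node(ts, lastSub) =>
    split(ts, y) match {
      case (_, (sub, sep) :: _) => ord.equiv(sep, y) || isin(sub, y)
      case (_, Nil)             => isin(lastSub, y)
    }
}

For instance, on val t: BTree[Int] = Node(List((Leaf[Int](), 3), (Leaf[Int](), 7)), Leaf[Int]()) the call isin(t, 7) finds the separator 7 directly, while isin(t, 5) descends into the empty subtree between 3 and 7 and returns false, mirroring the behaviour of the functional isin above.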
datatype 'b up\<^sub>i = T\<^sub>i "'b btree" | Up\<^sub>i "'b btree" 'b "'b btree" fun order_up\<^sub>i where "order_up\<^sub>i k (T\<^sub>i sub) = order k sub" | "order_up\<^sub>i k (Up\<^sub>i l a r) = (order k l \ order k r)" fun root_order_up\<^sub>i where "root_order_up\<^sub>i k (T\<^sub>i sub) = root_order k sub" | "root_order_up\<^sub>i k (Up\<^sub>i l a r) = (order k l \ order k r)" fun height_up\<^sub>i where "height_up\<^sub>i (T\<^sub>i t) = height t" | "height_up\<^sub>i (Up\<^sub>i l a r) = max (height l) (height r)" fun bal_up\<^sub>i where "bal_up\<^sub>i (T\<^sub>i t) = bal t" | "bal_up\<^sub>i (Up\<^sub>i l a r) = (height l = height r \ bal l \ bal r)" fun inorder_up\<^sub>i where "inorder_up\<^sub>i (T\<^sub>i t) = inorder t" | "inorder_up\<^sub>i (Up\<^sub>i l a r) = inorder l @ [a] @ inorder r" text "The following function merges two nodes and returns separately split nodes if an overflow occurs" fun node\<^sub>i:: "nat \ ('a btree \ 'a) list \ 'a btree \ 'a up\<^sub>i" where "node\<^sub>i k ts t = ( if length ts \ 2*k then T\<^sub>i (Node ts t) else ( case split_half ts of (ls, (sub,sep)#rs) \ Up\<^sub>i (Node ls sub) sep (Node rs t) ) )" +lemma nodei_ti_simp: "node\<^sub>i k ts t = T\<^sub>i x \ x = Node ts t" + apply (cases "length ts \ 2*k") + apply (auto split!: list.splits) + done + fun ins:: "nat \ 'a \ 'a btree \ 'a up\<^sub>i" where "ins k x Leaf = (Up\<^sub>i Leaf x Leaf)" | "ins k x (Node ts t) = ( case split ts x of - (ls,(sub,sep)#rs) \ + (ls,(sub,sep)#rs) \ (if sep = x then T\<^sub>i (Node ts t) else - (case ins k x sub of + (case ins k x sub of Up\<^sub>i l a r \ - node\<^sub>i k (ls @ (l,a)#(r,sep)#rs) t | + node\<^sub>i k (ls @ (l,a)#(r,sep)#rs) t | T\<^sub>i a \ T\<^sub>i (Node (ls @ (a,sep) # rs) t))) | (ls, []) \ (case ins k x t of Up\<^sub>i l a r \ node\<^sub>i k (ls@[(l,a)]) r | T\<^sub>i a \ T\<^sub>i (Node ls a) ) )" fun tree\<^sub>i::"'a up\<^sub>i \ 'a btree" where "tree\<^sub>i (T\<^sub>i sub) = sub" | "tree\<^sub>i (Up\<^sub>i l a r) = (Node [(l,a)] r)" fun insert::"nat \ 'a \ 'a btree \ 'a btree" where "insert k x t = tree\<^sub>i (ins k x t)" subsection "Deletion" -text "The following deletion method is inspired by Bayer (70) and Fielding (??). +text "The following deletion method is inspired by Bauer (70) and Fielding (??). Rather than stealing only a single node from the neighbour, the neighbour is fully merged with the potentially underflowing node. If the resulting node is still larger than allowed, the merged node is split again, using the rules known from insertion splits. If the resulting node has admissable size, it is simply kept in the tree." fun rebalance_middle_tree where "rebalance_middle_tree k ls Leaf sep rs Leaf = ( Node (ls@(Leaf,sep)#rs) Leaf )" | "rebalance_middle_tree k ls (Node mts mt) sep rs (Node tts tt) = ( - if length mts \ k \ length tts \ k then + if length mts \ k \ length tts \ k then Node (ls@(Node mts mt,sep)#rs) (Node tts tt) else ( case rs of [] \ ( case node\<^sub>i k (mts@(mt,sep)#tts) tt of T\<^sub>i u \ Node ls u | Up\<^sub>i l a r \ Node (ls@[(l,a)]) r) | (Node rts rt,rsep)#rs \ ( case node\<^sub>i k (mts@(mt,sep)#rts) rt of T\<^sub>i u \ Node (ls@(u,rsep)#rs) (Node tts tt) | Up\<^sub>i l a r \ Node (ls@(l,a)#(r,rsep)#rs) (Node tts tt)) ))" text "Deletion" text "All trees are merged with the right neighbour on underflow. Obviously for the last tree this would not work since it has no right neighbour. Therefore this tree, as the only exception, is merged with the left neighbour. 
However since we it does not make a difference, we treat the situation as if the second to last tree underflowed." fun rebalance_last_tree where "rebalance_last_tree k ts t = ( case last ts of (sub,sep) \ rebalance_middle_tree k (butlast ts) sub sep [] t )" text "Rather than deleting the minimal key from the right subtree, we remove the maximal key of the left subtree. This is due to the fact that the last tree can easily be accessed and the left neighbour is way easier to access than the right neighbour, it resides in the same pair as the separating element to be removed." fun split_max where "split_max k (Node ts t) = (case t of Leaf \ ( - let (sub,sep) = last ts in + let (sub,sep) = last ts in (Node (butlast ts) sub, sep) )| -_ \ +_ \ case split_max k t of (sub, sep) \ (rebalance_last_tree k ts sub, sep) )" fun del where "del k x Leaf = Leaf" | "del k x (Node ts t) = ( - case split ts x of - (ls,[]) \ + case split ts x of + (ls,[]) \ rebalance_last_tree k ls (del k x t) | (ls,(sub,sep)#rs) \ ( - if sep \ x then + if sep \ x then rebalance_middle_tree k ls (del k x sub) sep rs t else if sub = Leaf then Node (ls@rs) t else let (sub_s, max_s) = split_max k sub in rebalance_middle_tree k ls sub_s max_s rs t ) )" fun reduce_root where "reduce_root Leaf = Leaf" | "reduce_root (Node ts t) = (case ts of [] \ t | _ \ (Node ts t) )" fun delete where "delete k x t = reduce_root (del k x t)" text "An invariant for intermediate states at deletion. In particular we allow for an underflow to 0 subtrees." fun almost_order where "almost_order k Leaf = True" | "almost_order k (Node ts t) = ( (length ts \ 2*k) \ (\s \ set (subtrees ts). order k s) \ order k t )" text "A recursive property of the \"spine\" we want to walk along for splitting off the maximum of the left subtree." fun nonempty_lasttreebal where "nonempty_lasttreebal Leaf = True" | "nonempty_lasttreebal (Node ts t) = ( (\ls tsub tsep. ts = (ls@[(tsub,tsep)]) \ height tsub = height t) \ nonempty_lasttreebal t )" subsection "Proofs of functional correctness" -lemma split_set: +lemma split_set: assumes "split ts z = (ls,(a,b)#rs)" shows "(a,b) \ set ts" and "(x,y) \ set ls \ (x,y) \ set ts" and "(x,y) \ set rs \ (x,y) \ set ts" and "set ls \ set rs \ {(a,b)} = set ts" and "\x \ set ts. 
b \ Basic_BNFs.snds x" using split_conc assms by fastforce+ lemma split_length: "split ts x = (ls, rs) \ length ls + length rs = length ts" by (auto dest: split_conc) text "Isin proof" thm isin_simps (* copied from comment in List_Ins_Del *) lemma sorted_ConsD: "sorted_less (y # xs) \ x \ y \ x \ set xs" by (auto simp: sorted_Cons_iff) lemma sorted_snocD: "sorted_less (xs @ [y]) \ y \ x \ x \ set xs" by (auto simp: sorted_snoc_iff) lemmas isin_simps2 = sorted_lems sorted_ConsD sorted_snocD (*-----------------------------*) lemma isin_sorted: "sorted_less (xs@a#ys) \ (x \ set (xs@a#ys)) = (if x < a then x \ set xs else x \ set (a#ys))" by (auto simp: isin_simps2) (* lift to split *) lemma isin_sorted_split: assumes "sorted_less (inorder (Node ts t))" and "split ts x = (ls, rs)" shows "x \ set (inorder (Node ts t)) = (x \ set (inorder_list rs @ inorder t))" proof (cases ls) case Nil then have "ts = rs" using assms by (auto dest!: split_conc) then show ?thesis by simp next case Cons then obtain ls' sub sep where ls_tail_split: "ls = ls' @ [(sub,sep)]" by (metis list.simps(3) rev_exhaust surj_pair) then have "sep < x" using split_req(2)[of ts x ls' sub sep rs] using sorted_inorder_separators[OF assms(1)] using assms by simp then show ?thesis using assms(1) split_conc[OF assms(2)] ls_tail_split using isin_sorted[of "inorder_list ls' @ inorder sub" sep "inorder_list rs @ inorder t" x] by auto qed lemma isin_sorted_split_right: assumes "split ts x = (ls, (sub,sep)#rs)" and "sorted_less (inorder (Node ts t))" and "sep \ x" shows "x \ set (inorder_list ((sub,sep)#rs) @ inorder t) = (x \ set (inorder sub))" proof - from assms have "x < sep" proof - from assms have "sorted_less (separators ts)" by (simp add: sorted_inorder_separators) then show ?thesis using split_req(3) using assms by fastforce qed moreover have "sorted_less (inorder_list ((sub,sep)#rs) @ inorder t)" using assms sorted_wrt_append split_conc by fastforce ultimately show ?thesis using isin_sorted[of "inorder sub" "sep" "inorder_list rs @ inorder t" x] by simp qed theorem isin_set_inorder: "sorted_less (inorder t) \ isin t x = (x \ set (inorder t))" proof(induction t x rule: isin.induct) case (2 ts t x) then obtain ls rs where list_split: "split ts x = (ls, rs)" by (meson surj_pair) - then have list_conc: "ts = ls @ rs" + then have list_conc: "ts = ls @ rs" using split_conc by auto show ?case proof (cases rs) case Nil then have "isin (Node ts t) x = isin t x" by (simp add: list_split) also have "\ = (x \ set (inorder t))" using "2.IH"(1) list_split Nil using "2.prems" sorted_inorder_induct_last by auto also have "\ = (x \ set (inorder (Node ts t)))" using isin_sorted_split[of ts t x ls rs] using "2.prems" list_split list_conc Nil by simp finally show ?thesis . next case (Cons a list) then obtain sub sep where a_split: "a = (sub,sep)" by (cases a) then show ?thesis proof (cases "x = sep") case True then show ?thesis using list_conc Cons a_split list_split by auto next case False then have "isin (Node ts t) x = isin sub x" using list_split Cons a_split False by auto also have "\ = (x \ set (inorder sub))" using "2.IH"(2) using "2.prems" False a_split list_conc list_split local.Cons sorted_inorder_induct_subtree by fastforce also have "\ = (x \ set (inorder (Node ts t)))" using isin_sorted_split[OF "2.prems" list_split] using isin_sorted_split_right "2.prems" list_split Cons a_split False by simp finally show ?thesis . qed qed qed auto (* TODO way to use this for custom case distinction? 
*) lemma node\<^sub>i_cases: "length xs \ k \ (\ls sub sep rs. split_half xs = (ls,(sub,sep)#rs))" proof - have "\ length xs \ k \ length xs \ 1" by linarith then show ?thesis using split_half_not_empty by blast qed lemma root_order_tree\<^sub>i: "root_order_up\<^sub>i (Suc k) t = root_order (Suc k) (tree\<^sub>i t)" apply (cases t) apply auto done lemma node\<^sub>i_root_order: assumes "length ts > 0" and "length ts \ 4*k+1" and "\x \ set (subtrees ts). order k x" and "order k t" shows "root_order_up\<^sub>i k (node\<^sub>i k ts t)" proof (cases "length ts \ 2*k") case True then show ?thesis using assms by (simp add: node\<^sub>i.simps) next case False - then obtain ls sub sep rs where split_half_ts: + then obtain ls sub sep rs where split_half_ts: "take (length ts div 2) ts = ls" - "drop (length ts div 2) ts = (sub,sep)#rs" + "drop (length ts div 2) ts = (sub,sep)#rs" using split_half_not_empty[of ts] by auto then have length_rs: "length rs = length ts - (length ts div 2) - 1" using length_drop by (metis One_nat_def add_diff_cancel_right' list.size(4)) also have "\ \ 4*k - ((4*k + 1) div 2)" using assms(2) by simp also have "\ = 2*k" by auto finally have "length rs \ 2*k" by simp - moreover have "length rs \ k" + moreover have "length rs \ k" using False length_rs by simp moreover have "set ((sub,sep)#rs) \ set ts" by (metis split_half_ts(2) set_drop_subset) ultimately have o_r: "order k sub" "order k (Node rs t)" using split_half_ts assms by auto moreover have "length ls \ k" using length_take assms split_half_ts False by auto moreover have "length ls \ 2*k" using assms(2) split_half_ts by auto ultimately have o_l: "order k (Node ls sub)" using set_take_subset assms split_half_ts by fastforce from o_r o_l show ?thesis by (simp add: node\<^sub>i.simps False split_half_ts) qed lemma node\<^sub>i_order_helper: assumes "length ts \ k" and "length ts \ 4*k+1" and "\x \ set (subtrees ts). order k x" and "order k t" shows "case (node\<^sub>i k ts t) of T\<^sub>i t \ order k t | _ \ True" proof (cases "length ts \ 2*k") case True then show ?thesis using assms by (simp add: node\<^sub>i.simps) next case False - then obtain sub sep rs where - "drop (length ts div 2) ts = (sub,sep)#rs" + then obtain sub sep rs where + "drop (length ts div 2) ts = (sub,sep)#rs" using split_half_not_empty[of ts] by auto then show ?thesis using assms by (simp add: node\<^sub>i.simps) qed lemma node\<^sub>i_order: assumes "length ts \ k" and "length ts \ 4*k+1" and "\x \ set (subtrees ts). 
order k x" and "order k t" shows "order_up\<^sub>i k (node\<^sub>i k ts t)" apply(cases "node\<^sub>i k ts t") using node\<^sub>i_root_order node\<^sub>i_order_helper assms apply fastforce apply (metis node\<^sub>i_root_order assms(2,3,4) le0 length_greater_0_conv list.size(3) node\<^sub>i.simps order_up\<^sub>i.simps(2) root_order_up\<^sub>i.simps(2) up\<^sub>i.distinct(1)) done (* explicit proof *) -lemma ins_order: +lemma ins_order: "order k t \ order_up\<^sub>i k (ins k x t)" proof(induction k x t rule: ins.induct) case (2 k x ts t) then obtain ls rs where split_res: "split ts x = (ls, rs)" by (meson surj_pair) then have split_app: "ls@rs = ts" using split_conc by simp show ?case proof (cases rs) case Nil then have "order_up\<^sub>i k (ins k x t)" using 2 split_res by simp then show ?thesis using Nil 2 split_app split_res Nil node\<^sub>i_order by (auto split!: up\<^sub>i.splits simp del: node\<^sub>i.simps) next case (Cons a list) then obtain sub sep where a_prod: "a = (sub, sep)" by (cases a) then show ?thesis proof (cases "x = sep") case True then show ?thesis using 2 a_prod Cons split_res by simp next case False then have "order_up\<^sub>i k (ins k x sub)" using "2.IH"(2) "2.prems" a_prod local.Cons split_app split_res by auto then show ?thesis using 2 split_app Cons length_append node\<^sub>i_order a_prod split_res by (auto split!: up\<^sub>i.splits simp del: node\<^sub>i.simps simp add: order_impl_root_order) qed qed qed simp (* notice this is almost a duplicate of ins_order *) -lemma ins_root_order: +lemma ins_root_order: assumes "root_order k t" shows "root_order_up\<^sub>i k (ins k x t)" proof(cases t) case (Node ts t) then obtain ls rs where split_res: "split ts x = (ls, rs)" by (meson surj_pair) then have split_app: "ls@rs = ts" using split_conc by fastforce show ?thesis proof (cases rs) case Nil then have "order_up\<^sub>i k (ins k x t)" using Node assms split_res by (simp add: ins_order) then show ?thesis using Nil Node split_app split_res assms node\<^sub>i_root_order by (auto split!: up\<^sub>i.splits simp del: node\<^sub>i.simps simp add: order_impl_root_order) next case (Cons a list) then obtain sub sep where a_prod: "a = (sub, sep)" by (cases a) then show ?thesis proof (cases "x = sep") case True then show ?thesis using assms Node a_prod Cons split_res by simp next case False then have "order_up\<^sub>i k (ins k x sub)" using Node a_prod assms ins_order local.Cons split_app by auto then show ?thesis using assms split_app Cons length_append Node node\<^sub>i_root_order a_prod split_res by (auto split!: up\<^sub>i.splits simp del: node\<^sub>i.simps simp add: order_impl_root_order) qed qed qed simp lemma height_list_split: "height_up\<^sub>i (Up\<^sub>i (Node ls a) b (Node rs t)) = height (Node (ls@(a,b)#rs) t) " by (induction ls) (auto simp add: max.commute) lemma node\<^sub>i_height: "height_up\<^sub>i (node\<^sub>i k ts t) = height (Node ts t)" proof(cases "length ts \ 2*k") case False then obtain ls sub sep rs where split_half_ts: "split_half ts = (ls, (sub, sep) # rs)" by (meson node\<^sub>i_cases) then have "node\<^sub>i k ts t = Up\<^sub>i (Node ls (sub)) sep (Node rs t)" using False by simp then show ?thesis using split_half_ts by (metis append_take_drop_id fst_conv height_list_split snd_conv split_half.elims) qed simp lemma bal_up\<^sub>i_tree: "bal_up\<^sub>i t = bal (tree\<^sub>i t)" apply(cases t) apply auto done lemma bal_list_split: "bal (Node (ls@(a,b)#rs) t) \ bal_up\<^sub>i (Up\<^sub>i (Node ls a) b (Node rs t))" by (auto simp add: image_constant_conv) 
lemma node\<^sub>i_bal: assumes "bal (Node ts t)" shows "bal_up\<^sub>i (node\<^sub>i k ts t)" using assms proof(cases "length ts \ 2*k") case False then obtain ls sub sep rs where split_half_ts: "split_half ts = (ls, (sub, sep) # rs)" by (meson node\<^sub>i_cases) then have "bal (Node (ls@(sub,sep)#rs) t)" using assms append_take_drop_id[where n="length ts div 2" and xs=ts] by auto then show ?thesis using split_half_ts assms False by (auto simp del: bal.simps bal_up\<^sub>i.simps dest!: bal_list_split[of ls sub sep rs t]) qed simp lemma height_up\<^sub>i_merge: "height_up\<^sub>i (Up\<^sub>i l a r) = height t \ height (Node (ls@(t,x)#rs) tt) = height (Node (ls@(l,a)#(r,x)#rs) tt)" by simp lemma ins_height: "height_up\<^sub>i (ins k x t) = height t" proof(induction k x t rule: ins.induct) case (2 k x ts t) then obtain ls rs where split_list: "split ts x = (ls,rs)" by (meson surj_pair) then have split_append: "ls@rs = ts" using split_conc by auto then show ?case proof (cases rs) case Nil then have height_sub: "height_up\<^sub>i (ins k x t) = height t" using 2 by (simp add: split_list) then show ?thesis proof (cases "ins k x t") case (T\<^sub>i a) then have "height (Node ts t) = height (Node ts a)" using height_sub by simp then show ?thesis using T\<^sub>i Nil split_list split_append by simp next case (Up\<^sub>i l a r) then have "height (Node ls t) = height (Node (ls@[(l,a)]) r)" using height_btree_order height_sub by (induction ls) auto then show ?thesis using 2 Nil split_list Up\<^sub>i split_append by (simp del: node\<^sub>i.simps add: node\<^sub>i_height) qed next case (Cons a list) then obtain sub sep where a_split: "a = (sub,sep)" by (cases a) then show ?thesis proof (cases "x = sep") case True then show ?thesis using Cons a_split 2 split_list by (simp del: height_btree.simps) next case False then have height_sub: "height_up\<^sub>i (ins k x sub) = height sub" by (metis "2.IH"(2) a_split Cons split_list) then show ?thesis proof (cases "ins k x sub") case (T\<^sub>i a) then have "height a = height sub" using height_sub by auto then have "height (Node (ls@(sub,sep)#rs) t) = height (Node (ls@(a,sep)#rs) t)" by auto - then show ?thesis + then show ?thesis using T\<^sub>i height_sub False Cons 2 split_list a_split split_append by (auto simp add: image_Un max.commute finite_set_ins_swap) next case (Up\<^sub>i l a r) then have "height (Node (ls@(sub,sep)#list) t) = height (Node (ls@(l,a)#(r,sep)#list) t)" using height_up\<^sub>i_merge height_sub by fastforce then show ?thesis using Up\<^sub>i False Cons 2 split_list a_split split_append by (auto simp del: node\<^sub>i.simps simp add: node\<^sub>i_height image_Un max.commute finite_set_ins_swap) qed qed qed qed simp (* the below proof is overly complicated as a number of lemmas regarding height are missing *) lemma ins_bal: "bal t \ bal_up\<^sub>i (ins k x t)" proof(induction k x t rule: ins.induct) case (2 k x ts t) then obtain ls rs where split_res: "split ts x = (ls, rs)" by (meson surj_pair) then have split_app: "ls@rs = ts" using split_conc by fastforce show ?case proof (cases rs) case Nil - then show ?thesis + then show ?thesis proof (cases "ins k x t") case (T\<^sub>i a) then have "bal (Node ls a)" unfolding bal.simps by (metis "2.IH"(1) "2.prems" append_Nil2 bal.simps(2) bal_up\<^sub>i.simps(1) height_up\<^sub>i.simps(1) ins_height local.Nil split_app split_res) - then show ?thesis + then show ?thesis using Nil T\<^sub>i 2 split_res by simp next case (Up\<^sub>i l a r) - then have + then have "(\x\set (subtrees (ls@[(l,a)])). 
bal x)" "(\x\set (subtrees ls). height r = height x)" using 2 Up\<^sub>i Nil split_res split_app by simp_all (metis height_up\<^sub>i.simps(2) ins_height max_def) then show ?thesis unfolding ins.simps using Up\<^sub>i Nil 2 split_res by (simp del: node\<^sub>i.simps add: node\<^sub>i_bal) qed next case (Cons a list) then obtain sub sep where a_prod: "a = (sub, sep)" by (cases a) then show ?thesis proof (cases "x = sep") case True then show ?thesis using a_prod 2 split_res Cons by simp next case False then have "bal_up\<^sub>i (ins k x sub)" using 2 split_res using a_prod local.Cons split_app by auto show ?thesis proof (cases "ins k x sub") case (T\<^sub>i x1) then have "height x1 = height t" by (metis "2.prems" a_prod add_diff_cancel_left' bal_split_left(1) bal_split_left(2) height_bal_tree height_up\<^sub>i.simps(1) ins_height local.Cons plus_1_eq_Suc split_app) then show ?thesis using split_app Cons T\<^sub>i 2 split_res a_prod by auto next case (Up\<^sub>i l a r) (* The only case where explicit reasoning is required - likely due to the insertion of 2 elements in the list *) then have "\x \ set (subtrees (ls@(l,a)#(r,sep)#list)). bal x" using Up\<^sub>i split_app Cons 2 \bal_up\<^sub>i (ins k x sub)\ by auto moreover have "\x \ set (subtrees (ls@(l,a)#(r,sep)#list)). height x = height t" using False Up\<^sub>i split_app Cons 2 \bal_up\<^sub>i (ins k x sub)\ ins_height split_res a_prod apply auto by (metis height_up\<^sub>i.simps(2) sup.idem sup_nat_def) ultimately show ?thesis using Up\<^sub>i Cons 2 split_res a_prod by (simp del: node\<^sub>i.simps add: node\<^sub>i_bal) qed qed qed qed simp (* ins acts as ins_list wrt inorder *) (* "simple enough" to be automatically solved *) lemma node\<^sub>i_inorder: "inorder_up\<^sub>i (node\<^sub>i k ts t) = inorder (Node ts t)" apply(cases "length ts \ 2*k") apply (auto split!: list.splits) (* we want to only transform in one direction here.. 
*) supply R = sym[OF append_take_drop_id, of "map _ ts" "(length ts div 2)"] thm R apply(subst R) apply (simp del: append_take_drop_id add: take_map drop_map) done corollary node\<^sub>i_inorder_simps: "node\<^sub>i k ts t = T\<^sub>i t' \ inorder t' = inorder (Node ts t)" "node\<^sub>i k ts t = Up\<^sub>i l a r \ inorder l @ a # inorder r = inorder (Node ts t)" apply (metis inorder_up\<^sub>i.simps(1) node\<^sub>i_inorder) by (metis append_Cons inorder_up\<^sub>i.simps(2) node\<^sub>i_inorder self_append_conv2) lemma ins_sorted_inorder: "sorted_less (inorder t) \ (inorder_up\<^sub>i (ins k (x::('a::linorder)) t)) = ins_list x (inorder t)" apply(induction k x t rule: ins.induct) using split_axioms apply (auto split!: prod.splits list.splits up\<^sub>i.splits simp del: node\<^sub>i.simps simp add: node\<^sub>i_inorder node\<^sub>i_inorder_simps) (* from here on we prefer an explicit proof, showing how to apply the IH *) oops (* specialize ins_list_sorted since it is cumbersome to express "inorder_list ts" as "xs @ [a]" and always having to use the implicit properties of split*) lemma ins_list_split: assumes "split ts x = (ls, rs)" and "sorted_less (inorder (Node ts t))" shows "ins_list x (inorder (Node ts t)) = inorder_list ls @ ins_list x (inorder_list rs @ inorder t)" proof (cases ls) case Nil then show ?thesis using assms by (auto dest!: split_conc) next case Cons then obtain ls' sub sep where ls_tail_split: "ls = ls' @ [(sub,sep)]" by (metis list.distinct(1) rev_exhaust surj_pair) moreover have "sep < x" using split_req(2)[of ts x ls' sub sep rs] using sorted_inorder_separators using assms(1) assms(2) ls_tail_split by auto moreover have "sorted_less (inorder_list ls)" using assms sorted_wrt_append split_conc by fastforce ultimately show ?thesis using assms(2) split_conc[OF assms(1)] using ins_list_sorted[of "inorder_list ls' @ inorder sub" sep] by auto qed lemma ins_list_split_right_general: assumes "split ts x = (ls, (sub,sep)#rs)" and "sorted_less (inorder_list ts)" and "sep \ x" shows "ins_list x (inorder_list ((sub,sep)#rs) @ zs) = ins_list x (inorder sub) @ sep # inorder_list rs @ zs" proof - from assms have "x < sep" proof - from assms have "sorted_less (separators ts)" by (simp add: sorted_inorder_list_separators) then show ?thesis using split_req(3) using assms by fastforce qed moreover have "sorted_less (inorder_pair (sub,sep))" by (metis (no_types, lifting) assms(1) assms(2) concat.simps(2) concat_append list.simps(9) map_append sorted_wrt_append split_conc) ultimately show ?thesis using ins_list_sorted[of "inorder sub" "sep"] by auto qed (* this fits the actual use cases better *) corollary ins_list_split_right: assumes "split ts x = (ls, (sub,sep)#rs)" and "sorted_less (inorder (Node ts t))" and "sep \ x" shows "ins_list x (inorder_list ((sub,sep)#rs) @ inorder t) = ins_list x (inorder sub) @ sep # inorder_list rs @ inorder t" using assms sorted_wrt_append split.ins_list_split_right_general split_axioms by fastforce (* a simple lemma, missing from the standard as of now *) lemma ins_list_idem_eq_isin: "sorted_less xs \ x \ set xs \ (ins_list x xs = xs)" apply(induction xs) apply auto done lemma ins_list_contains_idem: "\sorted_less xs; x \ set xs\ \ (ins_list x xs = xs)" using ins_list_idem_eq_isin by auto declare node\<^sub>i.simps [simp del] -declare node\<^sub>i_inorder [simp add] +declare node\<^sub>i_inorder [simp add] lemma ins_inorder: "sorted_less (inorder t) \ (inorder_up\<^sub>i (ins k x t)) = ins_list x (inorder t)" proof(induction k x t rule: ins.induct) case (1 
k x) then show ?case by auto next case (2 k x ts t) then obtain ls rs where list_split: "split ts x = (ls,rs)" by (cases "split ts x") then have list_conc: "ts = ls@rs" using split.split_conc split_axioms by blast then show ?case proof (cases rs) case Nil then show ?thesis proof (cases "ins k x t") case (T\<^sub>i a) then have IH:"inorder a = ins_list x (inorder t)" using "2.IH"(1) "2.prems" list_split local.Nil sorted_inorder_induct_last by auto have "inorder_up\<^sub>i (ins k x (Node ts t)) = inorder_list ls @ inorder a" using list_split T\<^sub>i Nil by (auto simp add: list_conc) also have "\ = inorder_list ls @ (ins_list x (inorder t))" by (simp add: IH) also have "\ = ins_list x (inorder (Node ts t))" using ins_list_split using "2.prems" list_split Nil by auto finally show ?thesis . next case (Up\<^sub>i l a r) then have IH:"inorder_up\<^sub>i (Up\<^sub>i l a r) = ins_list x (inorder t)" using "2.IH"(1) "2.prems" list_split local.Nil sorted_inorder_induct_last by auto have "inorder_up\<^sub>i (ins k x (Node ts t)) = inorder_list ls @ inorder_up\<^sub>i (Up\<^sub>i l a r)" using list_split Up\<^sub>i Nil by (auto simp add: list_conc) also have "\ = inorder_list ls @ ins_list x (inorder t)" using IH by simp also have "\ = ins_list x (inorder (Node ts t))" using ins_list_split using "2.prems" list_split local.Nil by auto finally show ?thesis . qed next case (Cons h list) then obtain sub sep where h_split: "h = (sub,sep)" by (cases h) then have sorted_inorder_sub: "sorted_less (inorder sub)" using "2.prems" list_conc local.Cons sorted_inorder_induct_subtree by fastforce then show ?thesis proof(cases "x = sep") case True then have "x \ set (inorder (Node ts t))" using list_conc h_split Cons by simp then have "ins_list x (inorder (Node ts t)) = inorder (Node ts t)" using "2.prems" ins_list_contains_idem by blast also have "\ = inorder_up\<^sub>i (ins k x (Node ts t))" using list_split h_split Cons True by auto finally show ?thesis by simp next case False then show ?thesis proof (cases "ins k x sub") case (T\<^sub>i a) then have IH:"inorder a = ins_list x (inorder sub)" using "2.IH"(2) "2.prems" list_split Cons sorted_inorder_sub h_split False by auto have "inorder_up\<^sub>i (ins k x (Node ts t)) = inorder_list ls @ inorder a @ sep # inorder_list list @ inorder t" using h_split False list_split T\<^sub>i Cons by simp also have "\ = inorder_list ls @ ins_list x (inorder sub) @ sep # inorder_list list @ inorder t" using IH by simp also have "\ = ins_list x (inorder (Node ts t))" using ins_list_split ins_list_split_right using list_split "2.prems" Cons h_split False by auto finally show ?thesis . next case (Up\<^sub>i l a r) then have IH:"inorder_up\<^sub>i (Up\<^sub>i l a r) = ins_list x (inorder sub)" using "2.IH"(2) False h_split list_split local.Cons sorted_inorder_sub by auto have "inorder_up\<^sub>i (ins k x (Node ts t)) = inorder_list ls @ inorder l @ a # inorder r @ sep # inorder_list list @ inorder t" using h_split False list_split Up\<^sub>i Cons by simp also have "\ = inorder_list ls @ ins_list x (inorder sub) @ sep # inorder_list list @ inorder t" using IH by simp also have "\ = ins_list x (inorder (Node ts t))" using ins_list_split ins_list_split_right using list_split "2.prems" Cons h_split False by auto finally show ?thesis . 
qed qed qed qed declare node\<^sub>i.simps [simp add] declare node\<^sub>i_inorder [simp del] thm ins.induct thm btree.induct (* wrapped up insert invariants *) lemma tree\<^sub>i_bal: "bal_up\<^sub>i u \ bal (tree\<^sub>i u)" apply(cases u) apply(auto) done lemma tree\<^sub>i_order: "\k > 0; root_order_up\<^sub>i k u\ \ root_order k (tree\<^sub>i u)" apply(cases u) apply(auto simp add: order_impl_root_order) done lemma tree\<^sub>i_inorder: "inorder_up\<^sub>i u = inorder (tree\<^sub>i u)" apply (cases u) apply auto done lemma insert_bal: "bal t \ bal (insert k x t)" using ins_bal by (simp add: tree\<^sub>i_bal) lemma insert_order: "\k > 0; root_order k t\ \ root_order k (insert k x t)" using ins_root_order by (simp add: tree\<^sub>i_order) lemma insert_inorder: "sorted_less (inorder t) \ inorder (insert k x t) = ins_list x (inorder t)" using ins_inorder by (simp add: tree\<^sub>i_inorder) text "Deletion proofs" thm list.simps lemma rebalance_middle_tree_height: assumes "height t = height sub" and "case rs of (rsub,rsep) # list \ height rsub = height t | [] \ True" shows "height (rebalance_middle_tree k ls sub sep rs t) = height (Node (ls@(sub,sep)#rs) t)" proof (cases "height t") case 0 then have "t = Leaf" "sub = Leaf" using height_Leaf assms by auto then show ?thesis by simp next case (Suc nat) then obtain tts tt where t_node: "t = Node tts tt" using height_Leaf by (cases t) simp then obtain mts mt where sub_node: "sub = Node mts mt" using assms by (cases sub) simp then show ?thesis proof (cases "length mts \ k \ length tts \ k") case False then show ?thesis proof (cases rs) case Nil then have "height_up\<^sub>i (node\<^sub>i k (mts@(mt,sep)#tts) tt) = height (Node (mts@(mt,sep)#tts) tt)" using node\<^sub>i_height by blast also have "\ = max (height t) (height sub)" by (metis assms(1) height_up\<^sub>i.simps(2) height_list_split sub_node t_node) finally have height_max: "height_up\<^sub>i (node\<^sub>i k (mts @ (mt, sep) # tts) tt) = max (height t) (height sub)" by simp then show ?thesis proof (cases "node\<^sub>i k (mts@(mt,sep)#tts) tt") case (T\<^sub>i u) then have "height u = max (height t) (height sub)" using height_max by simp then have "height (Node ls u) = height (Node (ls@[(sub,sep)]) t)" by (induction ls) (auto simp add: max.commute) then show ?thesis using Nil False T\<^sub>i by (simp add: sub_node t_node) next case (Up\<^sub>i l a r) then have "height (Node (ls@[(sub,sep)]) t) = height (Node (ls@[(l,a)]) r)" using assms(1) height_max by (induction ls) auto then show ?thesis using Up\<^sub>i Nil sub_node t_node by auto qed next case (Cons a list) then obtain rsub rsep where a_split: "a = (rsub, rsep)" by (cases a) then obtain rts rt where r_node: "rsub = Node rts rt" using assms(2) Cons height_Leaf Suc by (cases rsub) simp_all then have "height_up\<^sub>i (node\<^sub>i k (mts@(mt,sep)#rts) rt) = height (Node (mts@(mt,sep)#rts) rt)" using node\<^sub>i_height by blast also have "\ = max (height rsub) (height sub)" by (metis r_node height_up\<^sub>i.simps(2) height_list_split max.commute sub_node) finally have height_max: "height_up\<^sub>i (node\<^sub>i k (mts @ (mt, sep) # rts) rt) = max (height rsub) (height sub)" by simp then show ?thesis proof (cases "node\<^sub>i k (mts@(mt,sep)#rts) rt") case (T\<^sub>i u) then have "height u = max (height rsub) (height sub)" using height_max by simp - then show ?thesis + then show ?thesis using T\<^sub>i False Cons r_node a_split sub_node t_node by auto next case (Up\<^sub>i l a r) then have height_max: "max (height l) (height r) = max 
(height rsub) (height sub)" using height_max by auto then show ?thesis using Cons a_split r_node Up\<^sub>i sub_node t_node by auto qed qed qed (simp add: sub_node t_node) qed lemma rebalance_last_tree_height: assumes "height t = height sub" and "ts = list@[(sub,sep)]" shows "height (rebalance_last_tree k ts t) = height (Node ts t)" using rebalance_middle_tree_height assms by auto lemma split_max_height: assumes "split_max k t = (sub,sep)" and "nonempty_lasttreebal t" and "t \ Leaf" shows "height sub = height t" using assms proof(induction t arbitrary: k sub sep) case Node1: (Node tts tt) then obtain ls tsub tsep where tts_split: "tts = ls@[(tsub,tsep)]" by auto then show ?case proof (cases tt) case Leaf then have "height (Node (ls@[(tsub,tsep)]) tt) = max (height (Node ls tsub)) (Suc (height tt))" using height_btree_last height_btree_order by metis moreover have "split_max k (Node tts tt) = (Node ls tsub, tsep)" using Leaf Node1 tts_split by auto ultimately show ?thesis using Leaf Node1 height_Leaf max_def by auto next case Node2: (Node l a) then obtain subsub subsep where sub_split: "split_max k tt = (subsub,subsep)" by (cases "split_max k tt") then have "height subsub = height tt" using Node1 Node2 by auto moreover have "split_max k (Node tts tt) = (rebalance_last_tree k tts subsub, subsep)" using Node1 Node2 tts_split sub_split by auto ultimately show ?thesis using rebalance_last_tree_height Node1 Node2 by auto qed qed auto lemma order_bal_nonempty_lasttreebal: "\k > 0; root_order k t; bal t\ \ nonempty_lasttreebal t" proof(induction k t rule: order.induct) case (2 k ts t) then have "length ts > 0" by auto then obtain ls tsub tsep where ts_split: "ts = (ls@[(tsub,tsep)])" by (metis eq_fst_iff length_greater_0_conv snoc_eq_iff_butlast) moreover have "height tsub = height t" using "2.prems"(3) ts_split by auto moreover have "nonempty_lasttreebal t" using 2 order_impl_root_order by auto ultimately show ?case by simp qed simp lemma bal_sub_height: "bal (Node (ls@a#rs) t) \ (case rs of [] \ True | (sub,sep)#_ \ height sub = height t)" by (cases rs) (auto) lemma del_height: "\k > 0; root_order k t; bal t\ \ height (del k x t) = height t" proof(induction k x t rule: del.induct) case (2 k x ts t) then obtain ls list where list_split: "split ts x = (ls, list)" by (cases "split ts x") then show ?case proof(cases list) case Nil then have "height (del k x t) = height t" using 2 list_split order_bal_nonempty_lasttreebal by (simp add: order_impl_root_order) moreover obtain lls sub sep where "ls = lls@[(sub,sep)]" using split_conc 2 list_split Nil by (metis append_Nil2 nonempty_lasttreebal.simps(2) order_bal_nonempty_lasttreebal) moreover have "Node ls t = Node ts t" using split_conc Nil list_split by auto ultimately show ?thesis - using rebalance_last_tree_height 2 list_split Nil split_conc + using rebalance_last_tree_height 2 list_split Nil split_conc by (auto simp add: max.assoc sup_nat_def max_def) next case (Cons a rs) then have rs_height: "case rs of [] \ True | (rsub,rsep)#_ \ height rsub = height t" (* notice the difference if rsub and t are switched *) using "2.prems"(3) bal_sub_height list_split split_conc by blast from Cons obtain sub sep where a_split: "a = (sub,sep)" by (cases a) consider (sep_n_x) "sep \ x" | - (sep_x_Leaf) "sep = x \ sub = Leaf" | + (sep_x_Leaf) "sep = x \ sub = Leaf" | (sep_x_Node) "sep = x \ (\ts t. 
sub = Node ts t)" using btree.exhaust by blast then show ?thesis proof cases case sep_n_x have height_t_sub: "height t = height sub" using "2.prems"(3) a_split list_split local.Cons split.split_set(1) split_axioms by fastforce have height_t_del: "height (del k x sub) = height t" by (metis "2.IH"(2) "2.prems"(1) "2.prems"(2) "2.prems"(3) a_split bal.simps(2) list_split local.Cons order_impl_root_order root_order.simps(2) sep_n_x some_child_sub(1) split_set(1)) then have "height (rebalance_middle_tree k ls (del k x sub) sep rs t) = height (Node (ls@((del k x sub),sep)#rs) t)" using rs_height rebalance_middle_tree_height by simp also have "\ = height (Node (ls@(sub,sep)#rs) t)" using height_t_sub "2.prems" height_t_del by auto also have "\ = height (Node ts t)" using 2 a_split sep_n_x list_split Cons split_set(1) split_conc by auto finally show ?thesis using sep_n_x Cons a_split list_split 2 by simp next case sep_x_Leaf then have "height (Node ts t) = height (Node (ls@rs) t)" using bal_split_last(2) "2.prems"(3) a_split list_split Cons split_conc by metis then show ?thesis using a_split list_split Cons sep_x_Leaf 2 by auto next case sep_x_Node then obtain sts st where sub_node: "sub = Node sts st" by blast obtain sub_s max_s where sub_split: "split_max k sub = (sub_s, max_s)" by (cases "split_max k sub") then have "height sub_s = height t" by (metis "2.prems"(1) "2.prems"(2) "2.prems"(3) a_split bal.simps(2) btree.distinct(1) list_split Cons order_bal_nonempty_lasttreebal order_impl_root_order root_order.simps(2) some_child_sub(1) split_set(1) split_max_height sub_node) then have "height (rebalance_middle_tree k ls sub_s max_s rs t) = height (Node (ls@(sub_s,sep)#rs) t)" using rs_height rebalance_middle_tree_height by simp also have "\ = height (Node ts t)" using 2 a_split sep_x_Node list_split Cons split_set(1) \height sub_s = height t\ by (auto simp add: split_conc[of ts]) finally show ?thesis using sep_x_Node Cons a_split list_split 2 sub_node sub_split by auto qed qed qed simp (* proof for inorders *) (* note: this works (as it should, since there is not even recursion involved) automatically. 
*yay* *) lemma rebalance_middle_tree_inorder: assumes "height t = height sub" and "case rs of (rsub,rsep) # list \ height rsub = height t | [] \ True" shows "inorder (rebalance_middle_tree k ls sub sep rs t) = inorder (Node (ls@(sub,sep)#rs) t)" apply(cases sub; cases t) - using assms + using assms apply (auto split!: btree.splits up\<^sub>i.splits list.splits simp del: node\<^sub>i.simps simp add: node\<^sub>i_inorder_simps ) done lemma rebalance_last_tree_inorder: assumes "height t = height sub" and "ts = list@[(sub,sep)]" shows "inorder (rebalance_last_tree k ts t) = inorder (Node ts t)" using rebalance_middle_tree_inorder assms by auto lemma butlast_inorder_app_id: "xs = xs' @ [(sub,sep)] \ inorder_list xs' @ inorder sub @ [sep] = inorder_list xs" by simp lemma split_max_inorder: assumes "nonempty_lasttreebal t" and "t \ Leaf" shows "inorder_pair (split_max k t) = inorder t" - using assms + using assms proof (induction k t rule: split_max.induct) case (1 k ts t) then show ?case proof (cases t) case Leaf then have "ts = butlast ts @ [last ts]" using "1.prems"(1) by auto moreover obtain sub sep where "last ts = (sub,sep)" by fastforce ultimately show ?thesis - using Leaf + using Leaf apply (auto split!: prod.splits btree.splits) by (simp add: butlast_inorder_app_id) next case (Node tts tt) then have IH: "inorder_pair (split_max k t) = inorder t" using "1.IH" "1.prems"(1) by auto obtain sub sep where split_sub_sep: "split_max k t = (sub,sep)" by fastforce then have height_sub: "height sub = height t" by (metis "1.prems"(1) Node btree.distinct(1) nonempty_lasttreebal.simps(2) split_max_height) have "inorder_pair (split_max k (Node ts t)) = inorder (rebalance_last_tree k ts sub) @ [sep]" using Node 1 split_sub_sep by auto also have "\ = inorder_list ts @ inorder sub @ [sep]" using rebalance_last_tree_inorder height_sub "1.prems" by (auto simp del: rebalance_last_tree.simps) also have "\ = inorder (Node ts t)" using IH split_sub_sep by simp finally show ?thesis . qed qed simp lemma height_bal_subtrees_merge: "\height (Node as a) = height (Node bs b); bal (Node as a); bal (Node bs b)\ \ \x \ set (subtrees as) \ {a}. height x = height b" by (metis Suc_inject Un_iff bal.simps(2) height_bal_tree singletonD) -lemma bal_list_merge: +lemma bal_list_merge: assumes "bal_up\<^sub>i (Up\<^sub>i (Node as a) x (Node bs b))" shows "bal (Node (as@(a,x)#bs) b)" proof - have "\x\set (subtrees (as @ (a, x) # bs)). bal x" using subtrees_split assms by auto moreover have "bal b" using assms by auto moreover have "\x\set (subtrees as) \ {a} \ set (subtrees bs). 
height x = height b" using assms height_bal_subtrees_merge unfolding bal_up\<^sub>i.simps by blast ultimately show ?thesis by auto qed -lemma node\<^sub>i_bal_up\<^sub>i: +lemma node\<^sub>i_bal_up\<^sub>i: assumes "bal_up\<^sub>i (node\<^sub>i k ts t)" shows "bal (Node ts t)" using assms proof(cases "length ts \ 2*k") case False then obtain ls sub sep rs where split_list: "split_half ts = (ls, (sub,sep)#rs)" using node\<^sub>i_cases by blast then have "node\<^sub>i k ts t = Up\<^sub>i (Node ls sub) sep (Node rs t)" using False by auto moreover have "ts = ls@(sub,sep)#rs" by (metis append_take_drop_id fst_conv local.split_list snd_conv split_half.elims) ultimately show ?thesis using bal_list_merge[of ls sub sep rs t] assms by (simp del: bal.simps bal_up\<^sub>i.simps) qed simp lemma node\<^sub>i_bal_simp: "bal_up\<^sub>i (node\<^sub>i k ts t) = bal (Node ts t)" using node\<^sub>i_bal node\<^sub>i_bal_up\<^sub>i by blast lemma rebalance_middle_tree_bal: "bal (Node (ls@(sub,sep)#rs) t) \ bal (rebalance_middle_tree k ls sub sep rs t)" proof (cases t) case t_node: (Node tts tt) assume assms: "bal (Node (ls @ (sub, sep) # rs) t)" then obtain mts mt where sub_node: "sub = Node mts mt" by (cases sub) (auto simp add: t_node) have sub_heights: "height sub = height t" "bal sub" "bal t" using assms by auto show ?thesis proof (cases "length mts \ k \ length tts \ k") case True then show ?thesis using t_node sub_node assms by (auto simp del: bal.simps) next case False then show ?thesis proof (cases rs) case Nil have "height_up\<^sub>i (node\<^sub>i k (mts@(mt,sep)#tts) tt) = height (Node (mts@(mt,sep)#tts) tt)" using node\<^sub>i_height by blast also have "\ = Suc (height tt)" by (metis height_bal_tree height_up\<^sub>i.simps(2) height_list_split max.idem sub_heights(1) sub_heights(3) sub_node t_node) also have "\ = height t" using height_bal_tree sub_heights(3) t_node by fastforce finally have "height_up\<^sub>i (node\<^sub>i k (mts@(mt,sep)#tts) tt) = height t" by simp moreover have "bal_up\<^sub>i (node\<^sub>i k (mts@(mt,sep)#tts) tt)" by (metis bal_list_merge bal_up\<^sub>i.simps(2) node\<^sub>i_bal sub_heights(1) sub_heights(2) sub_heights(3) sub_node t_node) ultimately show ?thesis apply (cases "node\<^sub>i k (mts@(mt,sep)#tts) tt") using assms Nil sub_node t_node by auto next case (Cons r rs) then obtain rsub rsep where r_split: "r = (rsub,rsep)" by (cases r) then have rsub_height: "height rsub = height t" "bal rsub" using assms Cons by auto then obtain rts rt where r_node: "rsub = (Node rts rt)" apply(cases rsub) using t_node by simp have "height_up\<^sub>i (node\<^sub>i k (mts@(mt,sep)#rts) rt) = height (Node (mts@(mt,sep)#rts) rt)" using node\<^sub>i_height by blast also have "\ = Suc (height rt)" by (metis Un_iff \height rsub = height t\ assms bal.simps(2) bal_split_last(1) height_bal_tree height_up\<^sub>i.simps(2) height_list_split list.set_intros(1) Cons max.idem r_node r_split set_append some_child_sub(1) sub_heights(1) sub_node) also have "\ = height rsub" using height_bal_tree r_node rsub_height(2) by fastforce finally have 1: "height_up\<^sub>i (node\<^sub>i k (mts@(mt,sep)#rts) rt) = height rsub" . 
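(* Sanity check of the height bookkeeping above, not needed for the proof: sub = Node mts mt
   and its right neighbour rsub = Node rts rt are balanced and of equal height, so every
   subtree of the merged node Node (mts@(mt,sep)#rts) rt has the height of rt; hence the
   merged node, and by node\<^sub>i_height also node\<^sub>i applied to it, has height
   Suc (height rt) = height rsub, which is exactly what fact "1" records. *)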
moreover have 2: "bal_up\<^sub>i (node\<^sub>i k (mts@(mt,sep)#rts) rt)" by (metis bal_list_merge bal_up\<^sub>i.simps(2) node\<^sub>i_bal r_node rsub_height(1) rsub_height(2) sub_heights(1) sub_heights(2) sub_node) ultimately show ?thesis proof (cases "node\<^sub>i k (mts@(mt,sep)#rts) rt") case (T\<^sub>i u) then have "bal (Node (ls@(u,rsep)#rs) t)" using 1 2 Cons assms t_node subtrees_split sub_heights r_split rsub_height unfolding bal.simps by (auto simp del: height_btree.simps) then show ?thesis using Cons assms t_node sub_node r_split r_node False T\<^sub>i by (auto simp del: node\<^sub>i.simps bal.simps) next case (Up\<^sub>i l a r) then have "bal (Node (ls@(l,a)#(r,rsep)#rs) t)" using 1 2 Cons assms t_node subtrees_split sub_heights r_split rsub_height unfolding bal.simps by (auto simp del: height_btree.simps) then show ?thesis using Cons assms t_node sub_node r_split r_node False Up\<^sub>i by (auto simp del: node\<^sub>i.simps bal.simps) qed qed qed qed (simp add: height_Leaf) lemma rebalance_last_tree_bal: "\bal (Node ts t); ts \ []\ \ bal (rebalance_last_tree k ts t)" using rebalance_middle_tree_bal append_butlast_last_id[of ts] - apply(cases "last ts") + apply(cases "last ts") apply(auto simp del: bal.simps rebalance_middle_tree.simps) done -lemma split_max_bal: +lemma split_max_bal: assumes "bal t" - and "t \ Leaf" + and "t \ Leaf" and "nonempty_lasttreebal t" shows "bal (fst (split_max k t))" using assms proof(induction k t rule: split_max.induct) case (1 k ts t) then show ?case proof (cases t) case Leaf then obtain sub sep where last_split: "last ts = (sub,sep)" using 1 by auto then have "height sub = height t" using 1 by auto then have "bal (Node (butlast ts) sub)" using 1 last_split by auto then show ?thesis using 1 Leaf last_split by auto next case (Node tts tt) then obtain sub sep where t_split: "split_max k t = (sub,sep)" by (cases "split_max k t") then have "height sub = height t" using 1 Node by (metis btree.distinct(1) nonempty_lasttreebal.simps(2) split_max_height) moreover have "bal sub" using "1.IH" "1.prems"(1) "1.prems"(3) Node t_split by fastforce ultimately have "bal (Node ts sub)" using 1 t_split Node by auto then show ?thesis using rebalance_last_tree_bal t_split Node 1 by (auto simp del: bal.simps rebalance_middle_tree.simps) qed qed simp -lemma del_bal: +lemma del_bal: assumes "k > 0" and "root_order k t" and "bal t" shows "bal (del k x t)" using assms proof(induction k x t rule: del.induct) case (2 k x ts t) then obtain ls rs where list_split: "split ts x = (ls,rs)" by (cases "split ts x") then show ?case proof (cases rs) case Nil then have "bal (del k x t)" using 2 list_split by (simp add: order_impl_root_order) moreover have "height (del k x t) = height t" using 2 del_height by (simp add: order_impl_root_order) moreover have "ts \ []" using 2 by auto ultimately have "bal (rebalance_last_tree k ts (del k x t))" using 2 Nil order_bal_nonempty_lasttreebal rebalance_last_tree_bal by simp - then have "bal (rebalance_last_tree k ls (del k x t))" + then have "bal (rebalance_last_tree k ls (del k x t))" using list_split split_conc Nil by fastforce then show ?thesis using 2 list_split Nil by auto next case (Cons r rs) then obtain sub sep where r_split: "r = (sub,sep)" by (cases r) then have sub_height: "height sub = height t" "bal sub" using 2 Cons list_split split_set(1) by fastforce+ consider (sep_n_x) "sep \ x" | - (sep_x_Leaf) "sep = x \ sub = Leaf" | + (sep_x_Leaf) "sep = x \ sub = Leaf" | (sep_x_Node) "sep = x \ (\ts t. 
sub = Node ts t)" using btree.exhaust by blast then show ?thesis proof cases case sep_n_x then have "bal (del k x sub)" "height (del k x sub) = height sub" using sub_height apply (metis "2.IH"(2) "2.prems"(1) "2.prems"(2) list_split local.Cons order_impl_root_order r_split root_order.simps(2) some_child_sub(1) split_set(1)) by (metis "2.prems"(1) "2.prems"(2) list_split Cons order_impl_root_order r_split root_order.simps(2) some_child_sub(1) del_height split_set(1) sub_height(2)) moreover have "bal (Node (ls@(sub,sep)#rs) t)" using "2.prems"(3) list_split Cons r_split split_conc by blast ultimately have "bal (Node (ls@(del k x sub,sep)#rs) t)" using bal_substitute_subtree[of ls sub sep rs t "del k x sub"] by metis then have "bal (rebalance_middle_tree k ls (del k x sub) sep rs t)" using rebalance_middle_tree_bal[of ls "del k x sub" sep rs t k] by metis then show ?thesis using 2 list_split Cons r_split sep_n_x by auto next case sep_x_Leaf moreover have "bal (Node (ls@rs) t)" using bal_split_last(1) list_split split_conc r_split by (metis "2.prems"(3) Cons) ultimately show ?thesis using 2 list_split Cons r_split by auto next case sep_x_Node then obtain sts st where sub_node: "sub = Node sts st" by auto then obtain sub_s max_s where sub_split: "split_max k sub = (sub_s, max_s)" by (cases "split_max k sub") then have "height sub_s = height sub" using split_max_height by (metis "2.prems"(1) "2.prems"(2) btree.distinct(1) list_split Cons order_bal_nonempty_lasttreebal order_impl_root_order r_split root_order.simps(2) some_child_sub(1) split_set(1) sub_height(2) sub_node) moreover have "bal sub_s" using split_max_bal by (metis "2.prems"(1) "2.prems"(2) btree.distinct(1) fst_conv list_split local.Cons order_bal_nonempty_lasttreebal order_impl_root_order r_split root_order.simps(2) some_child_sub(1) split_set(1) sub_height(2) sub_node sub_split) moreover have "bal (Node (ls@(sub,sep)#rs) t)" using "2.prems"(3) list_split Cons r_split split_conc by blast ultimately have "bal (Node (ls@(sub_s,sep)#rs) t)" using bal_substitute_subtree[of ls sub sep rs t "sub_s"] by metis then have "bal (Node (ls@(sub_s,max_s)#rs) t)" using bal_substitute_separator by metis then have "bal (rebalance_middle_tree k ls sub_s max_s rs t)" using rebalance_middle_tree_bal[of ls sub_s max_s rs t k] by metis then show ?thesis using 2 list_split Cons r_split sep_x_Node sub_node sub_split by auto qed qed qed simp lemma rebalance_middle_tree_order: assumes "almost_order k sub" and "\s \ set (subtrees (ls@rs)). 
order k s" "order k t" and "case rs of (rsub,rsep) # list \ height rsub = height t | [] \ True" and "length (ls@(sub,sep)#rs) \ 2*k" and "height sub = height t" shows "almost_order k (rebalance_middle_tree k ls sub sep rs t)" proof(cases t) case Leaf then have "sub = Leaf" using height_Leaf assms by auto then show ?thesis using Leaf assms by auto next case t_node: (Node tts tt) then obtain mts mt where sub_node: "sub = Node mts mt" using assms by (cases sub) (auto) then show ?thesis proof(cases "length mts \ k \ length tts \ k") case True then have "order k sub" using assms by (simp add: sub_node) then show ?thesis using True t_node sub_node assms by auto next case False then show ?thesis proof (cases rs) case Nil have "order_up\<^sub>i k (node\<^sub>i k (mts@(mt,sep)#tts) tt)" using node\<^sub>i_order[of k "mts@(mt,sep)#tts" tt] assms(1,3) t_node sub_node by (auto simp del: order_up\<^sub>i.simps node\<^sub>i.simps) then show ?thesis apply(cases "node\<^sub>i k (mts@(mt,sep)#tts) tt") using assms t_node sub_node False Nil apply (auto simp del: node\<^sub>i.simps) done next case (Cons r rs) then obtain rsub rsep where r_split: "r = (rsub,rsep)" by (cases r) then have rsub_height: "height rsub = height t" using assms Cons by auto then obtain rts rt where r_node: "rsub = (Node rts rt)" apply(cases rsub) using t_node by simp have "order_up\<^sub>i k (node\<^sub>i k (mts@(mt,sep)#rts) rt)" using node\<^sub>i_order[of k "mts@(mt,sep)#rts" rt] assms(1,2) t_node sub_node r_node r_split Cons by (auto simp del: order_up\<^sub>i.simps node\<^sub>i.simps) - then show ?thesis + then show ?thesis apply(cases "node\<^sub>i k (mts@(mt,sep)#rts) rt") using assms t_node sub_node False Cons r_split r_node apply (auto simp del: node\<^sub>i.simps) done qed qed qed (* we have to proof the order invariant once for an underflowing last tree *) lemma rebalance_middle_tree_last_order: assumes "almost_order k t" and "\s \ set (subtrees (ls@(sub,sep)#rs)). order k s" and "rs = []" and "length (ls@(sub,sep)#rs) \ 2*k" and "height sub = height t" shows "almost_order k (rebalance_middle_tree k ls sub sep rs t)" proof (cases t) case Leaf then have "sub = Leaf" using height_Leaf assms by auto then show ?thesis using Leaf assms by auto next case t_node: (Node tts tt) then obtain mts mt where sub_node: "sub = Node mts mt" using assms by (cases sub) (auto) then show ?thesis proof(cases "length mts \ k \ length tts \ k") case True then have "order k sub" using assms by (simp add: sub_node) then show ?thesis using True t_node sub_node assms by auto next case False have "order_up\<^sub>i k (node\<^sub>i k (mts@(mt,sep)#tts) tt)" using node\<^sub>i_order[of k "mts@(mt,sep)#tts" tt] assms t_node sub_node by (auto simp del: order_up\<^sub>i.simps node\<^sub>i.simps) then show ?thesis apply(cases "node\<^sub>i k (mts@(mt,sep)#tts) tt") using assms t_node sub_node False Nil apply (auto simp del: node\<^sub>i.simps) done qed qed lemma rebalance_last_tree_order: assumes "ts = ls@[(sub,sep)]" and "\s \ set (subtrees (ts)). 
order k s" "almost_order k t" and "length ts \ 2*k" and "height sub = height t" shows "almost_order k (rebalance_last_tree k ts t)" using rebalance_middle_tree_last_order assms by auto -lemma split_max_order: +lemma split_max_order: assumes "order k t" and "t \ Leaf" and "nonempty_lasttreebal t" shows "almost_order k (fst (split_max k t))" using assms proof(induction k t rule: split_max.induct) case (1 k ts t) then obtain ls sub sep where ts_not_empty: "ts = ls@[(sub,sep)]" by auto then show ?case proof (cases t) case Leaf then show ?thesis using ts_not_empty 1 by auto next case (Node) then obtain s_sub s_max where sub_split: "split_max k t = (s_sub, s_max)" by (cases "split_max k t") moreover have "height sub = height s_sub" by (metis "1.prems"(3) Node Pair_inject append1_eq_conv btree.distinct(1) nonempty_lasttreebal.simps(2) split_max_height sub_split ts_not_empty) ultimately have "almost_order k (rebalance_last_tree k ts s_sub)" using rebalance_last_tree_order[of ts ls sub sep k s_sub] 1 ts_not_empty Node sub_split by force then show ?thesis using Node 1 sub_split by auto qed qed simp -lemma del_order: +lemma del_order: assumes "k > 0" and "root_order k t" and "bal t" shows "almost_order k (del k x t)" using assms proof (induction k x t rule: del.induct) case (2 k x ts t) then obtain ls list where list_split: "split ts x = (ls, list)" by (cases "split ts x") then show ?case proof (cases list) case Nil then have "almost_order k (del k x t)" using 2 list_split by (simp add: order_impl_root_order) moreover obtain lls lsub lsep where ls_split: "ls = lls@[(lsub,lsep)]" using 2 Nil list_split by (metis append_Nil2 nonempty_lasttreebal.simps(2) order_bal_nonempty_lasttreebal split_conc) moreover have "height t = height (del k x t)" using del_height 2 by (simp add: order_impl_root_order) moreover have "length ls = length ts" using Nil list_split by (auto dest: split_length) ultimately have "almost_order k (rebalance_last_tree k ls (del k x t))" using rebalance_last_tree_order[of ls lls lsub lsep k "del k x t"] by (metis "2.prems"(2) "2.prems"(3) Un_iff append_Nil2 bal.simps(2) list_split Nil root_order.simps(2) singletonI split_conc subtrees_split) then show ?thesis - using 2 list_split Nil by auto + using 2 list_split Nil by auto next case (Cons r rs) from Cons obtain sub sep where r_split: "r = (sub,sep)" by (cases r) have inductive_help: "case rs of [] \ True | (rsub,rsep)#_ \ height rsub = height t" "\s\set (subtrees (ls @ rs)). order k s" "Suc (length (ls @ rs)) \ 2 * k" "order k t" using Cons r_split "2.prems" list_split split_set by (auto dest: split_conc split!: list.splits) consider (sep_n_x) "sep \ x" | - (sep_x_Leaf) "sep = x \ sub = Leaf" | + (sep_x_Leaf) "sep = x \ sub = Leaf" | (sep_x_Node) "sep = x \ (\ts t. 
sub = Node ts t)" using btree.exhaust by blast - then show ?thesis + then show ?thesis proof cases case sep_n_x then have "almost_order k (del k x sub)" using 2 list_split Cons r_split order_impl_root_order by (metis bal.simps(2) root_order.simps(2) some_child_sub(1) split_set(1)) moreover have "height (del k x sub) = height t" by (metis "2.prems"(1) "2.prems"(2) "2.prems"(3) bal.simps(2) list_split Cons order_impl_root_order r_split root_order.simps(2) some_child_sub(1) del_height split_set(1)) ultimately have "almost_order k (rebalance_middle_tree k ls (del k x sub) sep rs t)" using rebalance_middle_tree_order[of k "del k x sub" ls rs t sep] using inductive_help using Cons r_split sep_n_x list_split by auto then show ?thesis using 2 Cons r_split sep_n_x list_split by auto next case sep_x_Leaf then have "almost_order k (Node (ls@rs) t)" using inductive_help by auto then show ?thesis using 2 Cons r_split sep_x_Leaf list_split by auto next case sep_x_Node then obtain sts st where sub_node: "sub = Node sts st" by auto then obtain sub_s max_s where sub_split: "split_max k sub = (sub_s, max_s)" by (cases "split_max k sub") then have "height sub_s = height t" using split_max_height by (metis "2.prems"(1) "2.prems"(2) "2.prems"(3) bal.simps(2) btree.distinct(1) list_split Cons order_bal_nonempty_lasttreebal order_impl_root_order r_split root_order.simps(2) some_child_sub(1) split_set(1) sub_node) moreover have "almost_order k sub_s" using split_max_order by (metis "2.prems"(1) "2.prems"(2) "2.prems"(3) bal.simps(2) btree.distinct(1) fst_conv list_split local.Cons order_bal_nonempty_lasttreebal order_impl_root_order r_split root_order.simps(2) some_child_sub(1) split_set(1) sub_node sub_split) ultimately have "almost_order k (rebalance_middle_tree k ls sub_s max_s rs t)" using rebalance_middle_tree_order[of k sub_s ls rs t max_s] inductive_help by auto then show ?thesis using 2 Cons r_split list_split sep_x_Node sub_split by auto qed qed qed simp (* sortedness of delete by inorder *) (* generalize del_list_sorted since its cumbersome to express inorder_list ts as xs @ [a] note that the proof scheme is almost identical to ins_list_sorted *) thm del_list_sorted lemma del_list_split: assumes "split ts x = (ls, rs)" and "sorted_less (inorder (Node ts t))" shows "del_list x (inorder (Node ts t)) = inorder_list ls @ del_list x (inorder_list rs @ inorder t)" proof (cases ls) case Nil then show ?thesis using assms by (auto dest!: split_conc) next case Cons then obtain ls' sub sep where ls_tail_split: "ls = ls' @ [(sub,sep)]" by (metis list.distinct(1) rev_exhaust surj_pair) moreover have "sep < x" using split_req(2)[of ts x ls' sub sep rs] using assms(1) assms(2) ls_tail_split sorted_inorder_separators by blast moreover have "sorted_less (inorder_list ls)" using assms sorted_wrt_append split_conc by fastforce ultimately show ?thesis using assms(2) split_conc[OF assms(1)] using del_list_sorted[of "inorder_list ls' @ inorder sub" sep] by auto qed (* del sorted requires sortedness of the full list so we need to change the right specialization a bit *) lemma del_list_split_right: assumes "split ts x = (ls, (sub,sep)#rs)" and "sorted_less (inorder (Node ts t))" and "sep \ x" shows "del_list x (inorder_list ((sub,sep)#rs) @ inorder t) = del_list x (inorder sub) @ sep # inorder_list rs @ inorder t" proof - from assms have "x < sep" proof - from assms have "sorted_less (separators ts)" using sorted_inorder_separators by blast then show ?thesis using split_req(3) using assms by fastforce qed moreover have 
"sorted_less (inorder sub @ sep # inorder_list rs @ inorder t)" - using assms sorted_wrt_append[where xs="inorder_list ls"] + using assms sorted_wrt_append[where xs="inorder_list ls"] by (auto dest!: split_conc) ultimately show ?thesis using del_list_sorted[of "inorder sub" "sep"] by auto qed thm del_list_idem lemma del_inorder: assumes "k > 0" and "root_order k t" and "bal t" and "sorted_less (inorder t)" shows "inorder (del k x t) = del_list x (inorder t)" using assms proof (induction k x t rule: del.induct) case (2 k x ts t) then obtain ls rs where list_split: "split ts x = (ls, rs)" by (meson surj_pair) then have list_conc: "ts = ls @ rs" using split.split_conc split_axioms by blast show ?case proof (cases rs) case Nil then have IH: "inorder (del k x t) = del_list x (inorder t)" by (metis "2.IH"(1) "2.prems" bal.simps(2) list_split order_impl_root_order root_order.simps(2) sorted_inorder_induct_last) have "inorder (del k x (Node ts t)) = inorder (rebalance_last_tree k ts (del k x t))" using list_split Nil list_conc by auto also have "\ = inorder_list ts @ inorder (del k x t)" proof - obtain ts' sub sep where ts_split: "ts = ts' @ [(sub, sep)]" by (meson "2.prems"(1) "2.prems"(2) "2.prems"(3) nonempty_lasttreebal.simps(2) order_bal_nonempty_lasttreebal) then have "height sub = height t" using "2.prems"(3) by auto moreover have "height t = height (del k x t)" by (metis "2.prems"(1) "2.prems"(2) "2.prems"(3) bal.simps(2) del_height order_impl_root_order root_order.simps(2)) ultimately show ?thesis using rebalance_last_tree_inorder using ts_split by auto qed also have "\ = inorder_list ts @ del_list x (inorder t)" using IH by blast also have "\ = del_list x (inorder (Node ts t))" using "2.prems"(4) list_conc list_split Nil del_list_split by auto finally show ?thesis . next case (Cons h rs) then obtain sub sep where h_split: "h = (sub,sep)" by (cases h) - then have node_sorted_split: + then have node_sorted_split: "sorted_less (inorder (Node (ls@(sub,sep)#rs) t))" "root_order k (Node (ls@(sub,sep)#rs) t)" "bal (Node (ls@(sub,sep)#rs) t)" using "2.prems" h_split list_conc Cons by blast+ consider (sep_n_x) "sep \ x" | (sep_x_Leaf) "sep = x \ sub = Leaf" | (sep_x_Node) "sep = x \ (\ts t. 
sub = Node ts t)" using btree.exhaust by blast then show ?thesis proof cases case sep_n_x then have IH: "inorder (del k x sub) = del_list x (inorder sub)" by (metis "2.IH"(2) "2.prems"(1) "2.prems"(2) bal.simps(2) bal_split_left(1) h_split list_split local.Cons node_sorted_split(1) node_sorted_split(3) order_impl_root_order root_order.simps(2) some_child_sub(1) sorted_inorder_induct_subtree split_set(1)) from sep_n_x have "inorder (del k x (Node ts t)) = inorder (rebalance_middle_tree k ls (del k x sub) sep rs t)" using list_split Cons h_split by auto also have "\ = inorder (Node (ls@(del k x sub, sep)#rs) t)" proof - have "height t = height (del k x sub)" using del_height using order_impl_root_order "2.prems" by (auto simp add: order_impl_root_order Cons list_conc h_split) moreover have "case rs of [] \ True | (rsub, rsep) # list \ height rsub = height t" using "2.prems"(3) bal_sub_height list_conc Cons by blast ultimately show ?thesis using rebalance_middle_tree_inorder by simp qed also have "\ = inorder_list ls @ del_list x (inorder sub) @ sep # inorder_list rs @ inorder t" using IH by simp also have "\ = del_list x (inorder (Node ts t))" using del_list_split[of ts x ls "(sub,sep)#rs" t] using del_list_split_right[of ts x ls sub sep rs t] using list_split list_conc h_split Cons "2.prems"(4) sep_n_x by auto finally show ?thesis . next case sep_x_Leaf then have "del_list x (inorder (Node ts t)) = inorder (Node (ls@rs) t)" using list_conc h_split Cons using del_list_split[OF list_split "2.prems"(4)] by simp also have "\ = inorder (del k x (Node ts t))" using list_split sep_x_Leaf list_conc h_split Cons by auto finally show ?thesis by simp next case sep_x_Node obtain ssub ssep where split_split: "split_max k sub = (ssub, ssep)" by fastforce from sep_x_Node have "x = sep" by simp then have "del_list x (inorder (Node ts t)) = inorder_list ls @ inorder sub @ inorder_list rs @ inorder t" using list_split list_conc h_split Cons "2.prems"(4) using del_list_split[OF list_split "2.prems"(4)] using del_list_sorted1[of "inorder sub" sep "inorder_list rs @ inorder t" x] sorted_wrt_append by auto also have "\ = inorder_list ls @ inorder_pair (split_max k sub) @ inorder_list rs @ inorder t" using sym[OF split_max_inorder[of sub k]] using order_bal_nonempty_lasttreebal[of k sub] "2.prems" list_conc h_split Cons sep_x_Node by (auto simp del: split_max.simps simp add: order_impl_root_order) also have "\ = inorder_list ls @ inorder ssub @ ssep # inorder_list rs @ inorder t" using split_split by auto also have "\ = inorder (rebalance_middle_tree k ls ssub ssep rs t)" proof - have "height t = height ssub" using split_max_height by (metis "2.prems"(1,2,3) bal.simps(2) btree.distinct(1) h_split list_split local.Cons order_bal_nonempty_lasttreebal order_impl_root_order root_order.simps(2) sep_x_Node some_child_sub(1) split_set(1) split_split) moreover have "case rs of [] \ True | (rsub, rsep) # list \ height rsub = height t" using "2.prems"(3) bal_sub_height list_conc local.Cons by blast ultimately show ?thesis using rebalance_middle_tree_inorder by auto qed also have "\ = inorder (del k x (Node ts t))" using list_split sep_x_Node list_conc h_split Cons split_split by auto finally show ?thesis by simp qed qed qed auto lemma reduce_root_order: "\k > 0; almost_order k t\ \ root_order k (reduce_root t)" apply(cases t) apply(auto split!: list.splits simp add: order_impl_root_order) done lemma reduce_root_bal: "bal (reduce_root t) = bal t" apply(cases t) apply(auto split!: list.splits) done lemma reduce_root_inorder: 
"inorder (reduce_root t) = inorder t" apply (cases t) apply (auto split!: list.splits) done lemma delete_order: "\k > 0; bal t; root_order k t\ \ root_order k (delete k x t)" using del_order by (simp add: reduce_root_order) lemma delete_bal: "\k > 0; bal t; root_order k t\ \ bal (delete k x t)" using del_bal by (simp add: reduce_root_bal) lemma delete_inorder: "\k > 0; bal t; root_order k t; sorted_less (inorder t)\ \ inorder (delete k x t) = del_list x (inorder t)" using del_inorder by (simp add: reduce_root_inorder) (* TODO (opt) runtime wrt runtime of split *) (* we are interested in a) number of comparisons b) number of fetches c) number of writes *) (* a) is dependent on t_split, the remainder is not (we assume the number of fetches and writes for split fun is 0 *) (* TODO simpler induction schemes /less boilerplate isabelle/src/HOL/ex/Induction_Schema *) subsection "Set specification by inorder" interpretation S_ordered: Set_by_Ordered where empty = empty_btree and - insert = "insert (Suc k)" and + insert = "insert (Suc k)" and delete = "delete (Suc k)" and isin = "isin" and inorder = "inorder" and inv = "invar_inorder (Suc k)" proof (standard, goal_cases) case (2 s x) then show ?case by (simp add: isin_set_inorder) next case (3 s x) then show ?case using insert_inorder by simp next case (4 s x) then show ?case using delete_inorder by auto next case (6 s x) then show ?case using insert_order insert_bal by auto next case (7 s x) then show ?case using delete_order delete_bal by auto qed (simp add: empty_btree_def)+ (* if we remove this, it is not possible to remove the simp rules in subsequent contexts... *) declare node\<^sub>i.simps[simp del] end end diff --git a/thys/BTree/Partially_Filled_Array.thy b/thys/BTree/Partially_Filled_Array.thy --- a/thys/BTree/Partially_Filled_Array.thy +++ b/thys/BTree/Partially_Filled_Array.thy @@ -1,427 +1,427 @@ theory Partially_Filled_Array imports "Refine_Imperative_HOL.IICF_Array_List" Array_SBlit begin section "Partially Filled Arrays" text \An array that is only partially filled. The number of actual elements contained is kept in a second element. This represents a weakened version of the array\_list from IICF.\ type_synonym 'a pfarray = "'a array_list" subsection "Operations on Partly Filled Arrays" definition is_pfa where "is_pfa c l \ \(a,n). \\<^sub>A l'. a \\<^sub>a l' * \(c = length l' \ n \ c \ l = (take n l'))" lemma is_pfa_prec[safe_constraint_rules]: "precise (is_pfa c)" unfolding is_pfa_def[abs_def] apply(rule preciseI) apply(simp split: prod.splits) using preciseD snga_prec by fastforce definition pfa_init where "pfa_init cap v n \ do { a \ Array.new cap v; return (a,n) }" lemma pfa_init_rule[sep_heap_rules]: "n \ N \ < emp > pfa_init N x n " by (sep_auto simp: pfa_init_def is_pfa_def) definition pfa_empty where "pfa_empty cap \ pfa_init cap default 0" lemma pfa_empty_rule[sep_heap_rules]: "< emp > pfa_empty N " by (sep_auto simp: pfa_empty_def is_pfa_def) definition "pfa_length \ arl_length" lemma pfa_length_rule[sep_heap_rules]: " pfa_length a <\r. is_pfa c l a * \(r=length l)>" by (sep_auto simp: pfa_length_def arl_length_def is_pfa_def) definition "pfa_capacity \ \(a,n). Array.len a " lemma pfa_capacity_rule[sep_heap_rules]: " pfa_capacity a <\r. is_pfa c l a * \(c=r)>" by (sep_auto simp: pfa_capacity_def arl_length_def is_pfa_def) definition "pfa_is_empty \ arl_is_empty" lemma pfa_is_empty_rule[sep_heap_rules]: " pfa_is_empty a <\r. 
is_pfa c l a * \(r\(l=[]))>" by (sep_auto simp: pfa_is_empty_def arl_is_empty_def is_pfa_def) definition "pfa_append \ \(a,n) x. do { Array.upd n x a; return (a,n+1) }" lemma pfa_append_rule[sep_heap_rules]: " n < c \ < is_pfa c l (a,n) > pfa_append (a,n) x <\(a',n'). is_pfa c (l@[x]) (a',n') * \(a' = a \ n' = n+1) >" by (sep_auto simp: pfa_append_def arl_append_def is_pfa_def take_update_last neq_Nil_conv split: prod.splits nat.split) definition "pfa_last \ arl_last" lemma pfa_last_rule[sep_heap_rules]: " l\[] \ pfa_last a <\r. is_pfa c l a * \(r=last l)>" by (sep_auto simp: pfa_last_def arl_last_def is_pfa_def last_take_nth_conv) definition pfa_butlast :: "'a::heap pfarray \ 'a pfarray Heap" where "pfa_butlast \ \(a,n). return (a,n-1) " lemma pfa_butlast_rule[sep_heap_rules]: " pfa_butlast (a,n) <\(a',n'). is_pfa c (butlast l) (a',n') * \(a' = a)>" by (sep_auto split: prod.splits simp: pfa_butlast_def is_pfa_def butlast_take) definition "pfa_get \ arl_get" lemma pfa_get_rule[sep_heap_rules]: " i < length l \ < is_pfa c l a> pfa_get a i <\r. is_pfa c l a * \((l!i) = r)>" by (sep_auto simp: is_pfa_def pfa_get_def arl_get_def split: prod.split) definition "pfa_set \ arl_set" lemma pfa_set_rule[sep_heap_rules]: " i pfa_set a i x <\a'. is_pfa c (l[i:=x]) a' * \(a' = a)>" by (sep_auto simp: pfa_set_def arl_set_def is_pfa_def split: prod.split) definition pfa_shrink :: "nat \ 'a::heap pfarray \ 'a pfarray Heap" where "pfa_shrink k \ \(a,n). return (a,k) " lemma pfa_shrink_rule[sep_heap_rules]: " - k \ length l \ - < is_pfa c l (a,n) > + k \ length xs \ + < is_pfa c xs (a,n) > pfa_shrink k (a,n) - <\(a',n'). is_pfa c (take k l) (a',n') * \(n' = k \ a'=a) >" + <\(a',n'). is_pfa c (take k xs) (a',n') * \(n' = k \ a'=a) >" by (sep_auto simp: pfa_shrink_def is_pfa_def min.absorb1 split: prod.splits nat.split) definition pfa_shrink_cap :: "nat \ 'a::heap pfarray \ 'a pfarray Heap" where "pfa_shrink_cap k \ \(a,n). do { a' \ array_shrink a k; return (a',min k n) } " lemma pfa_shrink_cap_rule_preserve[sep_heap_rules]: " \n \ k; k \ c\ \ < is_pfa c l (a,n) > pfa_shrink_cap k (a,n) <\a'. is_pfa k l a' >\<^sub>t" by (sep_auto simp: pfa_shrink_cap_def is_pfa_def min.absorb1 min.absorb2 split: prod.splits nat.split) lemma pfa_shrink_cap_rule: " \k \ c\ \ < is_pfa c l a > pfa_shrink_cap k a <\a'. is_pfa k (take k l) a' >\<^sub>t" by (sep_auto simp: pfa_shrink_cap_def is_pfa_def min.absorb1 min.absorb2 split: prod.splits nat.split dest: mod_starD) definition "array_ensure a s x \ do { l\Array.len a; if l\s then return a else do { a'\Array.new s x; blit a 0 a' 0 l; return a' } }" lemma array_ensure_rule[sep_heap_rules]: shows " < a\\<^sub>ala > array_ensure a s x <\a'. a'\\<^sub>a (la @ replicate (s-length la) x)>\<^sub>t" unfolding array_ensure_def by sep_auto (* Ensure a certain capacity *) definition pfa_ensure :: "'a::{heap,default} pfarray \ nat \ 'a pfarray Heap" where "pfa_ensure \ \(a,n) k. do { a' \ array_ensure a k default; return (a',n) } " lemma pfa_ensure_rule[sep_heap_rules]: " < is_pfa c l (a,n) > pfa_ensure (a,n) k <\(a',n'). is_pfa (max c k) l (a',n') * \(n' = n \ c \ n)>\<^sub>t" by (sep_auto simp: pfa_ensure_def is_pfa_def) definition "pfa_copy \ arl_copy" lemma pfa_copy_rule[sep_heap_rules]: "< is_pfa c l a > pfa_copy a <\r. is_pfa c l a * is_pfa c l r>\<^sub>t" by (sep_auto simp: pfa_copy_def arl_copy_def is_pfa_def) definition pfa_blit :: "'a::heap pfarray \ nat \ 'a::heap pfarray \ nat \ nat \ unit Heap" where "pfa_blit \ \(src,sn) si (dst,dn) di l. 
blit src si dst di l" lemma min_nat: "min a (a+b) = (a::nat)" by auto lemma pfa_blit_rule[sep_heap_rules]: assumes LEN: "si+len \ sn" "di \ dn" "di+len \ dc" shows "< is_pfa sc src (srci,sn) * is_pfa dc dst (dsti,dn) > pfa_blit (srci,sn) si (dsti,dn) di len <\_. is_pfa sc src (srci,sn) * is_pfa dc (take di dst @ take len (drop si src) @ drop (di+len) dst) (dsti,max (di+len) dn) >" using LEN apply(sep_auto simp add: min_nat is_pfa_def pfa_blit_def min.commute min.absorb1 heap: blit_rule) apply (simp add: min.absorb1 take_drop) apply (simp add: drop_take max_def) done definition pfa_drop :: "('a::heap) pfarray \ nat \ 'a pfarray \ 'a pfarray Heap" where "pfa_drop \ \(src,sn) si (dst,dn). do { blit src si dst 0 (sn-si); return (dst,(sn-si)) } " lemma pfa_drop_rule[sep_heap_rules]: assumes LEN: "k \ sn" "(sn-k) \ dc" shows "< is_pfa sc src (srci,sn) * is_pfa dc dst (dsti,dn) > pfa_drop (srci,sn) k (dsti,dn) <\(dsti',dn'). is_pfa sc src (srci,sn) * is_pfa dc (drop k src) (dsti',dn') * \(dsti' = dsti) >" using LEN apply (sep_auto simp add: drop_take is_pfa_def pfa_drop_def dest!: mod_starD heap: pfa_blit_rule) done definition "pfa_append_grow \ \(a,n) x. do { l \ pfa_capacity (a,n); a' \ if l = n then array_grow a (l+1) x else Array.upd n x a; return (a',n+1) }" lemma pfa_append_grow_full_rule[sep_heap_rules]: "n = c \ array_grow a (c+1) x <\a'. is_pfa (c+1) (l@[x]) (a',n+1)>\<^sub>t" apply(sep_auto simp add: is_pfa_def heap del: array_grow_rule) apply(vcg heap del: array_grow_rule heap add: array_grow_rule[of l "(Suc (length l))" a x]) apply simp apply(rule ent_ex_postI[where ?x="l@[x]"]) apply sep_auto done lemma pfa_append_grow_less_rule: "n < c \ Array.upd n x a <\a'. is_pfa c (l@[x]) (a',n+1)>\<^sub>t" apply(sep_auto simp add: is_pfa_def take_update_last) done lemma pfa_append_grow_rule[sep_heap_rules]: " pfa_append_grow (a,n) x <\(a',n'). is_pfa (if c = n then c+1 else c) (l@[x]) (a',n') * \(n'=n+1 \ c \ n)>\<^sub>t" apply(subst pfa_append_grow_def) apply(rule hoare_triple_preI) apply (sep_auto heap add: pfa_append_grow_full_rule pfa_append_grow_less_rule) apply(sep_auto simp add: is_pfa_def) apply(sep_auto simp add: is_pfa_def) done (* This definition has only one access to the array length *) definition "pfa_append_grow' \ \(a,n) x. do { a' \ pfa_ensure (a,n) (n+1); a'' \ pfa_append a' x; return a'' }" lemma pfa_append_grow'_rule[sep_heap_rules]: " pfa_append_grow' (a,n) x <\(a',n'). is_pfa (max (n+1) c) (l@[x]) (a',n') * \(n'=n+1 \ c \ n)>\<^sub>t" unfolding pfa_append_grow'_def by (sep_auto simp add: max_def) definition "pfa_insert \ \(a,n) i x. do { a' \ array_shr a i 1; a'' \ Array.upd i x a; return (a'',n+1) }" lemma list_update_last: "length ls = Suc i \ ls[i:=x] = (take i ls)@[x]" by (metis append_eq_conv_conj length_Suc_rev_conv list_update_length) lemma pfa_insert_rule[sep_heap_rules]: "\i \ n; n < c\ \ pfa_insert (a,n) i x <\(a',n'). is_pfa c (take i l@x#drop i l) (a',n') * \(n' = n+1 \ a=a')>" unfolding pfa_insert_def is_pfa_def by (sep_auto simp add: list_update_append1 list_update_last Suc_diff_le drop_take min_def) definition pfa_insert_grow :: "'a::{heap,default} pfarray \ nat \ 'a \ 'a pfarray Heap" where "pfa_insert_grow \ \(a,n) i x. do { a' \ pfa_ensure (a,n) (n+1); a'' \ pfa_insert a' i x; return a'' }" -lemma pfa_insert_grow_rule: +lemma pfa_insert_grow_rule[sep_heap_rules]: "i \ n \ pfa_insert_grow (a,n) i x <\(a',n'). 
is_pfa (max c (n+1)) (take i l@x#drop i l) (a',n') * \(n'=n+1 \ c \ n)>\<^sub>t" unfolding pfa_insert_grow_def by (sep_auto heap add: pfa_insert_rule[of i n "max c (Suc n)"]) definition pfa_extend where "pfa_extend \ \ (a,n) (b,m). do{ blit b 0 a n m; return (a,n+m) }" -lemma pfa_extend_rule: +lemma pfa_extend_rule[sep_heap_rules]: "n+m \ c \ pfa_extend (a,n) (b,m) <\(a',n'). is_pfa c (l1@l2) (a',n') * \(a' = a \ n'=n+m) * is_pfa d l2 (b,m)>\<^sub>t" unfolding pfa_extend_def by (sep_auto simp add: is_pfa_def min.absorb1 min.absorb2 heap add: blit_rule) definition pfa_extend_grow where "pfa_extend_grow \ \ (a,n) (b,m). do{ a' \ array_ensure a (n+m) default; blit b 0 a' n m; return (a',n+m) }" -lemma pfa_extend_grow_rule: +lemma pfa_extend_grow_rule[sep_heap_rules]: " pfa_extend_grow (a,n) (b,m) <\(a',n'). is_pfa (max c (n+m)) (l1@l2) (a',n') * \(n'=n+m \ c \ n) * is_pfa d l2 (b,m)>\<^sub>t" unfolding pfa_extend_grow_def by (sep_auto simp add: is_pfa_def min.absorb1 min.absorb2 heap add: blit_rule) definition pfa_append_extend_grow where "pfa_append_extend_grow \ \ (a,n) x (b,m). do{ a' \ array_ensure a (n+m+1) default; a'' \ Array.upd n x a'; blit b 0 a'' (n+1) m; return (a'',n+m+1) }" -lemma pfa_append_extend_grow_rule: +lemma pfa_append_extend_grow_rule[sep_heap_rules]: " pfa_append_extend_grow (a,n) x (b,m) <\(a',n'). is_pfa (max c (n+m+1)) (l1@x#l2) (a',n') * \(n'=n+m+1 \ c \ n) * is_pfa d l2 (b,m)>\<^sub>t" unfolding pfa_append_extend_grow_def by (sep_auto simp add: list_update_last is_pfa_def min.absorb1 min.absorb2 heap add: blit_rule) definition "pfa_delete \ \(a,n) i. do { array_shl a (i+1) 1; return (a,n-1) }" lemma pfa_delete_rule[sep_heap_rules]: "i < n \ pfa_delete (a,n) i <\(a',n'). is_pfa c (take i l@drop (i+1) l) (a',n') * \(n' = n-1 \ a=a')>" unfolding pfa_delete_def is_pfa_def apply (sep_auto simp add: drop_take min_def) by (metis Suc_diff_Suc diff_zero dual_order.strict_trans2 le_less_Suc_eq zero_le) end \ No newline at end of file diff --git a/thys/BTree/README.md b/thys/BTree/README.md --- a/thys/BTree/README.md +++ b/thys/BTree/README.md @@ -1,37 +1,37 @@ # A Verified Imperative Implementation of B-Trees This repository contains all definitions, lemmas and proofs related to the Bachelors Thesis "A Verified Imperative Implementation of B-Trees" by Niels Mündler. For a detailed description of the project, [see the thesis](https://github.com/nielstron/btrees-thesis). ## Overview A functional specification of B-trees, B-tree operations and a height analysis may be found in the files `BTree*.thy` that do not contain `Imp`. An imperative specification of B-trees may be found in `BTree_Imp*.thy`. This imperative specification makes use of the auxiliary definition of "Partially Filled Arrays" as list refinements, which may be found in `Partially_Filled_Array.thy`. Further an extension of the standard array blit operation in Isabelle, such that it allows error-free array-internal copy operations, may be found in `Array_SBlit.thy`. The remaining files contain auxiliary lemmas and definitions that are due to Dr. Peter Lammich or me. All above mentioned files contain definitions as well as proofs of functional correctness. ## Usage -These theories have been tested with [Isabelle2020](https://isabelle.in.tum.de/website-Isabelle2020/index.html), although it should be compatible to newer versions of the tool. +These theories have been tested with [Isabelle2021](https://isabelle.in.tum.de/website-Isabelle2021/index.html). 
-The files `BTree*.thy` that do not contain `Imp` only need a regular Isabelle2020 setup. +The files `BTree*.thy` that do not contain `Imp` only need a regular Isabelle setup. -The rest of the theories build upon [Refine_Imperative_HOL](https://www.isa-afp.org/entries/Refine_Imperative_HOL.html), you will need to succesfully set up that project first as described in the [rArchive of Formal Proofs](https://www.isa-afp.org/using.html). +The rest of the theories build upon [Refine_Imperative_HOL](https://www.isa-afp.org/entries/Refine_Imperative_HOL.html); you will need to successfully set up that project first, as described in the [Archive of Formal Proofs](https://www.isa-afp.org/using.html). The script `start_isabelle.sh` uses and if not available compiles a session containing the content of the Refinement Framework which significantly enhances working with the files provided in this project. diff --git a/thys/BTree/ROOT b/thys/BTree/ROOT --- a/thys/BTree/ROOT +++ b/thys/BTree/ROOT @@ -1,19 +1,19 @@ chapter AFP -session "BTree" (AFP) = Refine_Imperative_HOL + - options [timeout = 600] +session "BTree" (AFP) = "Refine_Imperative_HOL" + + options [timeout = 600, document = pdf] sessions "HOL-Data_Structures" theories BTree BTree_Height BTree_Set BTree_Split Array_SBlit Partially_Filled_Array BTree_Imp BTree_ImpSet BTree_ImpSplit document_files "root.tex" "root.bib" diff --git a/thys/BTree/document/root.tex b/thys/BTree/document/root.tex --- a/thys/BTree/document/root.tex +++ b/thys/BTree/document/root.tex @@ -1,64 +1,64 @@ \documentclass[11pt,a4paper]{article} +\usepackage{isabelle,isabellesym} \usepackage[T1]{fontenc} -\usepackage{isabelle,isabellesym} \usepackage{mathtools} \usepackage{amssymb} \usepackage{stmaryrd} \usepackage[numbers]{natbib} % this should be the last package used \usepackage{pdfsetup} \usepackage{doi} % urls in roman style, theory text in math-similar italics \urlstyle{rm} \isabellestyle{it} \DeclarePairedDelimiter{\norm}{\lVert}{\rVert} \begin{document} \title{A Verified Imperative Implementation of B-Trees} \author{Niels Mündler} \date{} \maketitle \begin{abstract} In this work, we use the interactive theorem prover Isabelle/HOL to verify an imperative implementation of the classical B-tree data structure \cite{DBLP:journals/acta/BayerM72}. -The implementation supports set membership and insertion queries +The implementation supports set membership, insertion and deletion queries with efficient binary search for intra-node navigation. This is accomplished by first specifying the structure abstractly in the functional modeling language HOL and proving functional correctness. Using manual refinement, we derive an imperative implementation in Imperative/HOL. We show the validity of this refinement using the separation logic utilities from the Isabelle Refinement Framework \cite{Refine_Imperative_HOL-AFP}. -The code can be exported to the programming languages SML and Scala. +The code can be exported to the programming languages SML, Scala and OCaml. We examine the runtime of all operations indirectly by reproducing results of the logarithmic relationship between height and the number of nodes. The results are discussed in greater detail in the related Bachelor's Thesis \cite{BTNielsMuendler}.
\end{abstract} \tableofcontents % sane default for proof documents \parindent 0pt\parskip 0.5ex % generated text of all theories \input{session} % optional bibliography {\raggedright \bibliographystyle{plainnat} \bibliography{root} } \end{document} %%% Local Variables: %%% mode: latex %%% TeX-master: t %%% End: