diff --git a/metadata/metadata b/metadata/metadata --- a/metadata/metadata +++ b/metadata/metadata @@ -1,11241 +1,11284 @@ [Arith_Prog_Rel_Primes] title = Arithmetic progressions and relative primes author = José Manuel Rodríguez Caballero topic = Mathematics/Number theory date = 2020-02-01 notify = jose.manuel.rodriguez.caballero@ut.ee abstract = This article provides a formalization of the solution obtained by the author of the Problem “ARITHMETIC PROGRESSIONS” from the Putnam exam problems of 2002. The statement of the problem is as follows: For which integers n > 1 does the set of positive integers less than and relatively prime to n constitute an arithmetic progression? [Banach_Steinhaus] title = Banach-Steinhaus Theorem author = Dominique Unruh , Jose Manuel Rodriguez Caballero topic = Mathematics/Analysis date = 2020-05-02 notify = jose.manuel.rodriguez.caballero@ut.ee, unruh@ut.ee abstract = We formalize in Isabelle/HOL a result due to S. Banach and H. Steinhaus known as the Banach-Steinhaus theorem or Uniform boundedness principle: a pointwise-bounded family of continuous linear operators from a Banach space to a normed space is uniformly bounded. Our approach is an adaptation to Isabelle/HOL of a proof due to A. Sokal. [Complex_Geometry] title = Complex Geometry author = Filip Marić , Danijela Simić topic = Mathematics/Geometry date = 2019-12-16 notify = danijela@matf.bg.ac.rs, filip@matf.bg.ac.rs, boutry@unistra.fr abstract = A formalization of geometry of complex numbers is presented. Fundamental objects that are investigated are the complex plane extended by a single infinite point, its objects (points, lines and circles), and groups of transformations that act on them (e.g., inversions and Möbius transformations). Most objects are defined algebraically, but correspondence with classical geometric definitions is shown. 
[Poincare_Disc] title = Poincaré Disc Model author = Danijela Simić , Filip Marić , Pierre Boutry topic = Mathematics/Geometry date = 2019-12-16 notify = danijela@matf.bg.ac.rs, filip@matf.bg.ac.rs, boutry@unistra.fr abstract = We describe a formalization of the Poincaré disc model of hyperbolic geometry within the Isabelle/HOL proof assistant. The model is defined within the extended complex plane (the one-dimensional complex projective space ℂP¹), formalized in the AFP entry “Complex Geometry”. Points, lines, congruence of pairs of points, betweenness of triples of points, circles, and isometries are defined within the model. It is shown that the model satisfies all of Tarski's axioms except Euclid's axiom; instead, it satisfies the negation of Euclid's axiom and the limiting parallels axiom, which proves it to be a model of hyperbolic geometry. [Fourier] title = Fourier Series author = Lawrence C Paulson topic = Mathematics/Analysis date = 2019-09-06 notify = lp15@cam.ac.uk abstract = This development formalises the square-integrable functions over the reals and the basics of Fourier series. It culminates with a proof that every well-behaved periodic function can be approximated by a Fourier series. The material is ported from HOL Light: https://github.com/jrh13/hol-light/blob/master/100/fourier.ml [Generic_Deriving] title = Deriving generic class instances for datatypes author = Jonas Rädle , Lars Hupel topic = Computer science/Data structures date = 2018-11-06 notify = jonas.raedle@gmail.com abstract =

We provide a framework for automatically deriving instances for generic type classes. Our approach is inspired by Haskell's generic-deriving package and Scala's shapeless library. In addition to generating the code for type class functions, we also attempt to automatically prove type class laws for these instances. As of now, however, some manual proofs are still required for recursive datatypes.

Note: There are already articles in the AFP that provide automatic instantiation for a number of classes. Concretely, Deriving allows the automatic instantiation of comparators, linear orders, equality, and hashing. Show instantiates a Haskell-style show class.

Our approach works for arbitrary classes (with some Isabelle/HOL overhead for each class), but only for a smaller set of datatypes.

[Partial_Order_Reduction] title = Partial Order Reduction author = Julian Brunner topic = Computer science/Automata and formal languages date = 2018-06-05 notify = brunnerj@in.tum.de abstract = This entry provides a formalization of the abstract theory of ample set partial order reduction. The formalization includes transition systems with actions, trace theory, as well as basics on finite, infinite, and lazy sequences. We also provide a basic framework for static analysis on concurrent systems with respect to the ample set condition. [CakeML] title = CakeML author = Lars Hupel , Yu Zhang <> contributors = Johannes Åman Pohjola <> topic = Computer science/Programming languages/Language definitions date = 2018-03-12 notify = hupel@in.tum.de abstract = CakeML is a functional programming language with a proven-correct compiler and runtime system. This entry contains an unofficial version of the CakeML semantics that has been exported from the Lem specifications to Isabelle. Additionally, there are some hand-written theory files that adapt the exported code to Isabelle and port proofs from the HOL4 formalization, e.g. termination and equivalence proofs. [CakeML_Codegen] title = A Verified Code Generator from Isabelle/HOL to CakeML author = Lars Hupel topic = Computer science/Programming languages/Compiling, Logic/Rewriting date = 2019-07-08 notify = lars@hupel.info abstract = This entry contains the formalization that accompanies my PhD thesis (see https://lars.hupel.info/research/codegen/). I develop a verified compilation toolchain from executable specifications in Isabelle/HOL to CakeML abstract syntax trees. This improves over the state-of-the-art in Isabelle by providing a trustworthy procedure for code generation. 
[DiscretePricing] title = Pricing in discrete financial models author = Mnacho Echenim topic = Mathematics/Probability theory, Mathematics/Games and economics date = 2018-07-16 notify = mnacho.echenim@univ-grenoble-alpes.fr abstract = We have formalized the computation of fair prices for derivative products in discrete financial models. As an application, we derive a way to compute fair prices of derivative products in the Cox-Ross-Rubinstein model of a financial market, thus completing the work that was presented in this paper. extra-history = Change history: [2019-05-12]: Renamed discr_mkt predicate to stk_strict_subs and got rid of predicate A for a more natural definition of the type discrete_market; renamed basic quantity processes for coherent notation; renamed value_process into val_process and closing_value_process to cls_val_process; relaxed hypothesis of lemma CRR_market_fair_price. Added functions to price some basic options. (revision 0b813a1a833f)
[Pell] title = Pell's Equation author = Manuel Eberl topic = Mathematics/Number theory date = 2018-06-23 notify = manuel@pruvisto.org abstract =

This article gives the basic theory of Pell's equation x² = 1 + D y², where D ∈ ℕ is a parameter and x, y are integer variables.

The main result that is proven is the following: If D is not a perfect square, then there exists a fundamental solution (x₀, y₀) that is not the trivial solution (1, 0) and which generates all other solutions (x, y) in the sense that there exists some n ∈ ℕ such that |x| + |y| √D = (x₀ + y₀ √D)ⁿ. This also implies that the set of solutions is infinite, and it gives us an explicit and executable characterisation of all the solutions.

Based on this, simple executable algorithms for computing the fundamental solution and the infinite sequence of all non-negative solutions are also provided.
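As an informal illustration of such algorithms (a naive Python sketch, not the verified Isabelle code; the brute-force search assumes D is not a perfect square, since otherwise only the trivial solution exists and the loop would not terminate):

```python
from math import isqrt

def fundamental_solution(D):
    """Smallest non-trivial solution (x, y) of x^2 = 1 + D*y^2, found by
    brute force over y.  Assumes D > 0 is not a perfect square."""
    y = 1
    while True:
        x2 = 1 + D * y * y
        x = isqrt(x2)
        if x * x == x2:
            return x, y
        y += 1

def solutions(D, n):
    """First n non-negative solutions, generated from the fundamental one
    via (x, y) -> (x0*x + D*y0*y, x0*y + y0*x)."""
    x0, y0 = fundamental_solution(D)
    x, y = 1, 0  # trivial solution
    out = []
    for _ in range(n):
        out.append((x, y))
        x, y = x0 * x + D * y0 * y, x0 * y + y0 * x
    return out
```

For example, `fundamental_solution(2)` yields (3, 2), and `solutions(2, 3)` yields the sequence (1, 0), (3, 2), (17, 12).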

[WebAssembly] title = WebAssembly author = Conrad Watt topic = Computer science/Programming languages/Language definitions date = 2018-04-29 notify = caw77@cam.ac.uk abstract = This is a mechanised specification of the WebAssembly language, drawn mainly from the previously published paper formalisation of Haas et al. Also included is a full proof of soundness of the type system, together with a verified type checker and interpreter. We include only a partial procedure for the extraction of the type checker and interpreter here. For more details, please see our paper in CPP 2018. [Knuth_Morris_Pratt] title = The string search algorithm by Knuth, Morris and Pratt author = Fabian Hellauer , Peter Lammich topic = Computer science/Algorithms date = 2017-12-18 notify = hellauer@in.tum.de, lammich@in.tum.de abstract = The Knuth-Morris-Pratt algorithm is often used to show that the problem of finding a string s in a text t can be solved deterministically in O(|s| + |t|) time. We use the Isabelle Refinement Framework to formulate and verify the algorithm. Via refinement, we apply some optimisations and finally use the Sepref tool to obtain executable code in Imperative/HOL. [Minkowskis_Theorem] title = Minkowski's Theorem author = Manuel Eberl topic = Mathematics/Geometry, Mathematics/Number theory date = 2017-07-13 notify = manuel@pruvisto.org abstract =

Minkowski's theorem relates a subset of ℝⁿ, the Lebesgue measure, and the integer lattice ℤⁿ: It states that any convex subset of ℝⁿ that is symmetric about the origin and has volume greater than 2ⁿ contains at least one lattice point from ℤⁿ\{0}, i. e. a non-zero point with integer coefficients.

A related theorem which directly implies this is Blichfeldt's theorem, which states that any subset of ℝⁿ with volume greater than 1 contains two different points whose difference vector has integer components.

The entry contains a proof of both theorems.
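As an informal numeric illustration for n = 2 (a Python sketch, unrelated to the formal proof): a disc around the origin is convex and symmetric, so once its area exceeds 2² = 4 (radius above 2/√π ≈ 1.13), Minkowski's theorem guarantees a non-zero lattice point inside it.

```python
from itertools import product
from math import pi

def nonzero_lattice_points_in_disc(r):
    """All non-zero integer points strictly inside the disc of radius r
    centred at the origin."""
    b = int(r)
    return [(i, j) for i, j in product(range(-b, b + 1), repeat=2)
            if (i, j) != (0, 0) and i * i + j * j < r * r]

r = 1.2
assert pi * r * r > 4                     # area exceeds 2^n for n = 2
assert (1, 0) in nonzero_lattice_points_in_disc(r)
```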

[Name_Carrying_Type_Inference] title = Verified Metatheory and Type Inference for a Name-Carrying Simply-Typed Lambda Calculus author = Michael Rawson topic = Computer science/Programming languages/Type systems date = 2017-07-09 notify = mr644@cam.ac.uk, michaelrawson76@gmail.com abstract = I formalise a Church-style simply-typed \(\lambda\)-calculus, extended with pairs, a unit value, and projection functions, and show some metatheory of the calculus, such as the subject reduction property. Particular attention is paid to the treatment of names in the calculus. A nominal style of binding is used, but I use a manual approach over Nominal Isabelle in order to extract an executable type inference algorithm. More information can be found in my undergraduate dissertation. [Propositional_Proof_Systems] title = Propositional Proof Systems author = Julius Michaelis , Tobias Nipkow topic = Logic/Proof theory date = 2017-06-21 notify = maintainafpppt@liftm.de abstract = We formalize a range of proof systems for classical propositional logic (sequent calculus, natural deduction, Hilbert systems, resolution) and prove the most important meta-theoretic results about semantics and proofs: compactness, soundness, completeness, translations between proof systems, cut-elimination, interpolation and model existence. [Optics] title = Optics author = Simon Foster , Frank Zeyda topic = Computer science/Functional programming, Mathematics/Algebra date = 2017-05-25 notify = simon.foster@york.ac.uk abstract = Lenses provide an abstract interface for manipulating data types through spatially-separated views. They are defined abstractly in terms of two functions: get, which returns a value from the source type, and put, which updates the value. We mechanise the underlying theory of lenses, in terms of an algebraic hierarchy of lenses, including well-behaved and very well-behaved lenses, each lens class being characterised by a set of lens laws. 
We also mechanise a lens algebra in Isabelle that enables their composition and comparison, so as to allow construction of complex lenses. This is accompanied by a large library of algebraic laws. Moreover we also show how the lens classes can be applied by instantiating them with a number of Isabelle data types. extra-history = Change history: [2020-03-02]: Added partial bijective and symmetric lenses. Improved alphabet command generating additional lenses and results. Several additional lens relations, including observational equivalence. Additional theorems throughout. Adaptations for Isabelle 2020. (revision 44e2e5c) [2021-01-27] Addition of new theorems throughout, particularly for prisms. New "chantype" command allows the definition of an algebraic datatype with generated prisms. New "dataspace" command allows the definition of a local-based state space, including lenses and prisms. Addition of various examples for the above. (revision 89cf045a) [Game_Based_Crypto] title = Game-based cryptography in HOL author = Andreas Lochbihler , S. Reza Sefidgar <>, Bhargav Bhatt topic = Computer science/Security/Cryptography date = 2017-05-05 notify = mail@andreas-lochbihler.de abstract =

In this AFP entry, we show how to specify game-based cryptographic security notions and formally prove secure several cryptographic constructions from the literature using the CryptHOL framework. Among others, we formalise the notions of a random oracle, a pseudo-random function, an unpredictable function, and of encryption schemes that are indistinguishable under chosen plaintext and/or ciphertext attacks. We prove the random-permutation/random-function switching lemma, security of the Elgamal and hashed Elgamal public-key encryption scheme and correctness and security of several constructions with pseudo-random functions.

Our proofs follow the game-hopping style advocated by Shoup and Bellare and Rogaway, from which most of the examples have been taken. We generalise some of their results such that they can be reused in other proofs. Thanks to CryptHOL's integration with Isabelle's parametricity infrastructure, many simple hops are easily justified using the theory of representation independence.

extra-history = Change history: [2018-09-28]: added the CryptHOL tutorial for game-based cryptography (revision 489a395764ae) [Multi_Party_Computation] title = Multi-Party Computation author = David Aspinall , David Butler topic = Computer science/Security date = 2019-05-09 notify = dbutler@turing.ac.uk abstract = We use CryptHOL to consider Multi-Party Computation (MPC) protocols. MPC was first considered by Yao in 1983 and recent advances in efficiency and an increased demand mean it is now deployed in the real world. Security is considered using the real/ideal world paradigm. We first define security in the semi-honest security setting where parties are assumed not to deviate from the protocol transcript. In this setting we prove multiple Oblivious Transfer (OT) protocols secure and then show security for the gates of the GMW protocol. We then define malicious security, a stronger notion of security in which parties are assumed to be fully corrupted by an adversary. In this setting we again consider OT, as it is a fundamental building block of almost all MPC protocols. [Sigma_Commit_Crypto] title = Sigma Protocols and Commitment Schemes author = David Butler , Andreas Lochbihler topic = Computer science/Security/Cryptography date = 2019-10-07 notify = dbutler@turing.ac.uk abstract = We use CryptHOL to formalise commitment schemes and Sigma-protocols. Both are widely used fundamental two-party cryptographic primitives. Security for commitment schemes is considered using game-based definitions whereas the security of Sigma-protocols is considered using both the game-based and simulation-based security paradigms. In this work, we first define security for both primitives and then prove secure multiple case studies: the Schnorr, Chaum-Pedersen and Okamoto Sigma-protocols as well as a construction that allows for compound (AND and OR statements) Sigma-protocols and the Pedersen and Rivest commitment schemes. 
We also prove that commitment schemes can be constructed from Sigma-protocols. We formalise this proof at an abstract level, only assuming the existence of a Sigma-protocol; consequently, the instantiations of this result for the concrete Sigma-protocols we consider come for free. [CryptHOL] title = CryptHOL author = Andreas Lochbihler topic = Computer science/Security/Cryptography, Computer science/Functional programming, Mathematics/Probability theory date = 2017-05-05 notify = mail@andreas-lochbihler.de abstract =

CryptHOL provides a framework for formalising cryptographic arguments in Isabelle/HOL. It shallowly embeds a probabilistic functional programming language in higher order logic. The language features monadic sequencing, recursion, random sampling, failures and failure handling, and black-box access to oracles. Oracles are probabilistic functions which maintain hidden state between different invocations. All operators are defined in the new semantic domain of generative probabilistic values, a codatatype. We derive proof rules for the operators and establish a connection with the theory of relational parametricity. Thus, the resulting proofs are trustworthy and comprehensible, and the framework is extensible and widely applicable.

The framework is used in the accompanying AFP entry "Game-based Cryptography in HOL". There, we show-case our framework by formalizing different game-based proofs from the literature. This formalisation continues the work described in the author's ESOP 2016 paper.

[Constructive_Cryptography] title = Constructive Cryptography in HOL author = Andreas Lochbihler , S. Reza Sefidgar<> topic = Computer science/Security/Cryptography, Mathematics/Probability theory date = 2018-12-17 notify = mail@andreas-lochbihler.de, reza.sefidgar@inf.ethz.ch abstract = Inspired by Abstract Cryptography, we extend CryptHOL, a framework for formalizing game-based proofs, with an abstract model of Random Systems and provide proof rules about their composition and equality. This foundation facilitates the formalization of Constructive Cryptography proofs, where the security of a cryptographic scheme is realized as a special form of construction in which a complex random system is built from simpler ones. This is a first step towards a fully-featured compositional framework, similar to the Universal Composability framework, that supports formalization of simulation-based proofs. [Probabilistic_While] title = Probabilistic while loop author = Andreas Lochbihler topic = Computer science/Functional programming, Mathematics/Probability theory, Computer science/Algorithms date = 2017-05-05 notify = mail@andreas-lochbihler.de abstract = This AFP entry defines a probabilistic while operator based on sub-probability mass functions and formalises zero-one laws and variant rules for probabilistic loop termination. As applications, we implement probabilistic algorithms for the Bernoulli, geometric and arbitrary uniform distributions that only use fair coin flips, and prove them correct and terminating with probability 1. extra-history = Change history: [2018-02-02]: Added a proof that probabilistic conditioning can be implemented by repeated sampling. (revision 305867c4e911)
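A standard way to sample a Bernoulli distribution from fair coin flips, in the spirit of the Probabilistic_While entry above, is to lazily compare the bit stream of a uniform number u ∈ [0, 1) against the binary expansion of p; each digit decides with probability 1/2, so the loop terminates with probability 1. This Python sketch is an illustration of that construction, not the exact algorithm of the formalisation:

```python
import random

def bernoulli(p, coin):
    """Sample Bernoulli(p) using only the 0/1 flips produced by `coin`.

    Walks down the binary digits of p and of a lazily generated uniform
    u in [0,1); returns (u < p) as soon as the digits differ."""
    while True:
        p *= 2
        if p >= 1:
            bit_p, p = 1, p - 1
        else:
            bit_p = 0
        bit_u = coin()
        if bit_u < bit_p:
            return True   # u < p decided at this digit
        if bit_u > bit_p:
            return False  # u > p decided at this digit
        # digits agree: inspect the next binary digit

# usage with a fair coin:
fair = lambda: random.getrandbits(1)
sample = bernoulli(0.25, fair)
```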
[Monad_Normalisation] title = Monad normalisation author = Joshua Schneider <>, Manuel Eberl , Andreas Lochbihler topic = Tools, Computer science/Functional programming, Logic/Rewriting date = 2017-05-05 notify = mail@andreas-lochbihler.de abstract = The usual monad laws can directly be used as rewrite rules for Isabelle’s simplifier to normalise monadic HOL terms and decide equivalences. In a commutative monad, however, the commutativity law is a higher-order permutative rewrite rule that makes the simplifier loop. This AFP entry implements a simproc that normalises monadic expressions in commutative monads using ordered rewriting. The simproc can also permute computations across control operators like if and case. [Monomorphic_Monad] title = Effect polymorphism in higher-order logic author = Andreas Lochbihler topic = Computer science/Functional programming date = 2017-05-05 notify = mail@andreas-lochbihler.de abstract = The notion of a monad cannot be expressed within higher-order logic (HOL) due to type system restrictions. We show that if a monad is used with values of only one type, this notion can be formalised in HOL. Based on this idea, we develop a library of effect specifications and implementations of monads and monad transformers. Hence, we can abstract over the concrete monad in HOL definitions and thus use the same definition for different (combinations of) effects. We illustrate the usefulness of effect polymorphism with a monadic interpreter for a simple language. extra-history = Change history: [2018-02-15]: added further specifications and implementations of non-determinism; more examples (revision bc5399eea78e)
[Constructor_Funs] title = Constructor Functions author = Lars Hupel topic = Tools date = 2017-04-19 notify = hupel@in.tum.de abstract = Isabelle's code generator performs various adaptations for target languages. Among others, constructor applications have to be fully saturated. That means that for constructor calls occurring as arguments to higher-order functions, synthetic lambdas have to be inserted. This entry provides tooling to avoid this construction altogether by introducing constructor functions. [Lazy_Case] title = Lazifying case constants author = Lars Hupel topic = Tools date = 2017-04-18 notify = hupel@in.tum.de abstract = Isabelle's code generator performs various adaptations for target languages. Among others, case statements are printed as match expressions. Internally, this is a sophisticated procedure, because in HOL, case statements are represented as nested calls to the case combinators as generated by the datatype package. Furthermore, the procedure relies on laziness of match expressions in the target language, i.e., that branches guarded by patterns that fail to match are not evaluated. Similarly, if-then-else is printed to the corresponding construct in the target language. This entry provides tooling to replace these special cases in the code generator by ignoring these target language features, instead printing case expressions and if-then-else as functions. [Dict_Construction] title = Dictionary Construction author = Lars Hupel topic = Tools date = 2017-05-24 notify = hupel@in.tum.de abstract = Isabelle's code generator natively supports type classes. For targets that do not have language support for classes and instances, it performs the well-known dictionary translation, as described by Haftmann and Nipkow. This translation happens outside the logic, i.e., there is no guarantee that it is correct, besides the pen-and-paper proof. 
This work implements a certified dictionary translation that produces new class-free constants and derives equality theorems. [Higher_Order_Terms] title = An Algebra for Higher-Order Terms author = Lars Hupel contributors = Yu Zhang <> topic = Computer science/Programming languages/Lambda calculi date = 2019-01-15 notify = lars@hupel.info abstract = In this formalization, I introduce a higher-order term algebra, generalizing the notions of free variables, matching, and substitution. The need arose from the work on a verified compiler from Isabelle to CakeML. Terms can be thought of as consisting of a generic (free variables, constants, application) and a specific part. As example applications, this entry provides instantiations for de-Bruijn terms, terms with named variables, and Blanchette’s λ-free higher-order terms. Furthermore, I implement translation functions between de-Bruijn terms and named terms and prove their correctness. [Subresultants] title = Subresultants author = Sebastiaan Joosten , René Thiemann , Akihisa Yamada topic = Mathematics/Algebra date = 2017-04-06 notify = rene.thiemann@uibk.ac.at abstract = We formalize the theory of subresultants and the subresultant polynomial remainder sequence as described by Brown and Traub. As a result, we obtain efficient certified algorithms for computing the resultant and the greatest common divisor of polynomials. [Comparison_Sort_Lower_Bound] title = Lower bound on comparison-based sorting algorithms author = Manuel Eberl topic = Computer science/Algorithms date = 2017-03-15 notify = manuel@pruvisto.org abstract =

This article contains a formal proof of the well-known fact that the number of comparisons that a comparison-based sorting algorithm needs to perform to sort a list of length n is at least log₂(n!) in the worst case, i. e. Ω(n log n).

For this purpose, a shallow embedding for comparison-based sorting algorithms is defined: a sorting algorithm is a recursive datatype containing either a HOL function or a query of a comparison oracle with a continuation containing the remaining computation. This makes it possible to force the algorithm to use only comparisons and to track the number of comparisons made.
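The information-theoretic bound can be illustrated numerically (an informal Python sketch, not the formalisation's embedding): count the comparisons an ordinary merge sort performs on a worst-case-like input and check that the count is at least ⌈log₂(n!)⌉.

```python
from math import ceil, factorial, log2

def comparisons_merge_sort(xs):
    """Merge sort that counts the element comparisons it performs."""
    count = 0
    def merge(a, b):
        nonlocal count
        out, i, j = [], 0, 0
        while i < len(a) and j < len(b):
            count += 1
            if a[i] <= b[j]:
                out.append(a[i]); i += 1
            else:
                out.append(b[j]); j += 1
        return out + a[i:] + b[j:]
    def sort(ys):
        if len(ys) <= 1:
            return ys
        m = len(ys) // 2
        return merge(sort(ys[:m]), sort(ys[m:]))
    sort(list(xs))
    return count

n = 8
lower = ceil(log2(factorial(n)))  # = 16: no comparison sort can beat this in the worst case
assert lower <= comparisons_merge_sort([5, 1, 8, 2, 7, 3, 6, 4])
```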

[Quick_Sort_Cost] title = The number of comparisons in QuickSort author = Manuel Eberl topic = Computer science/Algorithms date = 2017-03-15 notify = manuel@pruvisto.org abstract =

We give a formal proof of the well-known results about the number of comparisons performed by two variants of QuickSort: first, the expected number of comparisons of randomised QuickSort (i. e. QuickSort with random pivot choice) is 2 (n+1) Hₙ - 4 n, which is asymptotically equivalent to 2 n ln n; second, the number of comparisons performed by the classic non-randomised QuickSort has the same distribution in the average case as the randomised one.
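The closed form can be checked against the standard recurrence C(0) = 0, C(n) = n - 1 + (2/n) Σ_{k<n} C(k) with exact rational arithmetic (an informal Python sketch, not the formal proof):

```python
from fractions import Fraction

def expected_comparisons(n):
    """Expected comparisons of randomised QuickSort via the recurrence
    C(0) = 0,  C(n) = n - 1 + (2/n) * sum_{k<n} C(k)."""
    C = [Fraction(0)]
    for m in range(1, n + 1):
        C.append(m - 1 + Fraction(2, m) * sum(C))
    return C[n]

def closed_form(n):
    """2 (n+1) H_n - 4 n, with H_n the n-th harmonic number."""
    H = sum(Fraction(1, k) for k in range(1, n + 1))
    return 2 * (n + 1) * H - 4 * n

assert all(expected_comparisons(n) == closed_form(n) for n in range(12))
```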

[Random_BSTs] title = Expected Shape of Random Binary Search Trees author = Manuel Eberl topic = Computer science/Data structures date = 2017-04-04 notify = manuel@pruvisto.org abstract =

This entry contains proofs for the textbook results about the distributions of the height and internal path length of random binary search trees (BSTs), i. e. BSTs that are formed by taking an empty BST and inserting elements from a fixed set in random order.

In particular, we prove a logarithmic upper bound on the expected height and the Θ(n log n) closed-form solution for the expected internal path length in terms of the harmonic numbers. We also show how the internal path length relates to the average-case cost of a lookup in a BST.
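For small n, the harmonic-number closed form for the expected internal path length, 2 (n+1) Hₙ - 4 n, can be verified by exhaustively averaging over all n! insertion orders (an informal Python sketch, not the formalisation):

```python
from fractions import Fraction
from itertools import permutations

def bst_insert(tree, x):
    """Insert into an unbalanced BST encoded as nested tuples (l, v, r)."""
    if tree is None:
        return (None, x, None)
    l, v, r = tree
    return (bst_insert(l, x), v, r) if x < v else (l, v, bst_insert(r, x))

def internal_path_length(tree, depth=0):
    """Sum of the depths of all nodes."""
    if tree is None:
        return 0
    l, _, r = tree
    return depth + internal_path_length(l, depth + 1) + internal_path_length(r, depth + 1)

def expected_ipl(n):
    """Average internal path length over all n! insertion orders."""
    total, count = Fraction(0), 0
    for order in permutations(range(n)):
        t = None
        for x in order:
            t = bst_insert(t, x)
        total += internal_path_length(t)
        count += 1
    return total / count

def harmonic_closed_form(n):
    H = sum(Fraction(1, k) for k in range(1, n + 1))
    return 2 * (n + 1) * H - 4 * n

assert all(expected_ipl(n) == harmonic_closed_form(n) for n in range(1, 7))
```

Dividing the internal path length by n gives the average node depth, i.e. the average-case cost of a successful lookup.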

[Randomised_BSTs] title = Randomised Binary Search Trees author = Manuel Eberl topic = Computer science/Data structures date = 2018-10-19 notify = manuel@pruvisto.org abstract =

This work is a formalisation of the Randomised Binary Search Trees introduced by Martínez and Roura, including definitions and correctness proofs.

Like randomised treaps, they are a probabilistic data structure that behaves exactly as if elements were inserted into a non-balancing BST in random order. However, unlike treaps, they only use discrete probability distributions, but their use of randomness is more complicated.
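The key idea can be sketched as follows (an informal Python illustration in the spirit of Martínez and Roura's insertion, not the verified development): a new element becomes the root of the current subtree with probability 1/(size+1), which is exactly the probability it would have been inserted first in a random order.

```python
import random

class Node:
    __slots__ = ("key", "size", "left", "right")
    def __init__(self, key, left=None, right=None):
        self.key, self.left, self.right = key, left, right
        self.size = 1 + size(left) + size(right)

def size(t):
    return t.size if t else 0

def split(t, x):
    """Split t into (keys < x, keys > x); assumes x does not occur in t."""
    if t is None:
        return None, None
    if x < t.key:
        l, r = split(t.left, x)
        return l, Node(t.key, r, t.right)
    l, r = split(t.right, x)
    return Node(t.key, t.left, l), r

def insert(t, x, rng):
    """x becomes the root of the current subtree with probability
    1/(size(t)+1); otherwise we recurse as in a plain BST."""
    if rng.randrange(size(t) + 1) == 0:
        l, r = split(t, x)
        return Node(x, l, r)
    if x < t.key:
        return Node(t.key, insert(t.left, x, rng), t.right)
    return Node(t.key, t.left, insert(t.right, x, rng))

def inorder(t):
    return inorder(t.left) + [t.key] + inorder(t.right) if t else []
```

Whatever random choices are made, the inorder traversal is sorted; only the tree's shape is random.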

[E_Transcendental] title = The Transcendence of e author = Manuel Eberl topic = Mathematics/Analysis, Mathematics/Number theory date = 2017-01-12 notify = manuel@pruvisto.org abstract =

This work contains a proof that Euler's number e is transcendental. The proof follows the standard approach of assuming that e is algebraic and then using a specific integer polynomial to derive two inconsistent bounds, leading to a contradiction.

This kind of approach can be found in many different sources; this formalisation mostly follows a PlanetMath article by Roger Lipsett.

[Pi_Transcendental] title = The Transcendence of π author = Manuel Eberl topic = Mathematics/Number theory date = 2018-09-28 notify = manuel@pruvisto.org abstract =

This entry shows the transcendence of π based on the classic proof using the fundamental theorem of symmetric polynomials first given by von Lindemann in 1882, but the formalisation mostly follows the version by Niven. The proof reuses much of the machinery developed in the AFP entry on the transcendence of e.

[Hermite_Lindemann] title = The Hermite–Lindemann–Weierstraß Transcendence Theorem author = Manuel Eberl topic = Mathematics/Number theory date = 2021-03-03 notify = manuel@pruvisto.org abstract =

This article provides a formalisation of the Hermite-Lindemann-Weierstraß Theorem (also known as simply Hermite-Lindemann or Lindemann-Weierstraß). This theorem is one of the crowning achievements of 19th century number theory.

The theorem states that if $\alpha_1, \ldots, \alpha_n\in\mathbb{C}$ are algebraic numbers that are linearly independent over $\mathbb{Z}$, then $e^{\alpha_1},\ldots,e^{\alpha_n}$ are algebraically independent over $\mathbb{Q}$.

Like the previous formalisation in Coq by Bernard, I proceeded by formalising Baker's version of the theorem and proof and then deriving the original one from that. Baker's version states that for any algebraic numbers $\beta_1, \ldots, \beta_n\in\mathbb{C}$ and distinct algebraic numbers $\alpha_1, \ldots, \alpha_n\in\mathbb{C}$, we have $\beta_1 e^{\alpha_1} + \ldots + \beta_n e^{\alpha_n} = 0$ if and only if all the $\beta_i$ are zero.

This has a number of direct corollaries, e.g.:

  • $e$ and $\pi$ are transcendental
  • $e^z$, $\sin z$, $\tan z$, etc. are transcendental for algebraic $z\in\mathbb{C}\setminus\{0\}$
  • $\ln z$ is transcendental for algebraic $z\in\mathbb{C}\setminus\{0, 1\}$
[DFS_Framework] title = A Framework for Verifying Depth-First Search Algorithms author = Peter Lammich , René Neumann notify = lammich@in.tum.de date = 2016-07-05 topic = Computer science/Algorithms/Graph abstract =

This entry presents a framework for the modular verification of DFS-based algorithms, which is described in our [CPP-2015] paper. It provides a generic DFS algorithm framework that can be parameterized with user-defined actions on certain events (e.g. discovery of a new node). It comes with an extensible library of invariants, which can be used to derive invariants of a specific parameterization. Using refinement techniques, efficient implementations of the algorithms can easily be derived. Here, the framework comes with templates for a recursive and a tail-recursive implementation, and also with several templates for implementing the data structures required by the DFS algorithm. Finally, this entry contains a set of re-usable DFS-based algorithms, which illustrate the application of the framework.

[CPP-2015] Peter Lammich, René Neumann: A Framework for Verifying Depth-First Search Algorithms. CPP 2015: 137-146
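The idea of a DFS parameterized by user-defined actions on events can be sketched informally (a hypothetical Python interface, loosely mirroring the framework's design, not its verified Isabelle implementation):

```python
def dfs(graph, start, on_discover=lambda v: None, on_finish=lambda v: None):
    """Depth-first search with user-supplied hooks for the 'discover' and
    'finish' events; graph is a dict mapping nodes to successor lists."""
    visited = set()
    def visit(v):
        visited.add(v)
        on_discover(v)
        for w in graph.get(v, []):
            if w not in visited:
                visit(w)
        on_finish(v)
    visit(start)
    return visited

# usage: collect the discovery order of a small graph
graph = {0: [1, 2], 1: [3], 2: [3], 3: []}
order = []
dfs(graph, 0, on_discover=order.append)
assert order == [0, 1, 3, 2]
```

Instantiating the hooks differently yields different algorithms (e.g. recording finish times gives a topological sort for acyclic graphs), which is the kind of reuse the framework's invariant library supports formally.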

[Flow_Networks] title = Flow Networks and the Min-Cut-Max-Flow Theorem author = Peter Lammich , S. Reza Sefidgar <> topic = Mathematics/Graph theory date = 2017-06-01 notify = lammich@in.tum.de abstract = We present a formalization of flow networks and the Min-Cut-Max-Flow theorem. Our formal proof closely follows a standard textbook proof, and is accessible even without being an expert in Isabelle/HOL, the interactive theorem prover used for the formalization. [Prpu_Maxflow] title = Formalizing Push-Relabel Algorithms author = Peter Lammich , S. Reza Sefidgar <> topic = Computer science/Algorithms/Graph, Mathematics/Graph theory date = 2017-06-01 notify = lammich@in.tum.de abstract = We present a formalization of push-relabel algorithms for computing the maximum flow in a network. We start with Goldberg et al.'s generic push-relabel algorithm, for which we show correctness and the time complexity bound of O(V²E). We then derive the relabel-to-front and FIFO implementations. Using stepwise refinement techniques, we derive an efficient verified implementation. Our formal proof of the abstract algorithms closely follows a standard textbook proof. It is accessible even without being an expert in Isabelle/HOL, the interactive theorem prover used for the formalization. [Buildings] title = Chamber Complexes, Coxeter Systems, and Buildings author = Jeremy Sylvestre notify = jeremy.sylvestre@ualberta.ca date = 2016-07-01 topic = Mathematics/Algebra, Mathematics/Geometry abstract = We provide a basic formal framework for the theory of chamber complexes and Coxeter systems, and for buildings as thick chamber complexes endowed with a system of apartments. Along the way, we develop some of the general theory of abstract simplicial complexes and of groups (relying on the group_add class for the basics), including free groups and group presentations, and their universal properties. 
The main results verified are that the deletion condition is both necessary and sufficient for a group with a set of generators of order two to be a Coxeter system, and that the apartments in a (thick) building are all uniformly Coxeter. [Algebraic_VCs] title = Program Construction and Verification Components Based on Kleene Algebra author = Victor B. F. Gomes , Georg Struth notify = victor.gomes@cl.cam.ac.uk, g.struth@sheffield.ac.uk date = 2016-06-18 topic = Mathematics/Algebra abstract = Variants of Kleene algebra support program construction and verification by algebraic reasoning. This entry provides a verification component for Hoare logic based on Kleene algebra with tests, verification components for weakest preconditions and strongest postconditions based on Kleene algebra with domain, and a component for step-wise refinement based on refinement Kleene algebra with tests. In addition to these components for the partial correctness of while programs, a verification component for total correctness based on divergence Kleene algebras and one for the partial correctness of recursive programs based on domain quantales are provided. Finally, we have integrated memory models for programs with pointers and a program trace semantics into the weakest precondition component. [C2KA_DistributedSystems] title = Communicating Concurrent Kleene Algebra for Distributed Systems Specification author = Maxime Buyse , Jason Jaskolka topic = Computer science/Automata and formal languages, Mathematics/Algebra date = 2019-08-06 notify = maxime.buyse@polytechnique.edu, jason.jaskolka@carleton.ca abstract = Communicating Concurrent Kleene Algebra (C²KA) is a mathematical framework for capturing the communicating and concurrent behaviour of agents in distributed systems. It extends Hoare et al.'s Concurrent Kleene Algebra (CKA) with communication actions through the notions of stimuli and shared environments. 
C²KA has applications in studying system-level properties of distributed systems such as safety, security, and reliability. In this work, we formalize results about C²KA and its application for distributed systems specification. We first formalize the stimulus structure and behaviour structure (CKA). Next, we combine them to formalize C²KA and its properties. Then, we formalize notions and properties related to the topology of distributed systems and the potential for communication via stimuli and via shared environments of agents, all within the algebraic setting of C²KA. [Card_Equiv_Relations] title = Cardinality of Equivalence Relations author = Lukas Bulwahn notify = lukas.bulwahn@gmail.com date = 2016-05-24 topic = Mathematics/Combinatorics abstract = This entry provides formulae for counting the number of equivalence relations and partial equivalence relations over a finite carrier set with given cardinality. To count the number of equivalence relations, we provide bijections between equivalence relations and set partitions, and then transfer the main results of the two AFP entries, Cardinality of Set Partitions and Spivey's Generalized Recurrence for Bell Numbers, to theorems on equivalence relations. To count the number of partial equivalence relations, we observe that counting partial equivalence relations over a set A is equivalent to counting all equivalence relations over all subsets of the set A. From this observation and the results on equivalence relations, we show that the cardinality of partial equivalence relations over a finite set of cardinality n is equal to the (n+1)-th Bell number. [Twelvefold_Way] title = The Twelvefold Way author = Lukas Bulwahn topic = Mathematics/Combinatorics date = 2016-12-29 notify = lukas.bulwahn@gmail.com abstract = This entry provides all cardinality theorems of the Twelvefold Way. 
The Twelvefold Way systematically classifies twelve related combinatorial problems concerning two finite sets, which include counting permutations, combinations, multisets, set partitions and number partitions. This development builds upon the existing formal developments with cardinality theorems for those structures. It provides twelve bijections from the various structures to different equivalence classes on finite functions, and hence, proves cardinality formulae for these equivalence classes on finite functions. [Chord_Segments] title = Intersecting Chords Theorem author = Lukas Bulwahn notify = lukas.bulwahn@gmail.com date = 2016-10-11 topic = Mathematics/Geometry abstract = This entry provides a geometric proof of the intersecting chords theorem. The theorem states that when two chords intersect each other inside a circle, the products of their segments are equal. After a short review of existing proofs in the literature, I decided to use a proof approach that employs reasoning about lengths of line segments, the orthogonality of two lines and the Pythagoras Law. Hence, one can understand the formalized proof easily with the knowledge of a few general geometric facts that are commonly taught in high-school. This theorem is the 55th theorem of the Top 100 Theorems list. [Category3] title = Category Theory with Adjunctions and Limits author = Eugene W. Stark notify = stark@cs.stonybrook.edu date = 2016-06-26 topic = Mathematics/Category theory abstract =

This article attempts to develop a usable framework for doing category theory in Isabelle/HOL. Our point of view, which to some extent differs from that of the previous AFP articles on the subject, is to try to explore how category theory can be done efficaciously within HOL, rather than trying to match exactly the way things are done using a traditional approach. To this end, we define the notion of category in an "object-free" style, in which a category is represented by a single partial composition operation on arrows. This way of defining categories provides some advantages in the context of HOL, including the ability to avoid the use of records and the possibility of defining functors and natural transformations simply as certain functions on arrows, rather than as composite objects. We define various constructions associated with the basic notions, including: dual category, product category, functor category, discrete category, free category, functor composition, and horizontal and vertical composite of natural transformations. A "set category" locale is defined that axiomatizes the notion "category of all sets at a type and all functions between them," and a fairly extensive set of properties of set categories is derived from the locale assumptions. The notion of a set category is used to prove the Yoneda Lemma in a general setting of a category equipped with a "hom embedding," which maps arrows of the category to the "universe" of the set category. We also give a treatment of adjunctions, defining adjunctions via left and right adjoint functors, natural bijections between hom-sets, and unit and counit natural transformations, and showing the equivalence of these definitions. We also develop the theory of limits, including representations of functors, diagrams and cones, and diagonal functors. We show that right adjoint functors preserve limits, and that limits can be constructed via products and equalizers. 
We characterize the conditions under which limits exist in a set category. We also examine the case of limits in a functor category, ultimately culminating in a proof that the Yoneda embedding preserves limits.

Revisions made subsequent to the first version of this article added material on equivalence of categories, cartesian categories, categories with pullbacks, categories with finite limits, and cartesian closed categories. A construction was given of the category of hereditarily finite sets and functions between them, and it was shown that this category is cartesian closed.

extra-history = Change history: [2018-05-29]: Revised axioms for the category locale. Introduced notation for composition and "in hom". (revision 8318366d4575)
[2020-02-15]: Move ConcreteCategory.thy from Bicategory to Category3 and use it systematically. Make other minor improvements throughout. (revision a51840d36867)
[2020-07-10]: Added new material, mostly centered around cartesian categories. (revision 06640f317a79)
[2020-11-04]: Minor modifications and extensions made in conjunction with the addition of new material to Bicategory. (revision 472cb2268826)
[2021-07-22]: Minor changes to sublocale declarations related to functor/natural transformation to avoid issues with global interpretations reported 2/2/2021 by Filip Smola. (revision 49d3aa43c180)
[MonoidalCategory] title = Monoidal Categories author = Eugene W. Stark topic = Mathematics/Category theory date = 2017-05-04 notify = stark@cs.stonybrook.edu abstract =

Building on the formalization of basic category theory set out in the author's previous AFP article, the present article formalizes some basic aspects of the theory of monoidal categories. Among the notions defined here are monoidal category, monoidal functor, and equivalence of monoidal categories. The main theorems formalized are MacLane's coherence theorem and the constructions of the free monoidal category and free strict monoidal category generated by a given category. The coherence theorem is proved syntactically, using a structurally recursive approach to reduction of terms that might have some novel aspects. We also give proofs of some results given by Etingof et al., which may prove useful in a formal setting. In particular, we show that the left and right unitors need not be taken as given data in the definition of monoidal category, nor does the definition of monoidal functor need to take as given a specific isomorphism expressing the preservation of the unit object. Our definitions of monoidal category and monoidal functor are stated so as to take advantage of the economy afforded by these facts.

Revisions made subsequent to the first version of this article added material on cartesian monoidal categories: showing that the underlying category of a cartesian monoidal category is a cartesian category, and that every cartesian category extends to a cartesian monoidal category.

extra-history = Change history: [2017-05-18]: Integrated material from MonoidalCategory/Category3Adapter into Category3/ and deleted adapter. (revision 015543cdd069)
[2018-05-29]: Modifications required due to 'Category3' changes. Introduced notation for "in hom". (revision 8318366d4575)
[2020-02-15]: Cosmetic improvements. (revision a51840d36867)
[2020-07-10]: Added new material on cartesian monoidal categories. (revision 06640f317a79)
[Card_Multisets] title = Cardinality of Multisets author = Lukas Bulwahn notify = lukas.bulwahn@gmail.com date = 2016-06-26 topic = Mathematics/Combinatorics abstract =

This entry provides three lemmas to count the number of multisets of a given size and finite carrier set. The first lemma provides a cardinality formula assuming that the multiset's elements are chosen from the given carrier set. The latter two lemmas provide formulas assuming that the multiset's elements also cover the given carrier set, i.e., each element of the carrier set occurs in the multiset at least once.

The proof of the first lemma uses the argument of the recurrence relation for counting multisets. The proof of the second lemma is straightforward, and the proof of the third lemma is easily obtained using the first cardinality lemma. A challenge for the formalization is the derivation of the required induction rule, which is a special combination of the induction rules for finite sets and natural numbers. The induction rule is derived by defining a suitable inductive predicate and transforming the predicate's induction rule.

[Posix-Lexing] title = POSIX Lexing with Derivatives of Regular Expressions author = Fahad Ausaf , Roy Dyckhoff , Christian Urban notify = christian.urban@kcl.ac.uk date = 2016-05-24 topic = Computer science/Automata and formal languages abstract = Brzozowski introduced the notion of derivatives for regular expressions. They can be used for a very simple regular expression matching algorithm. Sulzmann and Lu cleverly extended this algorithm in order to deal with POSIX matching, which is the underlying disambiguation strategy for regular expressions needed in lexers. In this entry we give our inductive definition of what a POSIX value is and show (i) that such a value is unique (for given regular expression and string being matched) and (ii) that Sulzmann and Lu's algorithm always generates such a value (provided that the regular expression matches the string). We also prove the correctness of an optimised version of the POSIX matching algorithm. [LocalLexing] title = Local Lexing author = Steven Obua topic = Computer science/Automata and formal languages date = 2017-04-28 notify = steven@recursivemind.com abstract = This formalisation accompanies the paper Local Lexing which introduces a novel parsing concept of the same name. The paper also gives a high-level algorithm for local lexing as an extension of Earley's algorithm. This formalisation proves the algorithm to be correct with respect to its local lexing semantics. As a special case, this formalisation thus also contains a proof of the correctness of Earley's algorithm. The paper contains a short outline of how this formalisation is organised. [MFMC_Countable] title = A Formal Proof of the Max-Flow Min-Cut Theorem for Countable Networks author = Andreas Lochbihler date = 2016-05-09 topic = Mathematics/Graph theory abstract = This article formalises a proof of the maximum-flow minimal-cut theorem for networks with countably many edges. 
A network is a directed graph with non-negative real-valued edge labels and two dedicated vertices, the source and the sink. A flow in a network assigns non-negative real numbers to the edges such that for all vertices except for the source and the sink, the sum of values on incoming edges equals the sum of values on outgoing edges. A cut is a subset of the vertices which contains the source, but not the sink. Our theorem states that in every network, there is a flow and a cut such that the flow saturates all the edges going out of the cut and is zero on all the incoming edges. The proof is based on the paper The Max-Flow Min-Cut theorem for countable networks by Aharoni et al. Additionally, we prove a characterisation of the lifting operation for relations on discrete probability distributions, which leads to a concise proof of its distributivity over relation composition. notify = mail@andreas-lochbihler.de extra-history = Change history: [2017-09-06]: derive characterisation for the lifting operation on discrete distributions from finite version of the max-flow min-cut theorem (revision a7a198f5bab0)
[2020-12-19]: simpler proof of linkability for bounded unhindered bipartite webs, leading to a simpler proof for networks with bounded out-capacities (revision 93ca33f4d915)
[2021-08-13]: generalize the derivation of the characterisation for the relator of discrete probability distributions to work for the bounded and unbounded MFMC theorem (revision 3c85bb52bbe6)
[Liouville_Numbers] title = Liouville numbers author = Manuel Eberl date = 2015-12-28 topic = Mathematics/Analysis, Mathematics/Number theory abstract =

Liouville numbers are a class of transcendental numbers that can be approximated particularly well with rational numbers. Historically, they were the first numbers whose transcendence was proven.

In this entry, we define the concept of Liouville numbers as well as the standard construction to obtain Liouville numbers (including Liouville's constant) and we prove their most important properties: irrationality and transcendence.

The proof is very elementary and requires only standard arithmetic, the Mean Value Theorem for polynomials, and the boundedness of polynomials on compact intervals.

notify = manuel@pruvisto.org [Triangle] title = Basic Geometric Properties of Triangles author = Manuel Eberl date = 2015-12-28 topic = Mathematics/Geometry abstract =

This entry contains a definition of angles between vectors and between three points. Building on this, we prove basic geometric properties of triangles, such as the Isosceles Triangle Theorem, the Law of Sines and the Law of Cosines, that the sum of the angles of a triangle is π, and the congruence theorems for triangles.

The definitions and proofs were developed following those by John Harrison in HOL Light. However, due to Isabelle's type class system, all definitions and theorems in the Isabelle formalisation hold for all real inner product spaces.

notify = manuel@pruvisto.org [Prime_Harmonic_Series] title = The Divergence of the Prime Harmonic Series author = Manuel Eberl date = 2015-12-28 topic = Mathematics/Number theory abstract =

In this work, we prove the lower bound ln(H_n) - ln(5/3) for the partial sum of the Prime Harmonic Series and, based on this, the divergence of the Prime Harmonic Series ∑[p prime] 1/p.

The proof relies on the unique squarefree decomposition of natural numbers. This is similar to Euler's original proof (which was highly informal and morally questionable). Its advantage over proofs by contradiction, like the famous one by Paul Erdős, is that it provides a relatively good lower bound for the partial sums.

notify = manuel@pruvisto.org [Descartes_Sign_Rule] title = Descartes' Rule of Signs author = Manuel Eberl date = 2015-12-28 topic = Mathematics/Analysis abstract =

Descartes' Rule of Signs relates the number of positive real roots of a polynomial with the number of sign changes in its coefficient sequence.

Our proof follows the simple inductive proof given by Rob Arthan, which was also used by John Harrison in his HOL Light formalisation. We proved most of the lemmas for arbitrary linearly-ordered integrity domains (e.g. integers, rationals, reals); the main result, however, requires the intermediate value theorem and was therefore only proven for real polynomials.

notify = manuel@pruvisto.org [Euler_MacLaurin] title = The Euler–MacLaurin Formula author = Manuel Eberl topic = Mathematics/Analysis date = 2017-03-10 notify = manuel@pruvisto.org abstract =

The Euler-MacLaurin formula relates the value of a discrete sum to that of the corresponding integral in terms of the derivatives at the borders of the summation and a remainder term. Since the remainder term is often very small as the summation bounds grow, this can be used to compute asymptotic expansions for sums.

This entry contains a proof of this formula for functions from the reals to an arbitrary Banach space. Two variants of the formula are given: the standard textbook version and a variant outlined in Concrete Mathematics that is more useful for deriving asymptotic estimates.

As example applications, we use that formula to derive the full asymptotic expansion of the harmonic numbers and the sum of inverse squares.

[Card_Partitions] title = Cardinality of Set Partitions author = Lukas Bulwahn date = 2015-12-12 topic = Mathematics/Combinatorics abstract = The theory's main theorem states that the cardinality of set partitions of size k on a carrier set of size n is expressed by Stirling numbers of the second kind. In Isabelle, Stirling numbers of the second kind are defined in the AFP entry `Discrete Summation` through their well-known recurrence relation. The main theorem relates them to the alternative definition as cardinality of set partitions. The proof follows the simple and short explanation in Richard P. Stanley's `Enumerative Combinatorics: Volume 1` and Wikipedia, and unravels the full details and implicit reasoning steps of these explanations. notify = lukas.bulwahn@gmail.com [Card_Number_Partitions] title = Cardinality of Number Partitions author = Lukas Bulwahn date = 2016-01-14 topic = Mathematics/Combinatorics abstract = This entry provides a basic library for number partitions, defines the two-argument partition function through its recurrence relation and relates this partition function to the cardinality of number partitions. The main proof shows that the recursively-defined partition function with arguments n and k equals the cardinality of number partitions of n with exactly k parts. The combinatorial proof follows the proof sketch of Theorem 2.4.1 in Mazur's textbook `Combinatorics: A Guided Tour`. This entry can serve as starting point for various more intrinsic properties about number partitions, the partition function and related recurrence relations. notify = lukas.bulwahn@gmail.com [Multirelations] title = Binary Multirelations author = Hitoshi Furusawa , Georg Struth date = 2015-06-11 topic = Mathematics/Algebra abstract = Binary multirelations associate elements of a set with its subsets; hence they are binary relations from a set to its power set. 
Applications include alternating automata, models and logics for games, program semantics with dual demonic and angelic nondeterministic choices and concurrent dynamic logics. This proof document supports an arXiv article that formalises the basic algebra of multirelations and proposes axiom systems for them, ranging from weak bi-monoids to weak bi-quantales. notify = [Noninterference_Generic_Unwinding] title = The Generic Unwinding Theorem for CSP Noninterference Security author = Pasquale Noce date = 2015-06-11 topic = Computer science/Security, Computer science/Concurrency/Process calculi abstract =

The classical definition of noninterference security for a deterministic state machine with outputs requires considering the outputs produced by machine actions after any trace, i.e. any indefinitely long sequence of actions, of the machine. In order to render the verification of the security of such a machine more straightforward, there is a need for some sufficient condition for security such that just individual actions, rather than unbounded sequences of actions, have to be considered.

By extending previous results applying to transitive noninterference policies, Rushby has proven an unwinding theorem that provides a sufficient condition of this kind in the general case of a possibly intransitive policy. This condition has to be satisfied by a generic function mapping security domains into equivalence relations over machine states.

An analogous problem arises for CSP noninterference security, whose definition requires considering any possible future, i.e. any indefinitely long sequence of subsequent events and any indefinitely large set of refused events associated with that sequence, for each process trace.

This paper provides a sufficient condition for CSP noninterference security, which indeed requires considering just individual accepted and refused events and applies to the general case of a possibly intransitive policy. This condition follows Rushby's condition for classical noninterference security, and has to be satisfied by a generic function mapping security domains into equivalence relations over process traces; hence its name, Generic Unwinding Theorem. Variants of this theorem applying to deterministic processes and trace set processes are also proven. Finally, the sufficient condition for security expressed by the theorem is shown not to be a necessary condition as well, viz. there exists a secure process such that no domain-relation map satisfying the condition exists.

notify = [Noninterference_Ipurge_Unwinding] title = The Ipurge Unwinding Theorem for CSP Noninterference Security author = Pasquale Noce date = 2015-06-11 topic = Computer science/Security abstract =

The definition of noninterference security for Communicating Sequential Processes requires considering any possible future, i.e. any indefinitely long sequence of subsequent events and any indefinitely large set of refused events associated with that sequence, for each process trace. In order to render the verification of the security of a process more straightforward, there is a need for some sufficient condition for security such that just individual accepted and refused events, rather than unbounded sequences and sets of events, have to be considered.

Of course, if such a sufficient condition were necessary as well, it would be even more valuable, since it would permit one to prove not only that a process is secure by verifying that the condition holds, but also that a process is not secure by verifying that the condition fails to hold.

This paper provides a necessary and sufficient condition for CSP noninterference security, which indeed requires considering just individual accepted and refused events and applies to the general case of a possibly intransitive policy. This condition follows Rushby's output consistency for deterministic state machines with outputs, and has to be satisfied by a specific function mapping security domains into equivalence relations over process traces. The definition of this function makes use of an intransitive purge function following Rushby's; hence the name given to the condition, Ipurge Unwinding Theorem.

Furthermore, in accordance with Hoare's formal definition of deterministic processes, it is shown that a process is deterministic just in case it is a trace set process, i.e. it may be identified by means of a trace set alone, matching the set of its traces, in place of a failures-divergences pair. Then, variants of the Ipurge Unwinding Theorem are proven for deterministic processes and trace set processes.

notify = [Relational_Method] title = The Relational Method with Message Anonymity for the Verification of Cryptographic Protocols author = Pasquale Noce topic = Computer science/Security date = 2020-12-05 notify = pasquale.noce.lavoro@gmail.com abstract = This paper introduces a new method for the formal verification of cryptographic protocols, the relational method, derived from Paulson's inductive method by means of some enhancements aimed at streamlining formal definitions and proofs, specially for protocols using public key cryptography. Moreover, this paper proposes a method to formalize a further security property, message anonymity, in addition to message confidentiality and authenticity. The relational method, including message anonymity, is then applied to the verification of a sample authentication protocol, comprising Password Authenticated Connection Establishment (PACE) with Chip Authentication Mapping followed by the explicit verification of an additional password over the PACE secure channel. [List_Interleaving] title = Reasoning about Lists via List Interleaving author = Pasquale Noce date = 2015-06-11 topic = Computer science/Data structures abstract =

Among the various mathematical tools introduced in his outstanding work on Communicating Sequential Processes, Hoare has defined "interleaves" as the predicate satisfied by any three lists such that the first list may be split into sublists alternately extracted from the other two, whatever the criterion may be for extracting an item from either one list or the other in each step.

This paper enriches Hoare's definition by identifying this criterion with the truth value of a predicate taking as inputs the head and the tail of the first list. This enhanced "interleaves" predicate turns out to permit the proof of equalities between lists without the need for induction. Some rules that allow one to infer "interleaves" statements without induction, particularly applying to the addition or removal of a prefix to the input lists, are also proven. Finally, a stronger version of the predicate, named "Interleaves", is shown to fulfil further rules applying to the addition or removal of a suffix to the input lists.

notify = [Residuated_Lattices] title = Residuated Lattices author = Victor B. F. Gomes , Georg Struth date = 2015-04-15 topic = Mathematics/Algebra abstract = The theory of residuated lattices, first proposed by Ward and Dilworth, is formalised in Isabelle/HOL. This includes concepts of residuated functions; their adjoints and conjugates. It also contains necessary and sufficient conditions for the existence of these operations in an arbitrary lattice. The mathematical components for residuated lattices are linked to the AFP entry for relation algebra. In particular, we prove Jonsson and Tsinakis conditions for a residuated boolean algebra to form a relation algebra. notify = g.struth@sheffield.ac.uk [ConcurrentGC] title = Relaxing Safely: Verified On-the-Fly Garbage Collection for x86-TSO author = Peter Gammie , Tony Hosking , Kai Engelhardt <> date = 2015-04-13 topic = Computer science/Algorithms/Concurrent abstract =

We use ConcurrentIMP to model Schism, a state-of-the-art real-time garbage collection scheme for weak memory, and show that it is safe on x86-TSO.

This development accompanies the PLDI 2015 paper of the same name.

notify = peteg42@gmail.com [List_Update] title = Analysis of List Update Algorithms author = Maximilian P.L. Haslbeck , Tobias Nipkow date = 2016-02-17 topic = Computer science/Algorithms/Online abstract =

These theories formalize the quantitative analysis of a number of classical algorithms for the list update problem: 2-competitiveness of move-to-front, the lower bound of 2 for the competitiveness of deterministic list update algorithms and 1.6-competitiveness of the randomized COMB algorithm, the best randomized list update algorithm known to date. The material is based on the first two chapters of Online Computation and Competitive Analysis by Borodin and El-Yaniv.

For an informal description see the FSTTCS 2016 publication Verified Analysis of List Update Algorithms by Haslbeck and Nipkow.

notify = nipkow@in.tum.de [ConcurrentIMP] title = Concurrent IMP author = Peter Gammie date = 2015-04-13 topic = Computer science/Programming languages/Logics abstract = ConcurrentIMP extends the small imperative language IMP with control non-determinism and constructs for synchronous message passing. notify = peteg42@gmail.com [TortoiseHare] title = The Tortoise and Hare Algorithm author = Peter Gammie date = 2015-11-18 topic = Computer science/Algorithms abstract = We formalize the Tortoise and Hare cycle-finding algorithm ascribed to Floyd by Knuth, and an improved version due to Brent. notify = peteg42@gmail.com [UPF] title = The Unified Policy Framework (UPF) author = Achim D. Brucker , Lukas Brügger , Burkhart Wolff date = 2014-11-28 topic = Computer science/Security abstract = We present the Unified Policy Framework (UPF), a generic framework for modelling security (access-control) policies. UPF emphasizes the view that a policy is a policy decision function that grants or denies access to resources, permissions, etc. In other words, instead of modelling the relations of permitted or prohibited requests directly, we model the concrete function that implements the policy decision point in a system. In more detail, UPF is based on the following four principles: 1) Functional representation of policies, 2) No conflicts are possible, 3) Three-valued decision type (allow, deny, undefined), 4) Output type not containing the decision only. notify = adbrucker@0x5f.org, wolff@lri.fr, lukas.a.bruegger@gmail.com [UPF_Firewall] title = Formal Network Models and Their Application to Firewall Policies author = Achim D. Brucker , Lukas Brügger<>, Burkhart Wolff topic = Computer science/Security, Computer science/Networks date = 2017-01-08 notify = adbrucker@0x5f.org abstract = We present a formal model of network protocols and their application to modeling firewall policies. The formalization is based on the Unified Policy Framework (UPF). 
The formalization was originally developed for generating test cases for testing the security configuration of actual firewalls and routers (middle-boxes) using HOL-TestGen. Our work focuses on modeling application-level protocols on top of TCP/IP. [AODV] title = Loop freedom of the (untimed) AODV routing protocol author = Timothy Bourke , Peter Höfner date = 2014-10-23 topic = Computer science/Concurrency/Process calculi abstract =

The Ad hoc On-demand Distance Vector (AODV) routing protocol allows the nodes in a Mobile Ad hoc Network (MANET) or a Wireless Mesh Network (WMN) to know where to forward data packets. Such a protocol is ‘loop free’ if it never leads to routing decisions that forward packets in circles.
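Loop freedom can be made concrete on a flattened view of the network: collapse each node's routing table to a next-hop pointer for a given destination, and require that following these pointers never revisits a node. A small illustrative check (hypothetical table layout, unrelated to the AWN model used in the entry):

```python
def is_loop_free(next_hop, dest):
    """Check that following next-hop pointers toward dest never cycles.

    next_hop: dict mapping node -> next node on the route to dest
    (a hypothetical flattened routing table, not the AWN model).
    """
    for start in next_hop:
        seen = {start}
        node = start
        while node != dest:
            node = next_hop.get(node)
            if node is None:   # route ends without reaching dest: no loop here
                break
            if node in seen:   # revisited a node: a routing loop
                return False
            seen.add(node)
    return True

# A chain a -> b -> c -> d is loop free; redirecting c back to a creates a loop.
assert is_loop_free({"a": "b", "b": "c", "c": "d"}, "d")
assert not is_loop_free({"a": "b", "b": "c", "c": "a"}, "d")
```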

This development mechanises an existing pen-and-paper proof of loop freedom of AODV. The protocol is modelled in the Algebra of Wireless Networks (AWN), which is the subject of an earlier paper and AFP mechanization. The proof relies on a novel compositional approach for lifting invariants to networks of nodes.

We exploit the mechanization to analyse several variants of AODV and show that Isabelle/HOL can re-establish most proof obligations automatically and identify exactly the steps that are no longer valid.

notify = tim@tbrk.org [Show] title = Haskell's Show Class in Isabelle/HOL author = Christian Sternagel , René Thiemann date = 2014-07-29 topic = Computer science/Functional programming license = LGPL abstract = We implemented a type class for "to-string" functions, similar to Haskell's Show class. Moreover, we provide instantiations for Isabelle/HOL's standard types like bool, prod, sum, nats, ints, and rats. It is further possible to automatically derive show functions for arbitrary user defined datatypes similar to Haskell's "deriving Show". extra-history = Change history: [2015-03-11]: Adapted development to new-style (BNF-based) datatypes.
[2015-04-10]: Moved development for old-style datatypes into subdirectory "Old_Datatype".
notify = christian.sternagel@uibk.ac.at, rene.thiemann@uibk.ac.at [Certification_Monads] title = Certification Monads author = Christian Sternagel , René Thiemann date = 2014-10-03 topic = Computer science/Functional programming abstract = This entry provides several monads intended for the development of stand-alone certifiers via code generation from Isabelle/HOL. More specifically, there are three flavors of error monads (the sum type, for the case where all monadic functions are total; an instance of the former, the so-called check monad, yielding either success without any further information or an error message; as well as a variant of the sum type that accommodates partial functions by providing an explicit bottom element) and a parser monad built on top. All of these monads are heavily used in the IsaFoR/CeTA project which thus provides many examples of their usage. notify = c.sternagel@gmail.com, rene.thiemann@uibk.ac.at [CISC-Kernel] title = Formal Specification of a Generic Separation Kernel author = Freek Verbeek , Sergey Tverdyshev , Oto Havle , Holger Blasum , Bruno Langenstein , Werner Stephan , Yakoub Nemouchi , Abderrahmane Feliachi , Burkhart Wolff , Julien Schmaltz date = 2014-07-18 topic = Computer science/Security abstract =

Intransitive noninterference has been a widely studied topic in the last few decades. Several well-established methodologies apply interactive theorem proving to formulate a noninterference theorem over abstract academic models. In joint work with several industrial and academic partners throughout Europe, we are helping in the certification process of PikeOS, an industrial separation kernel developed at SYSGO. In this process, established theories could not be applied. We present a new generic model of separation kernels and a new theory of intransitive noninterference. The model is rich in detail, making it suitable for formal verification of realistic and industrial systems such as PikeOS. Using a refinement-based theorem proving approach, we ensure that proofs remain manageable.

This document corresponds to the deliverable D31.1 of the EURO-MILS Project http://www.euromils.eu.

notify = [pGCL] title = pGCL for Isabelle author = David Cock date = 2014-07-13 topic = Computer science/Programming languages/Language definitions abstract =

pGCL is both a programming language and a specification language that incorporates probabilistic and nondeterministic choice in a unified manner. Program verification is by refinement or annotation (or both), using either Hoare triples or weakest-precondition entailment, in the style of GCL.
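The weakest-precondition style mentioned above can be pictured concretely: pGCL programs transform post-expectations (real-valued functions of the state), probabilistic choice takes a weighted average of its branches, and demonic choice takes the pointwise minimum. A minimal illustrative sketch with hypothetical helper names (the entry's shallow embedding in Isabelle/HOL is far more general):

```python
def assign(var, expr):
    """wp of var := expr(state): substitute into the post-expectation."""
    return lambda post: lambda s: post({**s, var: expr(s)})

def pchoice(p, a, b):
    """Probabilistic choice a [p] b: p-weighted average of both branches."""
    return lambda post: lambda s: p * a(post)(s) + (1 - p) * b(post)(s)

def dchoice(a, b):
    """Demonic choice: the adversary minimizes the expectation."""
    return lambda post: lambda s: min(a(post)(s), b(post)(s))

# Expected value of x after (x := 1 [0.5] x := 0) is 0.5.
prog = pchoice(0.5, assign("x", lambda s: 1), assign("x", lambda s: 0))
assert prog(lambda s: s["x"])({"x": 7}) == 0.5
# Under demonic choice the worst branch is taken: expectation 0.
worst = dchoice(assign("x", lambda s: 1), assign("x", lambda s: 0))
assert worst(lambda s: s["x"])({"x": 7}) == 0
```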

This package provides both a shallow embedding of the language primitives, and an annotation and refinement framework. The generated document includes a brief tutorial.

notify = [Noninterference_CSP] title = Noninterference Security in Communicating Sequential Processes author = Pasquale Noce date = 2014-05-23 topic = Computer science/Security abstract =

An extension of classical noninterference security for deterministic state machines, as introduced by Goguen and Meseguer and elegantly formalized by Rushby, to nondeterministic systems should satisfy two fundamental requirements: it should be based on a mathematically precise theory of nondeterminism, and should be equivalent to (or at least not weaker than) the classical notion in the degenerate deterministic case.
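The classical deterministic notion referenced here (Goguen and Meseguer, as formalized by Rushby) can be illustrated with a purge function: deleting all actions of domains that may not interfere with an observer u must leave u's observations unchanged. A toy sketch with assumed names and a two-domain machine (the entry itself works with CSP processes, not this state-machine model):

```python
def purge(actions, policy, dom, u):
    """Drop actions whose domain may not interfere with u (Rushby-style)."""
    return [a for a in actions if (dom(a), u) in policy]

def secure(run, out, policy, dom, domains, traces):
    """Noninterference: purging invisible actions never changes u's output."""
    return all(out(run(t), u) == out(run(purge(t, policy, dom, u)), u)
               for u in domains for t in traces)

dom = lambda a: "H" if a.startswith("h") else "L"
policy = {("L", "L"), ("H", "H"), ("L", "H")}   # High must not leak to Low
run = lambda t: {"low": sum(dom(a) == "L" for a in t),
                 "high": sum(dom(a) == "H" for a in t)}
traces = [["l1"], ["h1", "l1"], ["h1", "h1", "l1"]]

# Low observing only the low counter is secure ...
assert secure(run, lambda s, u: s["low"] if u == "L" else s,
              policy, dom, "LH", traces)
# ... but letting Low observe the high counter violates noninterference.
assert not secure(run, lambda s, u: (s["low"], s["high"]) if u == "L" else s,
                  policy, dom, "LH", traces)
```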

This paper proposes a definition of noninterference security applying to Hoare's Communicating Sequential Processes (CSP) in the general case of a possibly intransitive noninterference policy, and proves the equivalence of this security property to classical noninterference security for processes representing deterministic state machines.

Furthermore, McCullough's generalized noninterference security is shown to be weaker than both the proposed notion of CSP noninterference security for a generic process, and classical noninterference security for processes representing deterministic state machines. This renders CSP noninterference security preferable as an extension of classical noninterference security to nondeterministic systems.

notify = pasquale.noce.lavoro@gmail.com [Floyd_Warshall] title = The Floyd-Warshall Algorithm for Shortest Paths author = Simon Wimmer , Peter Lammich topic = Computer science/Algorithms/Graph date = 2017-05-08 notify = wimmers@in.tum.de abstract = The Floyd-Warshall algorithm [Flo62, Roy59, War62] is a classic dynamic programming algorithm to compute the length of all shortest paths between any two vertices in a graph (i.e. to solve the all-pairs shortest path problem, or APSP for short). Given a representation of the graph as a matrix of weights M, it computes another matrix M' which represents a graph with the same path lengths and contains the length of the shortest path between any two vertices i and j. This is only possible if the graph does not contain any negative cycles. However, in this case the Floyd-Warshall algorithm will detect the situation by calculating a negative diagonal entry. This entry includes a formalization of the algorithm and of these key properties. The algorithm is refined to an efficient imperative version using the Imperative Refinement Framework. [Roy_Floyd_Warshall] title = Transitive closure according to Roy-Floyd-Warshall author = Makarius Wenzel <> date = 2014-05-23 topic = Computer science/Algorithms/Graph abstract = This formulation of the Roy-Floyd-Warshall algorithm for the transitive closure bypasses matrices and arrays, but uses a more direct mathematical model with adjacency functions for immediate predecessors and successors. This can be implemented efficiently in functional programming languages and is particularly adequate for sparse relations. notify = [GPU_Kernel_PL] title = Syntax and semantics of a GPU kernel programming language author = John Wickerson date = 2014-04-03 topic = Computer science/Programming languages/Language definitions abstract = This document accompanies the article "The Design and Implementation of a Verification Technique for GPU Kernels" by Adam Betts, Nathan Chong, Alastair F. 
Donaldson, Jeroen Ketema, Shaz Qadeer, Paul Thomson and John Wickerson. It formalises all of the definitions provided in Sections 3 and 4 of the article. notify = [AWN] title = Mechanization of the Algebra for Wireless Networks (AWN) author = Timothy Bourke date = 2014-03-08 topic = Computer science/Concurrency/Process calculi abstract =

AWN is a process algebra developed for modelling and analysing protocols for Mobile Ad hoc Networks (MANETs) and Wireless Mesh Networks (WMNs). AWN models comprise five distinct layers: sequential processes, local parallel compositions, nodes, partial networks, and complete networks.

This development mechanises the original operational semantics of AWN and introduces a variant 'open' operational semantics that enables the compositional statement and proof of invariants across distinct network nodes. It supports labels (for weakening invariants) and (abstract) data state manipulations. A framework for compositional invariant proofs is developed, including a tactic (inv_cterms) for inductive invariant proofs of sequential processes, lifting rules for the open versions of the higher layers, and a rule for transferring lifted properties back to the standard semantics. A notion of 'control terms' reduces proof obligations to the subset of subterms that act directly (in contrast to operators for combining terms and joining processes).

notify = tim@tbrk.org [Selection_Heap_Sort] title = Verification of Selection and Heap Sort Using Locales author = Danijela Petrovic date = 2014-02-11 topic = Computer science/Algorithms abstract = Stepwise program refinement techniques can be used to simplify program verification. Programs are better understood since their main properties are clearly stated, and verification of rather complex algorithms is reduced to proving simple statements connecting successive program specifications. Additionally, it is easy to analyze similar algorithms and to compare their properties within a single formalization. Usually, formal analysis is not done in an educational setting due to the complexity of verification and a lack of tools and procedures to make comparison easy. Verification of an algorithm should not only give a correctness proof, but also a better understanding of an algorithm. If the verification is based on small step program refinement, it can become simple enough to be demonstrated within the university-level computer science curriculum. In this paper we demonstrate this and give a formal analysis of two well known algorithms (Selection Sort and Heap Sort) using the proof assistant Isabelle/HOL and program refinement techniques. notify = [Real_Impl] title = Implementing field extensions of the form Q[sqrt(b)] author = René Thiemann date = 2014-02-06 license = LGPL topic = Mathematics/Analysis abstract = We apply data refinement to implement the real numbers, where we support all numbers in the field extension Q[sqrt(b)], i.e., all numbers of the form p + q * sqrt(b) for rational numbers p and q and some fixed natural number b. To this end, we also developed algorithms to precisely compute roots of a rational number, and to perform a factorization of natural numbers which eliminates duplicate prime factors.
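The closure of such numbers under the field operations follows from (p1 + q1*sqrt(b)) * (p2 + q2*sqrt(b)) = (p1*p2 + q1*q2*b) + (p1*q2 + q1*p2)*sqrt(b), with division obtained by multiplying with the conjugate. A small sketch of this arithmetic (illustrative Python, not the entry's Isabelle data refinement):

```python
from fractions import Fraction as Q

class QSqrt:
    """Numbers p + q*sqrt(b) with rational p, q and a fixed natural b."""
    def __init__(self, p, q, b):
        self.p, self.q, self.b = Q(p), Q(q), b

    def __mul__(self, o):
        # (p1 + q1 sb)(p2 + q2 sb) = (p1 p2 + q1 q2 b) + (p1 q2 + q1 p2) sb
        return QSqrt(self.p * o.p + self.q * o.q * self.b,
                     self.p * o.q + self.q * o.p, self.b)

    def inverse(self):
        # Multiply by the conjugate: 1/(p + q sb) = (p - q sb)/(p^2 - q^2 b)
        d = self.p * self.p - self.q * self.q * self.b
        return QSqrt(self.p / d, -self.q / d, self.b)

# (1 + sqrt(2)) * (1 - sqrt(2)) = -1
x = QSqrt(1, 1, 2) * QSqrt(1, -1, 2)
assert (x.p, x.q) == (-1, 0)
# (1 + sqrt(2)) times its inverse is 1
y = QSqrt(1, 1, 2) * QSqrt(1, 1, 2).inverse()
assert (y.p, y.q) == (1, 0)
```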

Our results have been used to certify termination proofs which involve polynomial interpretations over the reals. extra-history = Change history: [2014-07-11]: Moved NthRoot_Impl to Sqrt-Babylonian. notify = rene.thiemann@uibk.ac.at [ShortestPath] title = An Axiomatic Characterization of the Single-Source Shortest Path Problem author = Christine Rizkallah date = 2013-05-22 topic = Mathematics/Graph theory abstract = This theory is split into two sections. In the first section, we give a formal proof that a well-known axiomatic characterization of the single-source shortest path problem is correct. Namely, we prove that in a directed graph with a non-negative cost function on the edges the single-source shortest path function is the only function that satisfies a set of four axioms. In the second section, we give a formal proof of the correctness of an axiomatic characterization of the single-source shortest path problem for directed graphs with general cost functions. The axioms here are more involved because we have to account for potential negative cycles in the graph. The axioms are summarized in three Isabelle locales. notify = [Launchbury] title = The Correctness of Launchbury's Natural Semantics for Lazy Evaluation author = Joachim Breitner date = 2013-01-31 topic = Computer science/Programming languages/Lambda calculi, Computer science/Semantics abstract = In his seminal paper "Natural Semantics for Lazy Evaluation", John Launchbury proves his semantics correct with respect to a denotational semantics, and outlines an adequacy proof. We have formalized both semantics and machine-checked the correctness proof, clarifying some details. Furthermore, we provide a new and more direct adequacy proof that does not require intermediate operational semantics. extra-history = Change history: [2014-05-24]: Added the proof of adequacy, as well as simplified and improved the existing proofs. Adjusted abstract accordingly. 
[2015-03-16]: Booleans and if-then-else added to syntax and semantics, making this entry suitable to be used by the entry "Call_Arity". notify = [Call_Arity] title = The Safety of Call Arity author = Joachim Breitner date = 2015-02-20 topic = Computer science/Programming languages/Transformations abstract = We formalize the Call Arity analysis, as implemented in GHC, and prove both functional correctness and, more interestingly, safety (i.e. the transformation does not increase allocation).

We use the syntax and the denotational semantics from the entry "Launchbury", where we formalized Launchbury's natural semantics for lazy evaluation.

The functional correctness of Call Arity is proved with regard to that denotational semantics. The operational properties are shown with regard to a small-step semantics akin to Sestoft's mark 1 machine, which we prove to be equivalent to Launchbury's semantics.

We use Christian Urban's Nominal2 package to define our terms and make use of Brian Huffman's HOLCF package for the domain-theoretical aspects of the development. extra-history = Change history: [2015-03-16]: This entry now builds on top of the Launchbury entry, and the equivalency proof of the natural and the small-step semantics was added. notify = [CCS] title = CCS in nominal logic author = Jesper Bengtson date = 2012-05-29 topic = Computer science/Concurrency/Process calculi abstract = We formalise a large portion of CCS as described in Milner's book 'Communication and Concurrency' using the nominal datatype package in Isabelle. Our results include many of the standard theorems of bisimulation equivalence and congruence, for both weak and strong versions. One main goal of this formalisation is to keep the machine-checked proofs as close to their pen-and-paper counterpart as possible.

This entry is described in detail in Bengtson's thesis. notify = [Pi_Calculus] title = The pi-calculus in nominal logic author = Jesper Bengtson date = 2012-05-29 topic = Computer science/Concurrency/Process calculi abstract = We formalise the pi-calculus using the nominal datatype package, based on ideas from the nominal logic by Pitts et al., and demonstrate an implementation in Isabelle/HOL. The purpose is to derive powerful induction rules for the semantics in order to conduct machine checkable proofs, closely following the intuitive arguments found in manual proofs. In this way we have covered many of the standard theorems of bisimulation equivalence and congruence, both late and early, and both strong and weak in a uniform manner. We thus provide one of the most extensive formalisations of the pi-calculus ever done inside a theorem prover.

A significant gain in our formulation is that agents are identified up to alpha-equivalence, thereby greatly reducing the arguments about bound names. This is a normal strategy for manual proofs about the pi-calculus, but that kind of hand waving has previously been difficult to incorporate smoothly in an interactive theorem prover. We show how the nominal logic formalism and its support in Isabelle accomplishes this and thus significantly reduces the tedium of conducting completely formal proofs. This improves on previous work using weak higher order abstract syntax since we do not need extra assumptions to filter out exotic terms and can keep all arguments within a familiar first-order logic.

This entry is described in detail in Bengtson's thesis. notify = [Psi_Calculi] title = Psi-calculi in Isabelle author = Jesper Bengtson date = 2012-05-29 topic = Computer science/Concurrency/Process calculi abstract = Psi-calculi are extensions of the pi-calculus, accommodating arbitrary nominal datatypes to represent not only data but also communication channels, assertions and conditions, giving it an expressive power beyond the applied pi-calculus and the concurrent constraint pi-calculus.

We have formalised psi-calculi in the interactive theorem prover Isabelle using its nominal datatype package. One distinctive feature is that the framework needs to treat binding sequences, as opposed to single binders, in an efficient way. While different methods for formalising single binder calculi have been proposed over the last decades, representations for such binding sequences are not very well explored.

The main effort in the formalisation is to keep the machine checked proofs as close to their pen-and-paper counterparts as possible. This includes treating all binding sequences as atomic elements, and creating custom induction and inversion rules that remove the bulk of manual alpha-conversions.

This entry is described in detail in Bengtson's thesis. notify = [Encodability_Process_Calculi] title = Analysing and Comparing Encodability Criteria for Process Calculi author = Kirstin Peters , Rob van Glabbeek date = 2015-08-10 topic = Computer science/Concurrency/Process calculi abstract = Encodings or the proof of their absence are the main way to compare process calculi. To analyse the quality of encodings and to rule out trivial or meaningless encodings, they are augmented with quality criteria. There exist many different criteria, and different variants of criteria, for reasoning in different settings. This leads to incomparable results. Moreover, it is not always clear whether the criteria used to obtain a result in a particular setting do indeed fit this setting. We show how to formally reason about and compare encodability criteria by mapping them onto requirements on a relation between source and target terms that is induced by the encoding function. In particular, we analyse the common criteria full abstraction, operational correspondence, divergence reflection, success sensitiveness, and respect of barbs; e.g. we analyse the exact nature of the simulation relation (coupled simulation versus bisimulation) that is induced by different variants of operational correspondence. This way we reduce the problem of analysing or comparing encodability criteria to the better understood problem of comparing relations on processes. notify = kirstin.peters@tu-berlin.de [Circus] title = Isabelle/Circus author = Abderrahmane Feliachi , Burkhart Wolff , Marie-Claude Gaudel contributors = Makarius Wenzel date = 2012-05-27 topic = Computer science/Concurrency/Process calculi, Computer science/System description languages abstract = The Circus specification language combines elements for complex data and behavior specifications, using an integration of Z and CSP with a refinement calculus. 
Its semantics is based on Hoare and He's Unifying Theories of Programming (UTP). Isabelle/Circus is a formalization of the UTP and the Circus language in Isabelle/HOL. It contains proof rules and tactic support that allows for proofs of refinement for Circus processes (involving both data and behavioral aspects).

The Isabelle/Circus environment supports a syntax for the semantic definitions which is close to textbook presentations of Circus. This article contains an extended version of the corresponding VSTTE paper together with the complete formal development of its underlying commented theories. extra-history = Change history: [2014-06-05]: More polishing, shorter proofs, added Circus syntax, added Makarius Wenzel as contributor. notify = [Dijkstra_Shortest_Path] title = Dijkstra's Shortest Path Algorithm author = Benedikt Nordhoff , Peter Lammich topic = Computer science/Algorithms/Graph date = 2012-01-30 abstract = We implement and prove correct Dijkstra's algorithm for the single source shortest path problem, conceived in 1956 by E. Dijkstra. The algorithm is implemented using the data refinement framework for monadic, nondeterministic programs. An efficient implementation is derived using data structures from the Isabelle Collection Framework. notify = lammich@in.tum.de [Refine_Monadic] title = Refinement for Monadic Programs author = Peter Lammich topic = Computer science/Programming languages/Logics date = 2012-01-30 abstract = We provide a framework for program and data refinement in Isabelle/HOL. The framework is based on a nondeterminism-monad with assertions, i.e., the monad carries a set of results or an assertion failure. Recursion is expressed by fixed points. For convenience, we also provide while and foreach combinators.
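The nondeterminism-monad with assertions can be pictured as follows: a program denotes either an assertion failure or a set of possible results, and bind applies the rest of the program to every result, failing as soon as any branch fails. A toy model with assumed names (illustrative Python; the actual framework defines this in Isabelle/HOL, with recursion via fixed points):

```python
FAIL = None  # assertion failure: the top element, absorbing under bind

def ret(x):
    """The single-result program."""
    return {x}

def bind(m, f):
    """Apply f to every possible result; any failing branch fails the whole."""
    if m is FAIL:
        return FAIL
    out = set()
    for x in m:
        r = f(x)
        if r is FAIL:
            return FAIL
        out |= r
    return out

def assert_(cond):
    """Succeed with a unit result, or fail the whole program."""
    return {()} if cond else FAIL

# Nondeterministically pick 1 or 2, then double it: results {2, 4}.
assert bind({1, 2}, lambda x: ret(2 * x)) == {2, 4}
# An assertion failing on one branch fails the whole program.
assert bind({1, 2}, lambda x: bind(assert_(x != 2), lambda _: ret(x))) is FAIL
```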

The framework provides tools to automate canonical tasks, such as verification condition generation, finding appropriate data refinement relations, and refining an executable program to a form that is accepted by the Isabelle/HOL code generator.

This submission comes with a collection of examples and a user-guide, illustrating the usage of the framework. extra-history = Change history: [2012-04-23] Introduced ordered FOREACH loops
[2012-06] New features: REC_rule_arb and RECT_rule_arb allow for generalizing over variables. prepare_code_thms - command extracts code equations for recursion combinators.
[2012-07] New example: Nested DFS for emptiness check of Büchi-automata with witness.
New feature: fo_rule method to apply resolution using first-order matching. Useful for arg_cong, fun_cong.
[2012-08] Adaptation to ICF v2.
[2012-10-05] Adaptations to include support for Automatic Refinement Framework.
[2013-09] This entry now depends on Automatic Refinement
[2014-06] New feature: vc_solve method to solve verification conditions. Maintenance changes: VCG-rules for nfoldli, improved setup for FOREACH-loops.
[2014-07] Now defining recursion via flat domain. Dropped many single-valued prerequisites. Changed notion of data refinement. In single-valued case, this matches the old notion. In non-single valued case, the new notion allows for more convenient rules. In particular, the new definitions allow for projecting away ghost variables as a refinement step.
[2014-11] New features: le-or-fail relation (leof), modular reasoning about loop invariants. notify = lammich@in.tum.de [Refine_Imperative_HOL] title = The Imperative Refinement Framework author = Peter Lammich notify = lammich@in.tum.de date = 2016-08-08 topic = Computer science/Programming languages/Transformations,Computer science/Data structures abstract = We present the Imperative Refinement Framework (IRF), a tool that supports a stepwise refinement based approach to imperative programs. This entry is based on the material we presented in [ITP-2015, CPP-2016]. It uses the Monadic Refinement Framework as a frontend for the specification of the abstract programs, and Imperative/HOL as a backend to generate executable imperative programs. The IRF comes with tool support to synthesize imperative programs from more abstract, functional ones, using efficient imperative implementations for the abstract data structures. This entry also includes the Imperative Isabelle Collection Framework (IICF), which provides a library of re-usable imperative collection data structures. Moreover, this entry contains a quickstart guide and a reference manual, which provide an introduction to using the IRF for Isabelle/HOL experts. It also provides a collection of (partly commented) practical examples, some highlights being Dijkstra's Algorithm, Nested-DFS, and a generic worklist algorithm with subsumption. Finally, this entry contains benchmark scripts that compare the runtime of some examples against reference implementations of the algorithms in Java and C++. [ITP-2015] Peter Lammich: Refinement to Imperative/HOL. ITP 2015: 253--269 [CPP-2016] Peter Lammich: Refinement based verification of imperative data structures. 
CPP 2016: 27--36 [Automatic_Refinement] title = Automatic Data Refinement author = Peter Lammich topic = Computer science/Programming languages/Logics date = 2013-10-02 abstract = We present the Autoref tool for Isabelle/HOL, which automatically refines algorithms specified over abstract concepts like maps and sets to algorithms over concrete implementations like red-black-trees, and produces a refinement theorem. It is based on ideas borrowed from relational parametricity due to Reynolds and Wadler. The tool allows for rapid prototyping of verified, executable algorithms. Moreover, it can be configured to fine-tune the result to the user's needs. Our tool is able to automatically instantiate generic algorithms, which greatly simplifies the implementation of executable data structures.

This AFP-entry provides the basic tool, which is then used by the Refinement and Collection Framework to provide automatic data refinement for the nondeterminism monad and various collection data structures. notify = lammich@in.tum.de [EdmondsKarp_Maxflow] title = Formalizing the Edmonds-Karp Algorithm author = Peter Lammich , S. Reza Sefidgar<> notify = lammich@in.tum.de date = 2016-08-12 topic = Computer science/Algorithms/Graph abstract = We present a formalization of the Ford-Fulkerson method for computing the maximum flow in a network. Our formal proof closely follows a standard textbook proof, and is accessible even without being an expert in Isabelle/HOL--- the interactive theorem prover used for the formalization. We then use stepwise refinement to obtain the Edmonds-Karp algorithm, and formally prove a bound on its complexity. Further refinement yields a verified implementation, whose execution time compares well to an unverified reference implementation in Java. This entry is based on our ITP-2016 paper with the same title. [VerifyThis2018] title = VerifyThis 2018 - Polished Isabelle Solutions author = Peter Lammich , Simon Wimmer topic = Computer science/Algorithms date = 2018-04-27 notify = lammich@in.tum.de abstract = VerifyThis 2018 was a program verification competition associated with ETAPS 2018. It was the 7th event in the VerifyThis competition series. In this entry, we present polished and completed versions of our solutions that we created during the competition. [PseudoHoops] title = Pseudo Hoops author = George Georgescu <>, Laurentiu Leustean <>, Viorel Preoteasa topic = Mathematics/Algebra date = 2011-09-22 abstract = Pseudo-hoops are algebraic structures introduced by B. Bosbach under the name of complementary semigroups. In this formalization we prove some properties of pseudo-hoops and we define the basic concepts of filter and normal filter. The lattice of normal filters is isomorphic with the lattice of congruences of a pseudo-hoop. 
We also study some important classes of pseudo-hoops. Bounded Wajsberg pseudo-hoops are equivalent to pseudo-Wajsberg algebras and bounded basic pseudo-hoops are equivalent to pseudo-BL algebras. Some examples of pseudo-hoops are given in the last section of the formalization. notify = viorel.preoteasa@aalto.fi [MonoBoolTranAlgebra] title = Algebra of Monotonic Boolean Transformers author = Viorel Preoteasa topic = Computer science/Programming languages/Logics date = 2011-09-22 abstract = Algebras of imperative programming languages have been successful in reasoning about programs. In general an algebra of programs is an algebraic structure with programs as elements and with program compositions (sequential composition, choice, skip) as algebra operations. Various versions of these algebras were introduced to model partial correctness, total correctness, refinement, demonic choice, and other aspects. We formalize here an algebra which can be used to model total correctness, refinement, demonic and angelic choice. The basic model of this algebra are monotonic Boolean transformers (monotonic functions from a Boolean algebra to itself). notify = viorel.preoteasa@aalto.fi [LatticeProperties] title = Lattice Properties author = Viorel Preoteasa topic = Mathematics/Order date = 2011-09-22 abstract = This formalization introduces and collects some algebraic structures based on lattices and complete lattices for use in other developments. The structures introduced are modular lattices and lattice ordered groups. In addition to the results proved for the new lattices, this formalization also introduces theorems about lattices and complete lattices in general. extra-history = Change history: [2012-01-05]: Removed the theory about distributive complete lattices which is in the standard library now. Added a theory about well founded and transitive relations and a result about fixpoints in complete lattices and well founded relations. 
Moved the results about conjunctive and disjunctive functions to a new theory. Removed the syntactic classes for inf and sup which are in the standard library now. notify = viorel.preoteasa@aalto.fi [Impossible_Geometry] title = Proving the Impossibility of Trisecting an Angle and Doubling the Cube author = Ralph Romanos , Lawrence C. Paulson topic = Mathematics/Algebra, Mathematics/Geometry date = 2012-08-05 abstract = Squaring the circle, doubling the cube and trisecting an angle, using a compass and straightedge alone, are classic unsolved problems first posed by the ancient Greeks. All three problems were proved to be impossible in the 19th century. The following document presents the proof of the impossibility of solving the latter two problems using Isabelle/HOL, following a proof by Carrega. The proof uses elementary methods: no Galois theory or field extensions. The set of points constructible using a compass and straightedge is defined inductively. Radical expressions, which involve only square roots and arithmetic of rational numbers, are defined, and we find that all constructible points have radical coordinates. Finally, doubling the cube and trisecting certain angles requires solving certain cubic equations that can be proved to have no rational roots. The Isabelle proofs require a great many detailed calculations. notify = ralph.romanos@student.ecp.fr, lp15@cam.ac.uk [IP_Addresses] title = IP Addresses author = Cornelius Diekmann , Julius Michaelis , Lars Hupel notify = diekmann@net.in.tum.de date = 2016-06-28 topic = Computer science/Networks abstract = This entry contains a definition of IP addresses and a library to work with them. Generic IP addresses are modeled as machine words of arbitrary length. Derived from this generic definition, IPv4 addresses are 32bit machine words, IPv6 addresses are 128bit words. Additionally, IPv4 addresses can be represented in dot-decimal notation and IPv6 addresses in (compressed) colon-separated notation. 
We support toString functions and parsers for both notations. Sets of IP addresses can be represented with a netmask (e.g. 192.168.0.0/255.255.0.0) or in CIDR notation (e.g. 192.168.0.0/16). To provide executable code for set operations on IP address ranges, the library includes a datatype to work on arbitrary intervals of machine words. [Simple_Firewall] title = Simple Firewall author = Cornelius Diekmann , Julius Michaelis , Maximilian Haslbeck notify = diekmann@net.in.tum.de, max.haslbeck@gmx.de date = 2016-08-24 topic = Computer science/Networks abstract = We present a simple model of a firewall. The firewall can accept or drop a packet and can match on interfaces, IP addresses, protocol, and ports. It was designed to feature nice mathematical properties: The type of match expressions was carefully crafted such that the conjunction of two match expressions is only one match expression. This model is too simplistic to mirror all aspects of the real world. In the upcoming entry "Iptables Semantics", we will translate the Linux firewall iptables to this model. For a fixed service (e.g. ssh, http), we provide an algorithm to compute an overview of the firewall's filtering behavior. The algorithm computes minimal service matrices, i.e. graphs which partition the complete IPv4 and IPv6 address space and visualize the allowed accesses between partitions. For a detailed description, see Verified iptables Firewall Analysis, IFIP Networking 2016. [Iptables_Semantics] title = Iptables Semantics author = Cornelius Diekmann , Lars Hupel notify = diekmann@net.in.tum.de, hupel@in.tum.de date = 2016-09-09 topic = Computer science/Networks abstract = We present a big step semantics of the filtering behavior of the Linux/netfilter iptables firewall. We provide algorithms to simplify complex iptables rulesets to a simple firewall model (c.f. AFP entry Simple_Firewall) and to verify spoofing protection of a ruleset. 
Internally, we embed our semantics into ternary logic, ultimately supporting every iptables match condition by abstracting over unknowns. Using this AFP entry and all entries it depends on, we created an easy-to-use, stand-alone Haskell tool called fffuu. The tool does not require any input except the iptables-save dump of the analyzed firewall, and presents interesting results about the user's ruleset. With the help of our tool, real-world firewall errors have been uncovered and the correctness of rulesets has been proved. [Routing] title = Routing author = Julius Michaelis , Cornelius Diekmann notify = afp@liftm.de date = 2016-08-31 topic = Computer science/Networks abstract = This entry contains definitions for routing with routing tables/longest prefix matching. A routing table entry is modelled as a record of a prefix match, a metric, an output port, and an optional next hop. A routing table is a list of entries, sorted by prefix length and metric. Additionally, a parser and serializer for the output of the ip-route command, a function to create a relation from output port to corresponding destination IP space, and a model of a Linux-style router are included. [KBPs] title = Knowledge-based programs author = Peter Gammie topic = Computer science/Automata and formal languages date = 2011-05-17 abstract = Knowledge-based programs (KBPs) are a formalism for directly relating agents' knowledge and behaviour. Here we present a general scheme for compiling KBPs to executable automata with a proof of correctness in Isabelle/HOL. We develop the algorithm top-down, using Isabelle's locale mechanism to structure these proofs, and show that two classic examples can be synthesised using Isabelle's code generator. extra-history = Change history: [2012-03-06]: Add some more views and revive the code generation. notify = kleing@cse.unsw.edu.au [Tarskis_Geometry] title = The independence of Tarski's Euclidean axiom author = T. J. M. 
Makarios topic = Mathematics/Geometry date = 2012-10-30 abstract = Tarski's axioms of plane geometry are formalized and, using the standard real Cartesian model, shown to be consistent. A substantial theory of the projective plane is developed. Building on this theory, the Klein-Beltrami model of the hyperbolic plane is defined and shown to satisfy all of Tarski's axioms except his Euclidean axiom; thus Tarski's Euclidean axiom is shown to be independent of his other axioms of plane geometry.

An earlier version of this work was the subject of the author's MSc thesis, which contains natural-language explanations of some of the more interesting proofs. notify = tjm1983@gmail.com [IsaGeoCoq] title = Tarski's Parallel Postulate implies the 5th Postulate of Euclid, the Postulate of Playfair and the original Parallel Postulate of Euclid author = Roland Coghetto topic = Mathematics/Geometry license = LGPL date = 2021-01-31 notify = roland_coghetto@hotmail.com abstract =

The GeoCoq library contains a formalization of geometry using the Coq proof assistant. It contains both proofs about the foundations of geometry and high-level proofs in the same style as in high school. We port a part of the GeoCoq 2.4.0 library to Isabelle/HOL: more precisely, the files Chap02.v to Chap13_3.v, suma.v as well as the associated definitions and some useful files for the demonstration of certain parallel postulates. The synthetic approach of the demonstrations is directly inspired by those contained in GeoCoq. The names of the lemmas and theorems, as well as the definitions, are kept as close to the GeoCoq originals as possible.

It should be noted that T.J.M. Makarios has formalized some proofs in Tarski's geometry, but his work uses a definition that does not quite coincide with the one used in GeoCoq and here. Furthermore, the corresponding definitions in the Poincaré Disc Model development are not identical to those of GeoCoq.

In the last part, it is formalized that, in the neutral/absolute space, the axiom of the parallels of Tarski's system implies the Playfair axiom, the 5th postulate of Euclid and Euclid's original parallel postulate. These proofs, which are not constructive, are directly inspired by the work of Pierre Boutry, Charly Gries, Julien Narboux and Pascal Schreck.

[General-Triangle] title = The General Triangle Is Unique author = Joachim Breitner topic = Mathematics/Geometry date = 2011-04-01 abstract = Some acute-angled triangles are special, e.g. right-angled or isosceles triangles. Some are not of this kind, but, without measuring angles, look as if they were. In that sense, there is exactly one general triangle. This well-known fact is proven here formally. notify = mail@joachim-breitner.de [LightweightJava] title = Lightweight Java author = Rok Strniša , Matthew Parkinson topic = Computer science/Programming languages/Language definitions date = 2011-02-07 abstract = A fully-formalized and extensible minimal imperative fragment of Java. notify = rok@strnisa.com [Lower_Semicontinuous] title = Lower Semicontinuous Functions author = Bogdan Grechuk topic = Mathematics/Analysis date = 2011-01-08 abstract = We define the notions of lower and upper semicontinuity for functions from a metric space to the extended real line. We prove that a function is both lower and upper semicontinuous if and only if it is continuous. We also give several equivalent characterizations of lower semicontinuity. In particular, we prove that a function is lower semicontinuous if and only if its epigraph is a closed set. Also, we introduce the notion of the lower semicontinuous hull of an arbitrary function and prove its basic properties. notify = hoelzl@in.tum.de [RIPEMD-160-SPARK] title = RIPEMD-160 author = Fabian Immler topic = Computer science/Programming languages/Static analysis date = 2011-01-10 abstract = This work presents a verification of an implementation in SPARK/ADA of the cryptographic hash-function RIPEMD-160. A functional specification of RIPEMD-160 is given in Isabelle/HOL. Proofs for the verification conditions generated by the static-analysis toolset of SPARK certify the functional correctness of the implementation. extra-history = Change history: [2015-11-09]: Entry is now obsolete, moved to Isabelle distribution. 
notify = immler@in.tum.de [Regular-Sets] title = Regular Sets and Expressions author = Alexander Krauss , Tobias Nipkow contributors = Manuel Eberl topic = Computer science/Automata and formal languages date = 2010-05-12 abstract = This is a library of constructions on regular expressions and languages. It provides the operations of concatenation, Kleene star and derivative on languages. Regular expressions and their meaning are defined. An executable equivalence checker for regular expressions is verified; it does not need automata but works directly on regular expressions. By mapping regular expressions to binary relations, an automatic and complete proof method for (in)equalities of binary relations over union, concatenation and (reflexive) transitive closure is obtained.
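As a rough illustration of the derivative operation underlying the equivalence checker described above, here is a minimal Python sketch (a hypothetical tuple encoding of regular expressions, entirely separate from the Isabelle formalization):

```python
# Brzozowski derivatives of regular expressions (illustrative sketch).
# A regex is a nested tuple:
#   ("empty",), ("eps",), ("sym", c), ("alt", r, s), ("seq", r, s), ("star", r)

def nullable(r):
    """Does the language of r contain the empty word?"""
    tag = r[0]
    if tag == "eps" or tag == "star":
        return True
    if tag in ("empty", "sym"):
        return False
    if tag == "alt":
        return nullable(r[1]) or nullable(r[2])
    return nullable(r[1]) and nullable(r[2])   # seq

def deriv(r, c):
    """Derivative: L(deriv(r, c)) = { w | c.w in L(r) }."""
    tag = r[0]
    if tag in ("eps", "empty"):
        return ("empty",)
    if tag == "sym":
        return ("eps",) if r[1] == c else ("empty",)
    if tag == "alt":
        return ("alt", deriv(r[1], c), deriv(r[2], c))
    if tag == "seq":
        d = ("seq", deriv(r[1], c), r[2])
        # if the first factor is nullable, c may also start the second factor
        return ("alt", d, deriv(r[2], c)) if nullable(r[1]) else d
    return ("seq", deriv(r[1], c), r)          # star

def matches(r, word):
    """Membership test by repeated derivation."""
    for c in word:
        r = deriv(r, c)
    return nullable(r)

# (ab)* matches "", "ab", "abab", but not "aba"
ab_star = ("star", ("seq", ("sym", "a"), ("sym", "b")))
```

Equivalence checking then amounts to exploring pairs of expressions under mutual derivation, which is the idea the verified checker works out on regular expressions directly, without automata.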

Extended regular expressions with complement and intersection are also defined and an equivalence checker is provided. extra-history = Change history: [2011-08-26]: Christian Urban added a theory about derivatives and partial derivatives of regular expressions
[2012-05-10]: Tobias Nipkow added extended regular expressions
[2012-05-10]: Tobias Nipkow added equivalence checking with partial derivatives notify = nipkow@in.tum.de, krauss@in.tum.de, christian.urban@kcl.ac.uk [Regex_Equivalence] title = Unified Decision Procedures for Regular Expression Equivalence author = Tobias Nipkow , Dmitriy Traytel topic = Computer science/Automata and formal languages date = 2014-01-30 abstract = We formalize a unified framework for verified decision procedures for regular expression equivalence. Five recently published formalizations of such decision procedures (three based on derivatives, two on marked regular expressions) can be obtained as instances of the framework. We discover that the two approaches based on marked regular expressions, which were previously thought to be the same, are different, and one seems to produce uniformly smaller automata. The common framework makes it possible to compare the performance of the different decision procedures in a meaningful way. The formalization is described in a paper of the same name presented at Interactive Theorem Proving 2014. notify = nipkow@in.tum.de, traytel@in.tum.de [MSO_Regex_Equivalence] title = Decision Procedures for MSO on Words Based on Derivatives of Regular Expressions author = Dmitriy Traytel , Tobias Nipkow topic = Computer science/Automata and formal languages, Logic/General logic/Decidability of theories date = 2014-06-12 abstract = Monadic second-order logic on finite words (MSO) is a decidable yet expressive logic into which many decision problems can be encoded. Since MSO formulas correspond to regular languages, equivalence of MSO formulas can be reduced to the equivalence of some regular structures (e.g. automata). We verify an executable decision procedure for MSO formulas that is not based on automata but on regular expressions.

Decision procedures for regular expression equivalence have been formalized before, usually based on Brzozowski derivatives. Yet, for a straightforward embedding of MSO formulas into regular expressions an extension of regular expressions with a projection operation is required. We prove total correctness and completeness of an equivalence checker for regular expressions extended in that way. We also define a language-preserving translation of formulas into regular expressions with respect to two different semantics of MSO.

The formalization is described in this ICFP 2013 functional pearl. notify = traytel@in.tum.de, nipkow@in.tum.de [Formula_Derivatives] title = Derivatives of Logical Formulas author = Dmitriy Traytel topic = Computer science/Automata and formal languages, Logic/General logic/Decidability of theories date = 2015-05-28 abstract = We formalize new decision procedures for WS1S, M2L(Str), and Presburger Arithmetics. Formulas of these logics denote regular languages. Unlike traditional decision procedures, we do not translate formulas into automata (nor into regular expressions), at least not explicitly. Instead we devise notions of derivatives (inspired by Brzozowski derivatives for regular expressions) that operate on formulas directly and compute a syntactic bisimulation using these derivatives. The treatment of Boolean connectives and quantifiers is uniform for all mentioned logics and is abstracted into a locale. This locale is then instantiated by different atomic formulas and their derivatives (which may differ even for the same logic under different encodings of interpretations as formal words).

The WS1S instance is described in the draft paper A Coalgebraic Decision Procedure for WS1S by the author. notify = traytel@in.tum.de [Myhill-Nerode] title = The Myhill-Nerode Theorem Based on Regular Expressions author = Chunhan Wu <>, Xingyuan Zhang <>, Christian Urban contributors = Manuel Eberl topic = Computer science/Automata and formal languages date = 2011-08-26 abstract = There are many proofs of the Myhill-Nerode theorem using automata. In this library we give a proof entirely based on regular expressions, since regularity of languages can be conveniently defined using regular expressions (it is more painful in HOL to define regularity in terms of automata). We prove the first direction of the Myhill-Nerode theorem by solving equational systems that involve regular expressions. For the second direction we give two proofs: one using tagging-functions and another using partial derivatives. We also establish various closure properties of regular languages. Most details of the theories are described in our ITP 2011 paper. notify = christian.urban@kcl.ac.uk [Universal_Turing_Machine] title = Universal Turing Machine author = Jian Xu<>, Xingyuan Zhang<>, Christian Urban , Sebastiaan J. C. Joosten topic = Logic/Computability, Computer science/Automata and formal languages date = 2019-02-08 notify = sjcjoosten@gmail.com, christian.urban@kcl.ac.uk abstract = We formalise results from computability theory: recursive functions, undecidability of the halting problem, and the existence of a universal Turing machine. This formalisation is the AFP entry corresponding to the paper Mechanising Turing Machines and Computability Theory in Isabelle/HOL, ITP 2013. 
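As a toy illustration of the Turing-machine notion treated in the entry above, a minimal interpreter can be sketched in Python (a hypothetical encoding, entirely separate from the Isabelle development):

```python
# Minimal Turing-machine interpreter (illustrative sketch).
# delta maps (state, read symbol) -> (written symbol, head move, next state);
# a missing entry means the machine halts. The tape is a sparse dict with
# blank cells read as 0.

def run_tm(delta, state, tape, steps=1000):
    """Execute at most `steps` transitions; return final (state, tape)."""
    tape = dict(tape)
    pos = 0
    for _ in range(steps):
        key = (state, tape.get(pos, 0))
        if key not in delta:
            break                      # no transition: halt
        write, move, state = delta[key]
        tape[pos] = write
        pos += move
    return state, tape

# A one-state machine that writes 1s while moving right over blanks
# and halts as soon as it reads a 1.
delta = {("go", 0): (1, +1, "go")}
state, tape = run_tm(delta, "go", {3: 1})
```

A universal machine, as formalised in the entry, is then a fixed `delta` that simulates any other machine from an encoding of that machine on its tape.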
[CYK] title = A formalisation of the Cocke-Younger-Kasami algorithm author = Maksym Bortin date = 2016-04-27 topic = Computer science/Algorithms, Computer science/Automata and formal languages abstract = The theory provides a formalisation of the Cocke-Younger-Kasami algorithm (CYK for short), an approach to solving the word problem for context-free languages. CYK decides if a word is in the language generated by a context-free grammar in Chomsky normal form. The formalized algorithm is executable. notify = maksym.bortin@nicta.com.au [Boolean_Expression_Checkers] title = Boolean Expression Checkers author = Tobias Nipkow date = 2014-06-08 topic = Computer science/Algorithms, Logic/General logic/Mechanization of proofs abstract = This entry provides executable checkers for the following properties of boolean expressions: satisfiability, tautology and equivalence. Internally, the checkers operate on binary decision trees and are reasonably efficient (for purely functional algorithms). extra-history = Change history: [2015-09-23]: Salomon Sickert added an interface that does not require the usage of the Boolean formula datatype. Furthermore, the general Mapping type is used instead of an association list. notify = nipkow@in.tum.de [Presburger-Automata] title = Formalizing the Logic-Automaton Connection author = Stefan Berghofer , Markus Reiter <> date = 2009-12-03 topic = Computer science/Automata and formal languages, Logic/General logic/Decidability of theories abstract = This work presents a formalization of a library for automata on bit strings. It forms the basis of a reflection-based decision procedure for Presburger arithmetic, which is efficiently executable thanks to Isabelle's code generator. With this work, we therefore provide a mechanized proof of a well-known connection between logic and automata theory. The formalization is also described in a publication [TPHOLs 2009]. 
notify = berghofe@in.tum.de [Functional-Automata] title = Functional Automata author = Tobias Nipkow date = 2004-03-30 topic = Computer science/Automata and formal languages abstract = This theory defines deterministic and nondeterministic automata in a functional representation: the transition function/relation and the finality predicate are just functions. Hence the state space may be infinite. It is shown how to convert regular expressions into such automata. A scanner (generator) is implemented with the help of functional automata: the scanner chops the input up into longest recognized substrings. Finally we also show how to convert a certain subclass of functional automata (essentially the finite deterministic ones) into regular sets. notify = nipkow@in.tum.de [Statecharts] title = Formalizing Statecharts using Hierarchical Automata author = Steffen Helke , Florian Kammüller topic = Computer science/Automata and formal languages date = 2010-08-08 abstract = We formalize in Isabelle/HOL the abstract syntax and a synchronous step semantics for the specification language Statecharts. The formalization is based on Hierarchical Automata which allow a structural decomposition of Statecharts into Sequential Automata. To support the composition of Statecharts, we introduce calculating operators to construct a Hierarchical Automaton in a stepwise manner. Furthermore, we present a complete semantics of Statecharts including a theory of data spaces, which enables the modelling of racing effects. We also adapt CTL for Statecharts to build a bridge for future combinations with model checking. However, the main motivation of this work is to provide a sound and complete basis for reasoning on Statecharts. As a central meta theorem we prove that the well-formedness of a Statechart is preserved by the semantics. 
notify = nipkow@in.tum.de [Stuttering_Equivalence] title = Stuttering Equivalence author = Stephan Merz topic = Computer science/Automata and formal languages date = 2012-05-07 abstract =

Two omega-sequences are stuttering equivalent if they differ only by finite repetitions of elements. Stuttering equivalence is a fundamental concept in the theory of concurrent and distributed systems. Notably, Lamport argues that refinement notions for such systems should be insensitive to finite stuttering. Peled and Wilke showed that all PLTL (propositional linear-time temporal logic) properties that are insensitive to stuttering equivalence can be expressed without the next-time operator. Stuttering equivalence is also important for certain verification techniques such as partial-order reduction for model checking.

We formalize stuttering equivalence in Isabelle/HOL. Our development relies on the notion of stuttering sampling functions that may skip blocks of identical sequence elements. We also encode PLTL and prove the theorem due to Peled and Wilke.
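On finite words, the idea of sampling away repetitions can be illustrated by a "destuttering" function: two sequences that differ only by finite repetitions have the same destuttered form. This is only an illustrative Python sketch; the formal development works with omega-sequences and stuttering sampling functions.

```python
# Stuttering equivalence on finite sequences (illustrative sketch):
# collapse each maximal run of identical adjacent elements to a single one.
from itertools import groupby

def destutter(seq):
    """Keep one representative per maximal run of equal adjacent elements."""
    return [key for key, _ in groupby(seq)]

def stutter_equiv(xs, ys):
    """Finite approximation of stuttering equivalence."""
    return destutter(xs) == destutter(ys)

# [1,1,2,2,2,3] and [1,2,3,3] differ only by finite repetitions
```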

extra-history = Change history: [2013-01-31]: Added encoding of PLTL and proved Peled and Wilke's theorem. Adjusted abstract accordingly. notify = Stephan.Merz@loria.fr [Coinductive_Languages] title = A Codatatype of Formal Languages author = Dmitriy Traytel topic = Computer science/Automata and formal languages date = 2013-11-15 abstract =

We define formal languages as a codatatype of infinite trees branching over the alphabet. Each node in such a tree indicates whether the path to this node constitutes a word inside or outside of the language. This codatatype is isomorphic to the set of lists representation of languages, but caters for definitions by corecursion and proofs by coinduction.

Regular operations on languages are then defined by primitive corecursion. A difficulty arises here, since the standard definitions of concatenation and iteration from the coalgebraic literature are not primitively corecursive: they require guardedness up-to union/concatenation. Without support for up-to corecursion, these operations must be defined as compositions of primitive ones (and proved equal to the standard definitions). As an exercise in coinduction we also prove the axioms of Kleene algebra for the defined regular operations.

Furthermore, a language for context-free grammars given by productions in Greibach normal form and an initial nonterminal is constructed by primitive corecursion, yielding an executable decision procedure for the word problem without further ado.

notify = traytel@in.tum.de [Tree-Automata] title = Tree Automata author = Peter Lammich date = 2009-11-25 topic = Computer science/Automata and formal languages abstract = This work presents a machine-checked tree automata library for Standard-ML, OCaml and Haskell. The algorithms are efficient by using appropriate data structures like RB-trees. The available algorithms for non-deterministic automata include membership query, reduction, intersection, union, and emptiness check with computation of a witness for non-emptiness. The executable algorithms are derived from less-concrete, non-executable algorithms using data-refinement techniques. The concrete data structures are from the Isabelle Collections Framework. Moreover, this work contains a formalization of the class of tree-regular languages and its closure properties under set operations. notify = peter.lammich@uni-muenster.de, nipkow@in.tum.de [Depth-First-Search] title = Depth First Search author = Toshiaki Nishihara <>, Yasuhiko Minamide <> date = 2004-06-24 topic = Computer science/Algorithms/Graph abstract = Depth-first search of a graph is formalized with recdef. It is shown that it visits all of the reachable nodes from a given list of nodes. Executable ML code of depth-first search is obtained using the code generation feature of Isabelle/HOL. notify = lp15@cam.ac.uk, krauss@in.tum.de [FFT] title = Fast Fourier Transform author = Clemens Ballarin date = 2005-10-12 topic = Computer science/Algorithms/Mathematical abstract = We formalise a functional implementation of the FFT algorithm over the complex numbers, and its inverse. Both are shown equivalent to the usual definitions of these operations through Vandermonde matrices. They are also shown to be inverse to each other, more precisely, that composition of the inverse and the transformation yield the identity up to a scalar. 
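The recursive structure of the radix-2 algorithm can be sketched as follows (an illustrative Python version with the usual even/odd split, not the formalised functional implementation; input length must be a power of two):

```python
# Recursive radix-2 FFT over the complex numbers, with its inverse
# obtained by the conjugation trick plus 1/n scaling (illustrative sketch).
import cmath

def fft(xs):
    n = len(xs)
    if n == 1:
        return list(xs)
    evens = fft(xs[0::2])                  # transform of even-indexed part
    odds = fft(xs[1::2])                   # transform of odd-indexed part
    out = [0j] * n
    for k in range(n // 2):
        w = cmath.exp(-2j * cmath.pi * k / n) * odds[k]   # twiddle factor
        out[k] = evens[k] + w
        out[k + n // 2] = evens[k] - w
    return out

def ifft(xs):
    """Inverse transform: conjugate, forward FFT, conjugate, scale by 1/n."""
    n = len(xs)
    conj = fft([complex(x).conjugate() for x in xs])
    return [x.conjugate() / n for x in conj]
```

Composing `ifft` after `fft` returns the input, which is the "identity up to a scalar" statement of the entry once the 1/n scaling is included.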
notify = ballarin@in.tum.de [Gauss-Jordan-Elim-Fun] title = Gauss-Jordan Elimination for Matrices Represented as Functions author = Tobias Nipkow date = 2011-08-19 topic = Computer science/Algorithms/Mathematical, Mathematics/Algebra abstract = This theory provides a compact formulation of Gauss-Jordan elimination for matrices represented as functions. Its distinctive feature is succinctness. It is not meant for large computations. notify = nipkow@in.tum.de [UpDown_Scheme] title = Verification of the UpDown Scheme author = Johannes Hölzl date = 2015-01-28 topic = Computer science/Algorithms/Mathematical abstract = The UpDown scheme is a recursive scheme used to compute the stiffness matrix on a special form of sparse grids. Usually, when discretizing a Euclidean space of dimension d we need O(n^d) points, for n points along each dimension. Sparse grids are a hierarchical representation where the number of points is reduced to O(n * log(n)^d). One disadvantage of such sparse grids is that the algorithms now operate recursively in the dimensions and levels of the sparse grid.

The UpDown scheme allows us to compute the stiffness matrix on such a sparse grid. The stiffness matrix represents the influence of each representation function on the L^2 scalar product. For a detailed description see Dirk Pflüger's PhD thesis. This formalization was developed as an interdisciplinary project (IDP) at the Technische Universität München. notify = hoelzl@in.tum.de [GraphMarkingIBP] title = Verification of the Deutsch-Schorr-Waite Graph Marking Algorithm using Data Refinement author = Viorel Preoteasa , Ralph-Johan Back date = 2010-05-28 topic = Computer science/Algorithms/Graph abstract = The verification of the Deutsch-Schorr-Waite graph marking algorithm is used as a benchmark in many formalizations of pointer programs. The main purpose of this mechanization is to show how data refinement of invariant based programs can be used in verifying practical algorithms. The verification starts with an abstract algorithm working on a graph given by a relation next on nodes. Gradually the abstract program is refined into Deutsch-Schorr-Waite graph marking algorithm where only one bit per graph node of additional memory is used for marking. extra-history = Change history: [2012-01-05]: Updated for the new definition of data refinement and the new syntax for demonic and angelic update statements notify = viorel.preoteasa@aalto.fi [Efficient-Mergesort] title = Efficient Mergesort topic = Computer science/Algorithms date = 2011-11-09 author = Christian Sternagel abstract = We provide a formalization of the mergesort algorithm as used in GHC's Data.List module, proving correctness and stability. Furthermore, experimental data suggests that generated (Haskell-)code for this algorithm is much faster than for previous algorithms available in the Isabelle distribution. extra-history = Change history: [2012-10-24]: Added reference to journal article.
[2018-09-17]: Added theory Efficient_Mergesort that works exclusively with the mutual induction schemas generated by the function package.
[2018-09-19]: Added theory Mergesort_Complexity that proves an upper bound on the number of comparisons that are required by mergesort.
[2018-09-19]: Theory Efficient_Mergesort replaces theory Efficient_Sort but keeps the old name Efficient_Sort. [2020-11-20]: Additional theory Natural_Mergesort that develops an efficient mergesort algorithm without key-functions for educational purposes. notify = c.sternagel@gmail.com [SATSolverVerification] title = Formal Verification of Modern SAT Solvers author = Filip Marić date = 2008-07-23 topic = Computer science/Algorithms abstract = This document contains formal correctness proofs of modern SAT solvers. Following (Krstic et al., 2007) and (Nieuwenhuis et al., 2006), solvers are described using state-transition systems. Several different SAT solver descriptions are given and their partial correctness and termination are proved. These include:

  • a solver based on classical DPLL procedure (using only a backtrack-search with unit propagation),
  • a very general solver with backjumping and learning (similar to the description given in (Nieuwenhuis et al., 2006)), and
  • a solver with a specific conflict analysis algorithm (similar to the description given in (Krstic et al., 2007)).
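The first variant, classical DPLL, can be sketched as a recursive backtracking search with unit propagation (illustrative Python only; the entry itself describes solvers abstractly as state-transition systems):

```python
# Classical DPLL on CNF formulae (illustrative sketch).
# A formula is a list of clauses; a clause is a list of non-zero integer
# literals, negative meaning negated, as in the DIMACS convention.

def simplify(clauses, lit):
    """Assume `lit` is true: drop satisfied clauses, shrink the rest."""
    out = []
    for c in clauses:
        if lit in c:
            continue                       # clause satisfied
        out.append([l for l in c if l != -lit])
    return out

def dpll(clauses):
    """Return True iff the clause set is satisfiable."""
    if not clauses:
        return True                        # no clauses left: satisfied
    if any(not c for c in clauses):
        return False                       # empty clause: conflict
    unit = next((c[0] for c in clauses if len(c) == 1), None)
    if unit is not None:
        return dpll(simplify(clauses, unit))   # unit propagation
    lit = clauses[0][0]                    # branch on the first literal
    return dpll(simplify(clauses, lit)) or dpll(simplify(clauses, -lit))
```

Backjumping and conflict-driven learning, treated in the other two descriptions, refine exactly this search.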
Within the SAT solver correctness proofs, a large number of lemmas about propositional logic and CNF formulae are proved. This theory is self-contained and could be used for further exploration of the properties of CNF-based SAT algorithms. notify = [Transitive-Closure] title = Executable Transitive Closures of Finite Relations topic = Computer science/Algorithms/Graph date = 2011-03-14 author = Christian Sternagel , René Thiemann license = LGPL abstract = We provide a generic work-list algorithm to compute the transitive closure of finite relations where only successors of newly detected states are generated. This algorithm is then instantiated for lists over arbitrary carriers and red-black trees (which are faster but require a linear order on the carrier), respectively. Our formalization was performed as part of the IsaFoR/CeTA project where reflexive transitive closures of large tree automata have to be computed. extra-history = Change history: [2014-09-04] added example simprocs in Finite_Transitive_Closure_Simprocs notify = c.sternagel@gmail.com, rene.thiemann@uibk.ac.at [Transitive-Closure-II] title = Executable Transitive Closures topic = Computer science/Algorithms/Graph date = 2012-02-29 author = René Thiemann license = LGPL abstract =

We provide a generic work-list algorithm to compute the (reflexive-)transitive closure of relations where only successors of newly detected states are generated. In contrast to our previous work, the relations do not have to be finite, but each element must only have finitely many (indirect) successors. Moreover, a subsumption relation can be used instead of pure equality. An executable variant of the algorithm is available where the generic operations are instantiated with list operations.

This formalization was performed as part of the IsaFoR/CeTA project, and it has been used to certify size-change termination proofs where large transitive closures have to be computed.
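The work-list idea, expanding only newly detected states, can be sketched as follows (illustrative Python; the entry additionally handles subsumption instead of equality and relations that are infinite but finitely branching):

```python
# Generic work-list computation of the set of states reachable under a
# successor function, generating only successors of newly detected states
# (illustrative sketch of the reflexive-transitive closure computation).

def rtrancl_from(succs, starts):
    """All states reachable from `starts` under the successor function."""
    seen = set(starts)
    work = list(starts)
    while work:
        x = work.pop()
        for y in succs(x):
            if y not in seen:          # only new states are expanded
                seen.add(y)
                work.append(y)
    return seen

# Example: reachability in a small graph given as an adjacency map
graph = {1: [2, 3], 2: [4], 3: [4], 4: []}
```

Instantiating `succs` with, say, `lambda x: graph.get(x, [])` computes the reachable set from any start states.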

notify = rene.thiemann@uibk.ac.at [MuchAdoAboutTwo] title = Much Ado About Two author = Sascha Böhme date = 2007-11-06 topic = Computer science/Algorithms abstract = This article is an Isabelle formalisation of a paper with the same title. In a similar way as Knuth's 0-1-principle for sorting algorithms, that paper develops a 0-1-2-principle for parallel prefix computations. notify = boehmes@in.tum.de [DiskPaxos] title = Proving the Correctness of Disk Paxos date = 2005-06-22 author = Mauro Jaskelioff , Stephan Merz topic = Computer science/Algorithms/Distributed abstract = Disk Paxos is an algorithm for building arbitrary fault-tolerant distributed systems. The specification of Disk Paxos has been proved correct informally and tested using the TLC model checker, but up to now, it has never been fully formally verified. In this work we have formally verified its correctness using the Isabelle theorem prover and the HOL logic system, showing that Isabelle is a practical tool for verifying properties of TLA+ specifications. notify = kleing@cse.unsw.edu.au [GenClock] title = Formalization of a Generalized Protocol for Clock Synchronization author = Alwen Tiu date = 2005-06-24 topic = Computer science/Algorithms/Distributed abstract = We formalize the generalized Byzantine fault-tolerant clock synchronization protocol of Schneider. This protocol abstracts from particular algorithms or implementations for clock synchronization. This abstraction includes several assumptions on the behaviors of physical clocks and on general properties of concrete algorithms/implementations. Based on these assumptions the correctness of the protocol is proved by Schneider. His proof was later verified by Shankar using the theorem prover EHDM (precursor to PVS). Our formalization in Isabelle/HOL is based on Shankar's formalization. 
notify = kleing@cse.unsw.edu.au [ClockSynchInst] title = Instances of Schneider's generalized protocol of clock synchronization author = Damián Barsotti date = 2006-03-15 topic = Computer science/Algorithms/Distributed abstract = F. B. Schneider ("Understanding protocols for Byzantine clock synchronization") generalizes a number of protocols for Byzantine fault-tolerant clock synchronization and presents a uniform proof for their correctness. In Schneider's schema, each processor maintains a local clock by periodically adjusting each value to one computed by a convergence function applied to the readings of all the clocks. Then, correctness of an algorithm, i.e. that the readings of two clocks at any time are within a fixed bound of each other, is based upon some conditions on the convergence function. To prove that a particular clock synchronization algorithm is correct it suffices to show that the convergence function used by the algorithm meets Schneider's conditions. Using the theorem prover Isabelle, we formalize the proofs that the convergence functions of two algorithms, namely, the Interactive Convergence Algorithm (ICA) of Lamport and Melliar-Smith and the Fault-tolerant Midpoint algorithm of Lundelius-Lynch, meet Schneider's conditions. Furthermore, we experiment on handling some parts of the proofs with fully automatic tools like ICS and CVC-lite. These theories are part of a joint work with Alwen Tiu and Leonor P. Nieto "Verification of Clock Synchronization Algorithms: Experiments on a combination of deductive tools" in proceedings of AVOCS 2005. In this work the correctness of Schneider schema was also verified using Isabelle (entry GenClock in AFP). 
notify = kleing@cse.unsw.edu.au [Heard_Of] title = Verifying Fault-Tolerant Distributed Algorithms in the Heard-Of Model date = 2012-07-27 author = Henri Debrat , Stephan Merz topic = Computer science/Algorithms/Distributed abstract = Distributed computing is inherently based on replication, promising increased tolerance to failures of individual computing nodes or communication channels. Realizing this promise, however, involves quite subtle algorithmic mechanisms, and requires precise statements about the kinds and numbers of faults that an algorithm tolerates (such as process crashes, communication faults or corrupted values). The landmark theorem due to Fischer, Lynch, and Paterson shows that it is impossible to achieve Consensus among N asynchronously communicating nodes in the presence of even a single permanent failure. Existing solutions must rely on assumptions of "partial synchrony".

Indeed, there have been numerous misunderstandings on what exactly a given algorithm is supposed to realize in what kinds of environments. Moreover, the abundance of subtly different computational models complicates comparisons between different algorithms. Charron-Bost and Schiper introduced the Heard-Of model for representing algorithms and failure assumptions in a uniform framework, simplifying comparisons between algorithms.

In this contribution, we represent the Heard-Of model in Isabelle/HOL. We define two semantics of runs of algorithms with different units of atomicity and relate these through a reduction theorem that allows us to verify algorithms in the coarse-grained semantics (where proofs are easier) and infer their correctness for the fine-grained one (which corresponds to actual executions). We instantiate the framework by verifying six Consensus algorithms that differ in the underlying algorithmic mechanisms and the kinds of faults they tolerate. notify = Stephan.Merz@loria.fr [Consensus_Refined] title = Consensus Refined date = 2015-03-18 author = Ognjen Maric <>, Christoph Sprenger topic = Computer science/Algorithms/Distributed abstract = Algorithms for solving the consensus problem are fundamental to distributed computing. Despite their brevity, their ability to operate in concurrent, asynchronous and failure-prone environments comes at the cost of complex and subtle behaviors. Accordingly, understanding how they work and proving their correctness is a non-trivial endeavor where abstraction is immensely helpful. Moreover, research on consensus has yielded a large number of algorithms, many of which appear to share common algorithmic ideas. A natural question is whether and how these similarities can be distilled and described in a precise, unified way. In this work, we combine stepwise refinement and lockstep models to provide an abstract and unified view of a sizeable family of consensus algorithms. Our models provide insights into the design choices underlying the different algorithms, and classify them based on those choices. 
notify = sprenger@inf.ethz.ch [Key_Agreement_Strong_Adversaries] title = Refining Authenticated Key Agreement with Strong Adversaries author = Joseph Lallemand , Christoph Sprenger topic = Computer science/Security license = LGPL date = 2017-01-31 notify = joseph.lallemand@loria.fr, sprenger@inf.ethz.ch abstract = We develop a family of key agreement protocols that are correct by construction. Our work substantially extends prior work on developing security protocols by refinement. First, we strengthen the adversary by allowing him to compromise different resources of protocol participants, such as their long-term keys or their session keys. This enables the systematic development of protocols that ensure strong properties such as perfect forward secrecy. Second, we broaden the class of protocols supported to include those with non-atomic keys and equationally defined cryptographic operators. We use these extensions to develop key agreement protocols including signed Diffie-Hellman and the core of IKEv1 and SKEME. [Security_Protocol_Refinement] title = Developing Security Protocols by Refinement author = Christoph Sprenger , Ivano Somaini<> topic = Computer science/Security license = LGPL date = 2017-05-24 notify = sprenger@inf.ethz.ch abstract = We propose a development method for security protocols based on stepwise refinement. Our refinement strategy transforms abstract security goals into protocols that are secure when operating over an insecure channel controlled by a Dolev-Yao-style intruder. As intermediate levels of abstraction, we employ messageless guard protocols and channel protocols communicating over channels with security properties. These abstractions provide insights on why protocols are secure and foster the development of families of protocols sharing common structure and properties. 
We have implemented our method in Isabelle/HOL and used it to develop different entity authentication and key establishment protocols, including realistic features such as key confirmation, replay caches, and encrypted tickets. Our development highlights that guard protocols and channel protocols provide fundamental abstractions for bridging the gap between security properties and standard protocol descriptions based on cryptographic messages. It also shows that our refinement approach scales to protocols of nontrivial size and complexity. [Abortable_Linearizable_Modules] title = Abortable Linearizable Modules author = Rachid Guerraoui , Viktor Kuncak , Giuliano Losa date = 2012-03-01 topic = Computer science/Algorithms/Distributed abstract = We define the Abortable Linearizable Module automaton (ALM for short) and prove its key composition property using the IOA theory of HOLCF. The ALM is at the heart of the Speculative Linearizability framework. This framework simplifies devising correct speculative algorithms by enabling their decomposition into independent modules that can be analyzed and proved correct in isolation. It is particularly useful when working in a distributed environment, where the need to tolerate faults and asynchrony has made current monolithic protocols so intricate that it is no longer tractable to check their correctness. Our theory contains a typical example of a refinement proof in the I/O-automata framework of Lynch and Tuttle. notify = giuliano@losa.fr, nipkow@in.tum.de [Amortized_Complexity] title = Amortized Complexity Verified author = Tobias Nipkow date = 2014-07-07 topic = Computer science/Data structures abstract = A framework for the analysis of the amortized complexity of functional data structures is formalized in Isabelle/HOL and applied to a number of standard examples and to the following non-trivial ones: skew heaps, splay trees, splay heaps and pairing heaps.
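Amortized analyses of this kind are classically phrased via a potential function; schematically (standard textbook form, not necessarily this entry's exact definitions):

```latex
% amortized cost a_i of the i-th operation with real cost t_i,
% states s_0, s_1, ..., s_n and potential \Phi:
a_i = t_i + \Phi(s_i) - \Phi(s_{i-1})
% summing telescopes, so total real cost is bounded by total amortized cost:
\sum_{i=1}^{n} t_i \;=\; \sum_{i=1}^{n} a_i \;+\; \Phi(s_0) - \Phi(s_n)
\;\le\; \sum_{i=1}^{n} a_i
\quad\text{whenever } \Phi \ge 0 \text{ and } \Phi(s_0) = 0.
```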

A preliminary version of this work (without pairing heaps) is described in a paper published in the proceedings of the conference on Interactive Theorem Proving ITP 2015. An extended version of this publication is available here. extra-history = Change history: [2015-03-17]: Added pairing heaps by Hauke Brinkop.
[2016-07-12]: Moved splay heaps from here to Splay_Tree
[2016-07-14]: Moved pairing heaps from here to the new Pairing_Heap notify = nipkow@in.tum.de [Dynamic_Tables] title = Parameterized Dynamic Tables author = Tobias Nipkow date = 2015-06-07 topic = Computer science/Data structures abstract = This article formalizes the amortized analysis of dynamic tables parameterized with their minimal and maximal load factors and the expansion and contraction factors.
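The parameterized resizing policy can be illustrated with a small sketch (Python; the class and parameter names are hypothetical and the entry's formal load-factor bounds may differ — this only shows the expand/contract mechanism, not the amortized-cost accounting):

```python
# Hypothetical sketch of a parameterized dynamic table: expand by factor e
# when the table is full, contract by factor c when the load drops below
# the minimal load factor f1. Names are illustrative only.

class DynTable:
    def __init__(self, e=2.0, c=2.0, f1=0.25):
        self.cap, self.n = 8, 0
        self.e, self.c, self.f1 = e, c, f1

    def insert(self):
        self.n += 1
        if self.n > self.cap:                  # table full: expand
            self.cap = int(self.cap * self.e)  # copy cost is what gets amortized

    def delete(self):
        self.n -= 1
        if self.n < self.f1 * self.cap and self.cap > 8:
            self.cap = max(8, int(self.cap / self.c))  # contract
```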

A full description is found in a companion paper. notify = nipkow@in.tum.de [AVL-Trees] title = AVL Trees author = Tobias Nipkow , Cornelia Pusch <> date = 2004-03-19 topic = Computer science/Data structures abstract = Two formalizations of AVL trees with room for extensions. The first formalization is monolithic and shorter, the second one in two stages, longer and a bit simpler. The final implementation is the same. If you are interested in developing this further, please contact gerwin.klein@nicta.com.au. extra-history = Change history: [2011-04-11]: Ondrej Kuncar added delete function notify = kleing@cse.unsw.edu.au [BDD] title = BDD Normalisation author = Veronika Ortner <>, Norbert Schirmer <> date = 2008-02-29 topic = Computer science/Data structures abstract = We present the verification of the normalisation of a binary decision diagram (BDD). The normalisation follows the original algorithm presented by Bryant in 1986 and transforms an ordered BDD in a reduced, ordered and shared BDD. The verification is based on Hoare logics. notify = kleing@cse.unsw.edu.au, norbert.schirmer@web.de [BinarySearchTree] title = Binary Search Trees author = Viktor Kuncak date = 2004-04-05 topic = Computer science/Data structures abstract = The correctness is shown of binary search tree operations (lookup, insert and remove) implementing a set. Two versions are given, for both structured and linear (tactic-style) proofs. An implementation of integer-indexed maps is also verified. notify = lp15@cam.ac.uk [Splay_Tree] title = Splay Tree author = Tobias Nipkow notify = nipkow@in.tum.de date = 2014-08-12 topic = Computer science/Data structures abstract = Splay trees are self-adjusting binary search trees which were invented by Sleator and Tarjan [JACM 1985]. This entry provides executable and verified functional splay trees as well as the related splay heaps (due to Okasaki).

The amortized complexity of splay trees and heaps is analyzed in the AFP entry Amortized Complexity. extra-history = Change history: [2016-07-12]: Moved splay heaps here from Amortized_Complexity [Root_Balanced_Tree] title = Root-Balanced Tree author = Tobias Nipkow notify = nipkow@in.tum.de date = 2017-08-20 topic = Computer science/Data structures abstract =

Andersson introduced general balanced trees, search trees based on the design principle of partial rebuilding: perform update operations naively until the tree becomes too unbalanced, at which point a whole subtree is rebalanced. This article defines and analyzes a functional version of general balanced trees, which we call root-balanced trees. Using a lightweight model of execution time, amortized logarithmic complexity is verified in the theorem prover Isabelle.
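The partial-rebuilding idea can be sketched as follows (Python, hypothetical names, and a simplified root-level rebuild; the verified version rebuilds subtrees and uses a precise balance criterion):

```python
# Hypothetical sketch of partial rebuilding: insert naively, and once the
# height exceeds a logarithmic bound, rebuild from the sorted contents.
import math

def insert(t, x):  # a tree is None or (left, key, right)
    if t is None:
        return (None, x, None)
    l, k, r = t
    if x < k:
        return (insert(l, x), k, r)
    elif x > k:
        return (l, k, insert(r, x))
    return t

def inorder(t):
    return [] if t is None else inorder(t[0]) + [t[1]] + inorder(t[2])

def build(xs):  # perfectly balanced tree from a sorted list
    if not xs:
        return None
    m = len(xs) // 2
    return (build(xs[:m]), xs[m], build(xs[m+1:]))

def height(t):
    return 0 if t is None else 1 + max(height(t[0]), height(t[2]))

def rb_insert(t, x, c=1.5):
    t = insert(t, x)
    xs = inorder(t)
    if height(t) > c * math.log2(len(xs) + 1) + 1:
        t = build(xs)  # rebuild (at the root, for simplicity)
    return t
```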

This is the Isabelle formalization of the material described in the APLAS 2017 article Verified Root-Balanced Trees by the same author, which also presents experimental results that show competitiveness of root-balanced trees with AVL and red-black trees.

[Skew_Heap] title = Skew Heap author = Tobias Nipkow date = 2014-08-13 topic = Computer science/Data structures abstract = Skew heaps are an amazingly simple and lightweight implementation of priority queues. They were invented by Sleator and Tarjan [SIAM 1986] and have logarithmic amortized complexity. This entry provides executable and verified functional skew heaps.
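The core of a skew heap is a single merge operation that unconditionally swaps children; a minimal Python sketch (illustrative, not the verified Isabelle code):

```python
# Hypothetical sketch of a skew heap (Sleator & Tarjan); a heap is
# None or a tuple (value, left, right). Names are illustrative.

def merge(h1, h2):
    if h1 is None:
        return h2
    if h2 is None:
        return h1
    if h2[0] < h1[0]:
        h1, h2 = h2, h1
    v, l, r = h1
    # Unconditionally swap children: the "skew" step that yields
    # logarithmic amortized complexity without balance bookkeeping.
    return (v, merge(h2, r), l)

def insert(x, h):
    return merge((x, None, None), h)

def del_min(h):
    v, l, r = h
    return v, merge(l, r)
```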

The amortized complexity of skew heaps is analyzed in the AFP entry Amortized Complexity. notify = nipkow@in.tum.de [Pairing_Heap] title = Pairing Heap author = Hauke Brinkop , Tobias Nipkow date = 2016-07-14 topic = Computer science/Data structures abstract = This library defines three different versions of pairing heaps: a functional version of the original design based on binary trees [Fredman et al. 1986], the version by Okasaki [1998] and a modified version of the latter that is free of structural invariants.
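The Okasaki-style design can be sketched as follows (Python, hypothetical function names; the verified theories differ in detail): insert melds a singleton heap, and delete-min melds the root's children in two passes:

```python
# Hypothetical sketch of an Okasaki-style pairing heap: a heap is None
# or (root, [children]). Names are illustrative, not from the entry.

def meld(h1, h2):
    if h1 is None:
        return h2
    if h2 is None:
        return h1
    (x, hs1), (y, hs2) = h1, h2
    # link: the heap with the larger root becomes a child of the other
    return (x, [h2] + hs1) if x <= y else (y, [h1] + hs2)

def insert(x, h):
    return meld((x, []), h)

def merge_pairs(hs):
    # two-pass pairing: meld adjacent pairs, then meld the results
    if not hs:
        return None
    if len(hs) == 1:
        return hs[0]
    return meld(meld(hs[0], hs[1]), merge_pairs(hs[2:]))

def del_min(h):
    x, hs = h
    return x, merge_pairs(hs)
```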

The amortized complexity of pairing heaps is analyzed in the AFP article Amortized Complexity. extra-0 = Origin: This library was extracted from Amortized Complexity and extended. notify = nipkow@in.tum.de [Priority_Queue_Braun] title = Priority Queues Based on Braun Trees author = Tobias Nipkow date = 2014-09-04 topic = Computer science/Data structures abstract = This entry verifies priority queues based on Braun trees. Insertion and deletion take logarithmic time and preserve the balanced nature of Braun trees. Two implementations of deletion are provided. notify = nipkow@in.tum.de extra-history = Change history: [2019-12-16]: Added theory Priority_Queue_Braun2 with second version of del_min [Binomial-Queues] title = Functional Binomial Queues author = René Neumann date = 2010-10-28 topic = Computer science/Data structures abstract = Priority queues are an important data structure and efficient implementations of them are crucial. We implement a functional variant of binomial queues in Isabelle/HOL and show its functional correctness. A verification against an abstract reference specification of priority queues has also been attempted, but could not be achieved to the full extent. notify = florian.haftmann@informatik.tu-muenchen.de [Binomial-Heaps] title = Binomial Heaps and Skew Binomial Heaps author = Rene Meis , Finn Nielsen , Peter Lammich date = 2010-10-28 topic = Computer science/Data structures abstract = We implement and prove correct binomial heaps and skew binomial heaps. Both are data-structures for priority queues. While binomial heaps have logarithmic findMin, deleteMin, insert, and meld operations, skew binomial heaps have constant time findMin, insert, and meld operations, and only the deleteMin-operation is logarithmic. This is achieved by using skew links to avoid cascading linking on insert-operations, and data-structural bootstrapping to get constant-time findMin and meld operations. Our implementation follows the paper by Brodal and Okasaki. 
notify = peter.lammich@uni-muenster.de [Finger-Trees] title = Finger Trees author = Benedikt Nordhoff , Stefan Körner , Peter Lammich date = 2010-10-28 topic = Computer science/Data structures abstract = We implement and prove correct 2-3 finger trees. Finger trees are a general purpose data structure, that can be used to efficiently implement other data structures, such as priority queues. Intuitively, a finger tree is an annotated sequence, where the annotations are elements of a monoid. Apart from operations to access the ends of the sequence, the main operation is to split the sequence at the point where a monotone predicate over the sum of the left part of the sequence becomes true for the first time. The implementation follows the paper of Hinze and Paterson. The code generator can be used to get efficient, verified code. notify = peter.lammich@uni-muenster.de [Trie] title = Trie author = Andreas Lochbihler , Tobias Nipkow date = 2015-03-30 topic = Computer science/Data structures abstract = This article formalizes the ``trie'' data structure invented by Fredkin [CACM 1960]. It also provides a specialization where the entries in the trie are lists. extra-0 = Origin: This article was extracted from existing articles by the authors. notify = nipkow@in.tum.de [FinFun] title = Code Generation for Functions as Data author = Andreas Lochbihler date = 2009-05-06 topic = Computer science/Data structures abstract = FinFuns are total functions that are constant except for a finite set of points, i.e. a generalisation of finite maps. They are formalised as a new type in Isabelle/HOL such that the code generator can handle equality tests and quantification on FinFuns. On the code output level, FinFuns are explicitly represented by constant functions and pointwise updates, similarly to associative lists. Inside the logic, they behave like ordinary functions with extensionality. 
Via the update/constant pattern, a recursion combinator and an induction rule for FinFuns allow defining and reasoning about operators on FinFuns that are also executable. extra-history = Change history: [2010-08-13]: new concept domain of a FinFun as a FinFun (revision 34b3517cbc09)
[2010-11-04]: new conversion function from FinFun to list of elements in the domain (revision 0c167102e6ed)
[2012-03-07]: replace sets as FinFuns by predicates as FinFuns because the set type constructor has been reintroduced (revision b7aa87989f3a) notify = nipkow@in.tum.de [Collections] title = Collections Framework author = Peter Lammich contributors = Andreas Lochbihler , Thomas Tuerk <> date = 2009-11-25 topic = Computer science/Data structures abstract = This development provides an efficient, extensible, machine checked collections framework. The library adopts the concepts of interface, implementation and generic algorithm from object-oriented programming and implements them in Isabelle/HOL. The framework features the use of data refinement techniques to refine an abstract specification (using high-level concepts like sets) to a more concrete implementation (using collection datastructures, like red-black-trees). The code-generator of Isabelle/HOL can be used to generate efficient code. extra-history = Change history: [2010-10-08]: New Interfaces: OrderedSet, OrderedMap, List. Fifo now implements list-interface: Function names changed: put/get --> enqueue/dequeue. New Implementations: ArrayList, ArrayHashMap, ArrayHashSet, TrieMap, TrieSet. Invariant-free datastructures: Invariant implicitly hidden in typedef. Record-interfaces: All operations of an interface encapsulated as record. Examples moved to examples subdirectory.
[2010-12-01]: New Interfaces: Priority Queues, Annotated Lists. Implemented by finger trees, (skew) binomial queues.
[2011-10-10]: SetSpec: Added operations: sng, isSng, bexists, size_abort, diff, filter, iterate_rule_insertP MapSpec: Added operations: sng, isSng, iterate_rule_insertP, bexists, size, size_abort, restrict, map_image_filter, map_value_image_filter Some maintenance changes
[2012-04-25]: New iterator foundation by Tuerk. Various maintenance changes.
[2012-08]: Collections V2. New features: Polymorphic iterators. Generic algorithm instantiation where required. Naming scheme changed from xx_opname to xx.opname. A compatibility file CollectionsV1 tries to simplify porting of existing theories, by providing old naming scheme and the old monomorphic iterator locales.
[2013-09]: Added Generic Collection Framework based on Autoref. The GenCF provides: Arbitrary nesting, full integration with Autoref.
[2014-06]: Maintenance changes to GenCF: Optimized inj_image on list_set. op_set_cart (Cartesian product). big-Union operation. atLeastLessThan - operation ({a..<b})
notify = lammich@in.tum.de [Containers] title = Light-weight Containers author = Andreas Lochbihler contributors = René Thiemann date = 2013-04-15 topic = Computer science/Data structures abstract = This development provides a framework for container types like sets and maps such that generated code implements these containers with different (efficient) data structures. Thanks to type classes and refinement during code generation, this light-weight approach can seamlessly replace Isabelle's default setup for code generation. Heuristics automatically pick one of the available data structures depending on the type of elements to be stored, but users can also choose on their own. The extensible design permits adding more implementations at any time.

To support arbitrary nesting of sets, we define a linear order on sets based on a linear order of the elements and provide efficient implementations. It even allows comparing complements with non-complements. extra-history = Change history: [2013-07-11]: add pretty printing for sets (revision 7f3f52c5f5fa)
[2013-09-20]: provide generators for canonical type class instantiations (revision 159f4401f4a8 by René Thiemann)
[2014-07-08]: add support for going from partial functions to mappings (revision 7a6fc957e8ed)
[2018-03-05]: add two application examples: depth-first search and 2SAT (revision e5e1a1da2411) notify = mail@andreas-lochbihler.de [FileRefinement] title = File Refinement author = Karen Zee , Viktor Kuncak date = 2004-12-09 topic = Computer science/Data structures abstract = These theories illustrates the verification of basic file operations (file creation, file read and file write) in the Isabelle theorem prover. We describe a file at two levels of abstraction: an abstract file represented as a resizable array, and a concrete file represented using data blocks. notify = kkz@mit.edu [Datatype_Order_Generator] title = Generating linear orders for datatypes author = René Thiemann date = 2012-08-07 topic = Computer science/Data structures abstract = We provide a framework for registering automatic methods to derive class instances of datatypes, as it is possible using Haskell's ``deriving Ord, Show, ...'' feature.

We further implemented such automatic methods to derive (linear) orders or hash-functions which are required in the Isabelle Collection Framework. Moreover, for the tactic of Huffman and Krauss to show that a datatype is countable, we implemented a wrapper so that this tactic becomes accessible in our framework.
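Such derived linear orders typically follow the standard scheme: compare constructor indices first, then fields left-to-right. An illustrative Python sketch for a binary-tree datatype (hypothetical, not the generated Isabelle code):

```python
# Hypothetical sketch of the usual 'deriving Ord' scheme for a datatype
#   tree = Leaf | Node tree int tree:
# compare constructors first, then fields lexicographically.
from dataclasses import dataclass
from typing import Union

@dataclass
class Leaf:
    pass

@dataclass
class Node:
    left: 'Tree'
    val: int
    right: 'Tree'

Tree = Union[Leaf, Node]

def cmp_tree(a: Tree, b: Tree) -> int:
    # constructor index: Leaf < Node
    ia = 0 if isinstance(a, Leaf) else 1
    ib = 0 if isinstance(b, Leaf) else 1
    if ia != ib:
        return -1 if ia < ib else 1
    if ia == 0:
        return 0
    # same constructor: compare fields left-to-right
    for x, y in ((a.left, b.left), (a.val, b.val), (a.right, b.right)):
        c = cmp_tree(x, y) if isinstance(x, (Leaf, Node)) else (x > y) - (x < y)
        if c != 0:
            return c
    return 0
```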

Our formalization was performed as part of the IsaFoR/CeTA project. With our new tactic we could completely remove tedious proofs for linear orders of two datatypes.

This development is aimed at datatypes generated by the "old_datatype" command. notify = rene.thiemann@uibk.ac.at [Deriving] title = Deriving class instances for datatypes author = Christian Sternagel , René Thiemann date = 2015-03-11 topic = Computer science/Data structures abstract =

We provide a framework for registering automatic methods to derive class instances of datatypes, as it is possible using Haskell's ``deriving Ord, Show, ...'' feature.

We further implemented such automatic methods to derive comparators, linear orders, parametrizable equality functions, and hash-functions which are required in the Isabelle Collection Framework and the Container Framework. Moreover, for the tactic of Blanchette to show that a datatype is countable, we implemented a wrapper so that this tactic becomes accessible in our framework. All of the generators are based on the infrastructure that is provided by the BNF-based datatype package.

Our formalization was performed as part of the IsaFoR/CeTA project. With our new tactics we could remove several tedious proofs for (conditional) linear orders, and conditional equality operators within IsaFoR and the Container Framework.

notify = rene.thiemann@uibk.ac.at [List-Index] title = List Index date = 2010-02-20 author = Tobias Nipkow topic = Computer science/Data structures abstract = This theory provides functions for finding the index of an element in a list, by predicate and by value. notify = nipkow@in.tum.de [List-Infinite] title = Infinite Lists date = 2011-02-23 author = David Trachtenherz <> topic = Computer science/Data structures abstract = We introduce a theory of infinite lists in HOL formalized as functions over naturals (folder ListInf, theories ListInf and ListInf_Prefix). It also provides additional results for finite lists (theory ListInf/List2), natural numbers (folder CommonArith, esp. division/modulo, naturals with infinity), sets (folder CommonSet, esp. cutting/truncating sets, traversing sets of naturals). notify = nipkow@in.tum.de [Matrix] title = Executable Matrix Operations on Matrices of Arbitrary Dimensions topic = Computer science/Data structures date = 2010-06-17 author = Christian Sternagel , René Thiemann license = LGPL abstract = We provide the operations of matrix addition, multiplication, transposition, and matrix comparisons as executable functions over ordered semirings. Moreover, it is proven that strongly normalizing (monotone) orders can be lifted to strongly normalizing (monotone) orders over matrices. We further show that the standard semirings over the naturals, integers, and rationals, as well as the arctic semirings satisfy the axioms that are required by our matrix theory. Our formalization is part of the CeTA system which contains several termination techniques. The provided theories have been essential to formalize matrix-interpretations and arctic interpretations. extra-history = Change history: [2010-09-17]: Moved theory on arbitrary (ordered) semirings to Abstract Rewriting. 
notify = rene.thiemann@uibk.ac.at, christian.sternagel@uibk.ac.at [Matrix_Tensor] title = Tensor Product of Matrices topic = Computer science/Data structures, Mathematics/Algebra date = 2016-01-18 author = T.V.H. Prathamesh abstract = In this work, the Kronecker tensor product of matrices and the proofs of some of its properties are formalized. Properties which have been formalized include associativity of the tensor product and the mixed-product property. notify = prathamesh@imsc.res.in [Huffman] title = The Textbook Proof of Huffman's Algorithm author = Jasmin Christian Blanchette date = 2008-10-15 topic = Computer science/Data structures abstract = Huffman's algorithm is a procedure for constructing a binary tree with minimum weighted path length. This report presents a formal proof of the correctness of Huffman's algorithm written using Isabelle/HOL. Our proof closely follows the sketches found in standard algorithms textbooks, uncovering a few snags in the process. Another distinguishing feature of our formalization is the use of custom induction rules to help Isabelle's automatic tactics, leading to very short proofs for most of the lemmas. notify = jasmin.blanchette@gmail.com [Partial_Function_MR] title = Mutually Recursive Partial Functions author = René Thiemann topic = Computer science/Functional programming date = 2014-02-18 license = LGPL abstract = We provide a wrapper around the partial-function command that supports mutual recursion. notify = rene.thiemann@uibk.ac.at [Lifting_Definition_Option] title = Lifting Definition Option author = René Thiemann topic = Computer science/Functional programming date = 2014-10-13 license = LGPL abstract = We implemented a command that can be used to easily generate elements of a restricted type {x :: 'a. P x}, provided the definition is of the form f ys = (if check ys then Some(generate ys :: 'a) else None) where ys is a list of variables y1 ... yn and check ys ==> P(generate ys) can be proved.
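As a loose analogue (Python, hypothetical names; the actual command works on Isabelle subset types), the supported definitions follow a smart-constructor pattern in which the check is performed once and, on success, justifies constructing a value of the restricted type:

```python
# Hypothetical Python analogue of f ys = (if check ys then
# Some (generate ys) else None); not the Isabelle mechanism itself.
from typing import List, Optional

class Positive:
    """Stand-in for a restricted type {x :: int. x > 0}; the
    invariant is checked once at construction."""
    def __init__(self, n: int) -> None:
        assert n > 0
        self.n = n

def f(ys: List[int]) -> Optional[Positive]:
    # 'check ys' corresponds to the sum being positive,
    # 'generate ys' to packaging the sum; the test runs only once.
    total = sum(ys)
    if total > 0:
        return Positive(total)
    return None
```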

In principle, such a definition is also directly possible using the lift_definition command. However, the resulting definition is then not suitable for code generation. To this end, we automated a more complex construction of Joachim Breitner which is amenable to code generation, and where the test check ys will only be performed once. In the automation, one auxiliary type is created, and Isabelle's lifting and transfer packages are invoked several times. notify = rene.thiemann@uibk.ac.at [Coinductive] title = Coinductive topic = Computer science/Functional programming author = Andreas Lochbihler contributors = Johannes Hölzl date = 2010-02-12 abstract = This article collects formalisations of general-purpose coinductive data types and sets. Currently, it contains coinductive natural numbers, coinductive lists, i.e. lazy lists or streams, infinite streams, coinductive terminated lists, coinductive resumptions, a library of operations on coinductive lists, and a version of König's lemma as an application for coinductive lists.
The initial theory was contributed by Paulson and Wenzel. Extensions and other coinductive formalisations of general interest are welcome. extra-history = Change history: [2010-06-10]: coinductive lists: setup for quotient package (revision 015574f3bf3c)
[2010-06-28]: new codatatype terminated lazy lists (revision e12de475c558)
[2010-08-04]: terminated lazy lists: setup for quotient package; more lemmas (revision 6ead626f1d01)
[2010-08-17]: Koenig's lemma as an example application for coinductive lists (revision f81ce373fa96)
[2011-02-01]: lazy implementation of coinductive (terminated) lists for the code generator (revision 6034973dce83)
[2011-07-20]: new codatatype resumption (revision 811364c776c7)
[2012-06-27]: new codatatype stream with operations (with contributions by Peter Gammie) (revision dd789a56473c)
[2013-03-13]: construct codatatypes with the BNF package and adjust the definitions and proofs, setup for lifting and transfer packages (revision f593eda5b2c0)
[2013-09-20]: stream theory uses type and operations from HOL/BNF/Examples/Stream (revision 692809b2b262)
[2014-04-03]: ccpo structure on codatatypes used to define ldrop, ldropWhile, lfilter, lconcat as least fixpoint; ccpo topology on coinductive lists contributed by Johannes Hölzl; added examples (revision 23cd8156bd42)
notify = mail@andreas-lochbihler.de [Stream-Fusion] title = Stream Fusion author = Brian Huffman topic = Computer science/Functional programming date = 2009-04-29 abstract = Stream Fusion is a system for removing intermediate list structures from Haskell programs; it consists of a Haskell library along with several compiler rewrite rules. (The library is available online.)
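The underlying idea — a stream as a step function over a state, producing Done/Skip/Yield results, so that composed operations never build intermediate lists — can be sketched in Python (hypothetical names, following the general technique rather than the HOLCF formalization):

```python
# A stream is (step, state); step returns ('Done',), ('Skip', s'), or
# ('Yield', x, s'). map/filter rewrite the step function, so composing
# them never materializes an intermediate list.

def stream_of_list(xs):
    def step(i):
        return ('Yield', xs[i], i + 1) if i < len(xs) else ('Done',)
    return (step, 0)

def map_s(f, stream):
    step, s0 = stream
    def step2(s):
        r = step(s)
        if r[0] == 'Yield':
            return ('Yield', f(r[1]), r[2])
        return r
    return (step2, s0)

def filter_s(p, stream):
    step, s0 = stream
    def step2(s):
        r = step(s)
        if r[0] == 'Yield' and not p(r[1]):
            return ('Skip', r[2])
        return r
    return (step2, s0)

def list_of_stream(stream):
    step, s = stream
    out = []
    while True:
        r = step(s)
        if r[0] == 'Done':
            return out
        if r[0] == 'Yield':
            out.append(r[1])
        s = r[-1]
```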

These theories contain a formalization of much of the Stream Fusion library in HOLCF. Lazy list and stream types are defined, along with coercions between the two types, as well as an equivalence relation for streams that generate the same list. List and stream versions of map, filter, foldr, enumFromTo, append, zipWith, and concatMap are defined, and the stream versions are shown to respect stream equivalence. notify = brianh@cs.pdx.edu [Tycon] title = Type Constructor Classes and Monad Transformers author = Brian Huffman date = 2012-06-26 topic = Computer science/Functional programming abstract = These theories contain a formalization of first class type constructors and axiomatic constructor classes for HOLCF. This work is described in detail in the ICFP 2012 paper Formal Verification of Monad Transformers by the author. The formalization is a revised and updated version of earlier joint work with Matthews and White.

Based on the hierarchy of type classes in Haskell, we define classes for functors, monads, monad-plus, etc. Each one includes all the standard laws as axioms. We also provide a new user command, tycondef, for defining new type constructors in HOLCF. Using tycondef, we instantiate the type class hierarchy with various monads and monad transformers. notify = huffman@in.tum.de [CoreC++] title = CoreC++ author = Daniel Wasserrab date = 2006-05-15 topic = Computer science/Programming languages/Language definitions abstract = We present an operational semantics and type safety proof for multiple inheritance in C++. The semantics models the behavior of method calls, field accesses, and two forms of casts in C++ class hierarchies. For explanations see the OOPSLA 2006 paper by Wasserrab, Nipkow, Snelting and Tip. notify = nipkow@in.tum.de [FeatherweightJava] title = A Theory of Featherweight Java in Isabelle/HOL author = J. Nathan Foster , Dimitrios Vytiniotis date = 2006-03-31 topic = Computer science/Programming languages/Language definitions abstract = We formalize the type system, small-step operational semantics, and type soundness proof for Featherweight Java, a simple object calculus, in Isabelle/HOL. notify = kleing@cse.unsw.edu.au [Jinja] title = Jinja is not Java author = Gerwin Klein , Tobias Nipkow date = 2005-06-01 topic = Computer science/Programming languages/Language definitions abstract = We introduce Jinja, a Java-like programming language with a formal semantics designed to exhibit core features of the Java language architecture. Jinja is a compromise between realism of the language and tractability and clarity of the formal semantics. 
The following aspects are formalised: a big and a small step operational semantics for Jinja and a proof of their equivalence; a type system and a definite initialisation analysis; a type safety proof of the small step semantics; a virtual machine (JVM), its operational semantics and its type system; a type safety proof for the JVM; a bytecode verifier, i.e. data flow analyser for the JVM; a correctness proof of the bytecode verifier w.r.t. the type system; a compiler and a proof that it preserves semantics and well-typedness. The emphasis of this work is not on particular language features but on providing a unified model of the source language, the virtual machine and the compiler. The whole development has been carried out in the theorem prover Isabelle/HOL. notify = kleing@cse.unsw.edu.au, nipkow@in.tum.de [JinjaThreads] title = Jinja with Threads author = Andreas Lochbihler date = 2007-12-03 topic = Computer science/Programming languages/Language definitions abstract = We extend the Jinja source code semantics by Klein and Nipkow with Java-style arrays and threads. Concurrency is captured in a generic framework semantics for adding concurrency through interleaving to a sequential semantics, which features dynamic thread creation, inter-thread communication via shared memory, lock synchronisation and joins. Also, threads can suspend themselves and be notified by others. We instantiate the framework with the adapted versions of both Jinja source and byte code and show type safety for the multithreaded case. Equally, the compiler from source to byte code is extended, for which we prove weak bisimilarity between the source code small step semantics and the defensive Jinja virtual machine. On top of this, we formalise the JMM and show the DRF guarantee and consistency. For description of the different parts, see Lochbihler's papers at FOOL 2008, ESOP 2010, ITP 2011, and ESOP 2012. 
extra-history = Change history: [2008-04-23]: added bytecode formalisation with arrays and threads, added thread joins (revision f74a8be156a7)
[2009-04-27]: added verified compiler from source code to bytecode; encapsulate native methods in separate semantics (revision e4f26541e58a)
[2009-11-30]: extended compiler correctness proof to infinite and deadlocking computations (revision e50282397435)
[2010-06-08]: added thread interruption; new abstract memory model with sequential consistency as implementation (revision 0cb9e8dbd78d)
[2010-06-28]: new thread interruption model (revision c0440d0a1177)
[2010-10-15]: preliminary version of the Java memory model for source code (revision 02fee0ef3ca2)
[2010-12-16]: improved version of the Java memory model, also for bytecode executable scheduler for source code semantics (revision 1f41c1842f5a)
[2011-02-02]: simplified code generator setup new random scheduler (revision 3059dafd013f)
[2011-07-21]: new interruption model, generalized JMM proof of DRF guarantee, allow class Object to declare methods and fields, simplified subtyping relation, corrected division and modulo implementation (revision 46e4181ed142)
[2012-02-16]: added example programs (revision bf0b06c8913d)
[2012-11-21]: type safety proof for the Java memory model, allow spurious wake-ups (revision 76063d860ae0)
[2013-05-16]: support for non-deterministic memory allocators (revision cc3344a49ced)
[2017-10-20]: add an atomic compare-and-swap operation for volatile fields (revision a6189b1d6b30)
notify = mail@andreas-lochbihler.de [Locally-Nameless-Sigma] title = Locally Nameless Sigma Calculus author = Ludovic Henrio , Florian Kammüller , Bianca Lutz , Henry Sudhof date = 2010-04-30 topic = Computer science/Programming languages/Language definitions abstract = We present a Theory of Objects based on the original functional sigma-calculus by Abadi and Cardelli but with an additional parameter to methods. We prove confluence of the operational semantics following the outline of Nipkow's proof of confluence for the lambda-calculus reusing his theory Commutation, a generic diamond lemma reduction. We furthermore formalize a simple type system for our sigma-calculus including a proof of type safety. The entire development uses the concept of Locally Nameless representation for binders. We reuse an earlier proof of confluence for a simpler sigma-calculus based on de Bruijn indices and lists to represent objects. notify = nipkow@in.tum.de [Attack_Trees] title = Attack Trees in Isabelle for GDPR compliance of IoT healthcare systems author = Florian Kammueller topic = Computer science/Security date = 2020-04-27 notify = florian.kammuller@gmail.com abstract = In this article, we present a proof theory for Attack Trees. Attack Trees are a well established and useful model for the construction of attacks on systems since they allow a stepwise exploration of high level attacks in application scenarios. Using the expressiveness of Higher Order Logic in Isabelle, we develop a generic theory of Attack Trees with a state-based semantics based on Kripke structures and CTL. The resulting framework allows mechanically supported logic analysis of the meta-theory of the proof calculus of Attack Trees and at the same time the developed proof theory enables application to case studies. A central correctness and completeness result proved in Isabelle establishes a connection between the notion of Attack Tree validity and CTL. 
The application is illustrated on the example of a healthcare IoT system and GDPR compliance verification. [AutoFocus-Stream] title = AutoFocus Stream Processing for Single-Clocking and Multi-Clocking Semantics author = David Trachtenherz <> date = 2011-02-23 topic = Computer science/Programming languages/Language definitions abstract = We formalize the AutoFocus Semantics (a time-synchronous subset of the Focus formalism) as stream processing functions on finite and infinite message streams represented as finite/infinite lists. The formalization comprises both the conventional single-clocking semantics (uniform global clock for all components and communications channels) and its extension to multi-clocking semantics (internal execution clocking of a component may be a multiple of the external communication clocking). The semantics is defined by generic stream processing functions making it suitable for simulation/code generation in Isabelle/HOL. Furthermore, a number of AutoFocus semantics properties are formalized using definitions from the IntervalLogic theories. notify = nipkow@in.tum.de [FocusStreamsCaseStudies] title = Stream Processing Components: Isabelle/HOL Formalisation and Case Studies author = Maria Spichkova date = 2013-11-14 topic = Computer science/Programming languages/Language definitions abstract = This set of theories presents an Isabelle/HOL formalisation of stream processing components introduced in Focus, a framework for formal specification and development of interactive systems. This is an extended and updated version of the formalisation, which was elaborated within the methodology "Focus on Isabelle". In addition, we also applied the formalisation on three case studies that cover different application areas: process control (Steam Boiler System), data transmission (FlexRay communication protocol), memory and processing components (Automotive-Gateway System). 
notify = lp15@cam.ac.uk, maria.spichkova@rmit.edu.au [Isabelle_Meta_Model] title = A Meta-Model for the Isabelle API author = Frédéric Tuong , Burkhart Wolff date = 2015-09-16 topic = Computer science/Programming languages/Language definitions abstract = We represent a theory of (a fragment of) Isabelle/HOL in Isabelle/HOL. The purpose of this exercise is to write packages for domain-specific specifications such as class models, B-machines, ..., and generally speaking, any domain-specific languages whose abstract syntax can be defined by a HOL "datatype". On this basis, the Isabelle code-generator can then be used to generate code for global context transformations as well as tactic code.

Consequently the package is geared towards parsing, printing and code-generation to the Isabelle API. It is at the moment not sufficiently rich for doing meta theory on Isabelle itself. Extensions in this direction are possible though.

Moreover, the chosen fragment is fairly rudimentary. However, it should be easily adaptable to one's needs if a package is written on top of it. The supported API contains types, terms, transformations of the global context such as definitions and data-type declarations, as well as infrastructure for Isar-setups.

This theory is drawn from the Featherweight OCL project where it is used to construct a package for object-oriented data-type theories generated from UML class diagrams. Featherweight OCL, for example, allows for both the direct execution of compiled tactic code by the Isabelle API and the generation of ".thy"-files for debugging purposes.

Experience gained from this project shows that the compiled code is sufficiently efficient for practical purposes while being based on a formal model on which properties of the package can be proven, such as termination of certain transformations, correctness, etc. notify = tuong@users.gforge.inria.fr, wolff@lri.fr [Clean] title = Clean - An Abstract Imperative Programming Language and its Theory author = Frédéric Tuong , Burkhart Wolff topic = Computer science/Programming languages, Computer science/Semantics date = 2019-10-04 notify = wolff@lri.fr, ftuong@lri.fr abstract = Clean is based on a simple, abstract execution model for an imperative target language. “Abstract” is understood in contrast to “Concrete Semantics”; alternatively, the term “shallow-style embedding” could be used. It strives for a type-safe notion of program-variables, an incremental construction of the typed state-space, support of incremental verification, and open-world extensibility of new type definitions being intertwined with the program definitions. Clean is based on a “no-frills” state-exception monad with the usual definitions of bind and unit for the compositional glue of state-based computations. Clean offers conditionals and loops supporting C-like control-flow operators such as break and return. The state-space construction is based on the extensible record package. Direct recursion of procedures is supported. Clean’s design strives for extreme simplicity. It is geared towards symbolic execution and proven correct verification tools. The underlying libraries of this package, however, deliberately restrict themselves to the most elementary infrastructure for these tasks. The package is intended to serve as a demonstrator semantic backend for Isabelle/C, or for test-generation techniques. 
[PCF] title = Logical Relations for PCF author = Peter Gammie date = 2012-07-01 topic = Computer science/Programming languages/Lambda calculi abstract = We apply Andy Pitts's methods of defining relations over domains to several classical results in the literature. We show that the Y combinator coincides with the domain-theoretic fixpoint operator, that parallel-or and the Plotkin existential are not definable in PCF, that the continuation semantics for PCF coincides with the direct semantics, and that our domain-theoretic semantics for PCF is adequate for reasoning about contextual equivalence in an operational semantics. Our version of PCF is untyped and has both strict and non-strict function abstractions. The development is carried out in HOLCF. notify = peteg42@gmail.com [POPLmark-deBruijn] title = POPLmark Challenge Via de Bruijn Indices author = Stefan Berghofer date = 2007-08-02 topic = Computer science/Programming languages/Lambda calculi abstract = We present a solution to the POPLmark challenge designed by Aydemir et al., which has as a goal the formalization of the meta-theory of System F<:. The formalization is carried out in the theorem prover Isabelle/HOL using an encoding based on de Bruijn indices. We start with a relatively simple formalization covering only the basic features of System F<:, and explain how it can be extended to also cover records and more advanced binding constructs. notify = berghofe@in.tum.de [Lam-ml-Normalization] title = Strong Normalization of Moggi's Computational Metalanguage author = Christian Doczkal date = 2010-08-29 topic = Computer science/Programming languages/Lambda calculi abstract = Handling variable binding is one of the main difficulties in formal proofs. In this context, Moggi's computational metalanguage serves as an interesting case study. It features monadic types and a commuting conversion rule that rearranges the binding structure. 
Lindley and Stark have given an elegant proof of strong normalization for this calculus. The key construction in their proof is a notion of relational TT-lifting, using stacks of elimination contexts to obtain a Girard-Tait style logical relation. I give a formalization of their proof in Isabelle/HOL-Nominal with a particular emphasis on the treatment of bound variables. notify = doczkal@ps.uni-saarland.de, nipkow@in.tum.de [MiniML] title = Mini ML author = Wolfgang Naraschewski <>, Tobias Nipkow date = 2004-03-19 topic = Computer science/Programming languages/Type systems abstract = This theory defines the type inference rules and the type inference algorithm W for MiniML (simply-typed lambda terms with let) due to Milner. It proves the soundness and completeness of W w.r.t. the rules. notify = kleing@cse.unsw.edu.au [Simpl] title = A Sequential Imperative Programming Language Syntax, Semantics, Hoare Logics and Verification Environment author = Norbert Schirmer <> date = 2008-02-29 topic = Computer science/Programming languages/Language definitions, Computer science/Programming languages/Logics license = LGPL abstract = We present the theory of Simpl, a sequential imperative programming language. We introduce its syntax, its semantics (big and small-step operational semantics) and Hoare logics for both partial as well as total correctness. We prove soundness and completeness of the Hoare logic. We integrate and automate the Hoare logic in Isabelle/HOL to obtain a practically usable verification environment for imperative programs. Simpl is independent of a concrete programming language but expressive enough to cover all common language features: mutually recursive procedures, abrupt termination and exceptions, runtime faults, local and global variables, pointers and heap, expressions with side effects, pointers to procedures, partial application and closures, dynamic method invocation and also unbounded nondeterminism. 
notify = kleing@cse.unsw.edu.au, norbert.schirmer@web.de [Separation_Algebra] title = Separation Algebra author = Gerwin Klein , Rafal Kolanski , Andrew Boyton date = 2012-05-11 topic = Computer science/Programming languages/Logics license = BSD abstract = We present a generic type class implementation of separation algebra for Isabelle/HOL as well as lemmas and generic tactics which can be used directly for any instantiation of the type class.

The ex directory contains example instantiations that include structures such as a heap or virtual memory.

The abstract separation algebra is based upon "Abstract Separation Logic" by Calcagno et al. These theories are also the basis of the ITP 2012 rough diamond "Mechanised Separation Algebra" by the authors.

The aim of this work is to support and significantly reduce the effort for future separation logic developments in Isabelle/HOL by factoring out the part of separation logic that can be treated abstractly once and for all. This includes developing typical default rule sets for reasoning as well as automated tactic support for separation logic. notify = kleing@cse.unsw.edu.au, rafal.kolanski@nicta.com.au [Separation_Logic_Imperative_HOL] title = A Separation Logic Framework for Imperative HOL author = Peter Lammich , Rene Meis date = 2012-11-14 topic = Computer science/Programming languages/Logics license = BSD abstract = We provide a framework for separation-logic based correctness proofs of Imperative HOL programs. Our framework comes with a set of proof methods to automate canonical tasks such as verification condition generation and frame inference. Moreover, we provide a set of examples that show the applicability of our framework. The examples include algorithms on lists, hash-tables, and union-find trees. We also provide abstract interfaces for lists, maps, and sets that allow one to develop generic imperative algorithms and use data-refinement techniques.
As we target Imperative HOL, our programs can be translated to efficiently executable code in various target languages, including ML, OCaml, Haskell, and Scala. notify = lammich@in.tum.de [Inductive_Confidentiality] title = Inductive Study of Confidentiality author = Giampaolo Bella date = 2012-05-02 topic = Computer science/Security abstract = This document contains the full theory files accompanying article Inductive Study of Confidentiality --- for Everyone in Formal Aspects of Computing. They aim at an illustrative and didactic presentation of the Inductive Method of protocol analysis, focusing on the treatment of one of the main goals of security protocols: confidentiality against a threat model. The treatment of confidentiality, which in fact forms a key aspect of all protocol analysis tools, has been found cryptic by many learners of the Inductive Method, hence the motivation for this work. The theory files in this document guide the reader step by step towards design and proof of significant confidentiality theorems. These are developed against two threat models, the standard Dolev-Yao and a more audacious one, the General Attacker, which turns out to be particularly useful also for teaching purposes. notify = giamp@dmi.unict.it [Possibilistic_Noninterference] title = Possibilistic Noninterference author = Andrei Popescu , Johannes Hölzl date = 2012-09-10 topic = Computer science/Security, Computer science/Programming languages/Type systems abstract = We formalize a wide variety of Volpano/Smith-style noninterference notions for a while language with parallel composition. We systematize and classify these notions according to compositionality w.r.t. the language constructs. Compositionality yields sound syntactic criteria (a.k.a. type systems) in a uniform way.

An article about these proofs is published in the proceedings of the conference Certified Programs and Proofs 2012. notify = hoelzl@in.tum.de [SIFUM_Type_Systems] title = A Formalization of Assumptions and Guarantees for Compositional Noninterference author = Sylvia Grewe , Heiko Mantel , Daniel Schoepe date = 2014-04-23 topic = Computer science/Security, Computer science/Programming languages/Type systems abstract = Research in information-flow security aims at developing methods to identify undesired information leaks within programs from private (high) sources to public (low) sinks. For a concurrent system, it is desirable to have compositional analysis methods that allow for analyzing each thread independently and that nevertheless guarantee that the parallel composition of successfully analyzed threads satisfies a global security guarantee. However, such a compositional analysis should not be overly pessimistic about what an environment might do with shared resources. Otherwise, the analysis will reject many intuitively secure programs.

The paper "Assumptions and Guarantees for Compositional Noninterference" by Mantel et al. presents one solution for this problem: an approach for compositionally reasoning about non-interference in concurrent programs via rely-guarantee-style reasoning. We present an Isabelle/HOL formalization of the concepts and proofs of this approach. notify = [Dependent_SIFUM_Type_Systems] title = A Dependent Security Type System for Concurrent Imperative Programs author = Toby Murray , Robert Sison<>, Edward Pierzchalski<>, Christine Rizkallah notify = toby.murray@unimelb.edu.au date = 2016-06-25 topic = Computer science/Security, Computer science/Programming languages/Type systems abstract = The paper "Compositional Verification and Refinement of Concurrent Value-Dependent Noninterference" by Murray et al. (CSF 2016) presents a dependent security type system for compositionally verifying a value-dependent noninterference property, defined in (Murray, PLAS 2015), for concurrent programs. This development formalises that security definition, the type system and its soundness proof, and demonstrates its application on some small examples. It was derived from the SIFUM_Type_Systems AFP entry by Sylvia Grewe, Heiko Mantel and Daniel Schoepe, whose structure it inherits. extra-history = Change history: [2016-08-19]: Removed unused "stop" parameter and "stop_no_eval" assumption from the sifum_security locale. (revision dbc482d36372) [2016-09-27]: Added security locale support for the imposition of requirements on the initial memory. (revision cce4ceb74ddb) [Dependent_SIFUM_Refinement] title = Compositional Security-Preserving Refinement for Concurrent Imperative Programs author = Toby Murray , Robert Sison<>, Edward Pierzchalski<>, Christine Rizkallah notify = toby.murray@unimelb.edu.au date = 2016-06-28 topic = Computer science/Security abstract = The paper "Compositional Verification and Refinement of Concurrent Value-Dependent Noninterference" by Murray et al. 
(CSF 2016) presents a compositional theory of refinement for a value-dependent noninterference property, defined in (Murray, PLAS 2015), for concurrent programs. This development formalises that refinement theory, and demonstrates its application on some small examples. extra-history = Change history: [2016-08-19]: Removed unused "stop" parameters from the sifum_refinement locale. (revision dbc482d36372) [2016-09-02]: TobyM extended "simple" refinement theory to be usable for all bisimulations. (revision 547f31c25f60) [Relational-Incorrectness-Logic] title = An Under-Approximate Relational Logic author = Toby Murray topic = Computer science/Programming languages/Logics, Computer science/Security date = 2020-03-12 notify = toby.murray@unimelb.edu.au abstract = Recently, authors have proposed under-approximate logics for reasoning about programs. So far, all such logics have been confined to reasoning about individual program behaviours. Yet there exist many over-approximate relational logics for reasoning about pairs of programs and relating their behaviours. We present the first under-approximate relational logic, for the simple imperative language IMP. We prove our logic is both sound and complete. Additionally, we show how reasoning in this logic can be decomposed into non-relational reasoning in an under-approximate Hoare logic, mirroring Beringer’s result for over-approximate relational logics. We illustrate the application of our logic on some small examples in which we provably demonstrate the presence of insecurity. [Strong_Security] title = A Formalization of Strong Security author = Sylvia Grewe , Alexander Lux , Heiko Mantel , Jens Sauer date = 2014-04-23 topic = Computer science/Security, Computer science/Programming languages/Type systems abstract = Research in information-flow security aims at developing methods to identify undesired information leaks within programs from private sources to public sinks. Noninterference captures this intuition. 
Strong security from Sabelfeld and Sands formalizes noninterference for concurrent systems.

We present an Isabelle/HOL formalization of strong security for arbitrary security lattices (Sabelfeld and Sands use a two-element security lattice in the original publication). The formalization includes compositionality proofs for strong security and a soundness proof for a security type system that checks strong security for programs in a simple while language with dynamic thread creation.

Our formalization of the security type system is abstract in the language for expressions and in the semantic side conditions for expressions. It can easily be instantiated with different syntactic approximations for these side conditions. The soundness proof of such an instantiation boils down to showing that these syntactic approximations imply the semantic side conditions. notify = [WHATandWHERE_Security] title = A Formalization of Declassification with WHAT-and-WHERE-Security author = Sylvia Grewe , Alexander Lux , Heiko Mantel , Jens Sauer date = 2014-04-23 topic = Computer science/Security, Computer science/Programming languages/Type systems abstract = Research in information-flow security aims at developing methods to identify undesired information leaks within programs from private sources to public sinks. Noninterference captures this intuition by requiring that no information whatsoever flows from private sources to public sinks. However, in practice this definition is often too strict: Depending on the intuitive desired security policy, the controlled declassification of certain private information (WHAT) at certain points in the program (WHERE) might not result in an undesired information leak.

We present an Isabelle/HOL formalization of such a security property for controlled declassification, namely WHAT&WHERE-security from "Scheduler-Independent Declassification" by Lux, Mantel, and Perner. The formalization includes compositionality proofs for WHAT&WHERE-security and a soundness proof for a security type system that checks WHAT&WHERE-security for programs in a simple while language with dynamic thread creation.

Our formalization of the security type system is abstract in the language for expressions and in the semantic side conditions for expressions. It can easily be instantiated with different syntactic approximations for these side conditions. The soundness proof of such an instantiation boils down to showing that these syntactic approximations imply the semantic side conditions.

This Isabelle/HOL formalization uses theories from the entry Strong Security. notify = [VolpanoSmith] title = A Correctness Proof for the Volpano/Smith Security Typing System author = Gregor Snelting , Daniel Wasserrab date = 2008-09-02 topic = Computer science/Programming languages/Type systems, Computer science/Security abstract = The Volpano/Smith/Irvine security type system requires that variables are annotated as high (secret) or low (public), and provides typing rules which guarantee that secret values cannot leak to public output ports. This property of a program is called confidentiality. For a simple while-language without threads, our proof shows that typeability in the Volpano/Smith system guarantees noninterference. Noninterference means that if two initial states for program execution are low-equivalent, then the final states are low-equivalent as well. This indeed implies that secret values cannot leak to public ports. The proof defines an abstract syntax and operational semantics for programs, formalizes noninterference, and then proceeds by rule induction on the operational semantics. The mathematically most intricate part is the treatment of implicit flows. Note that the Volpano/Smith system is not flow-sensitive and thus quite imprecise, resulting in false alarms. However, due to the correctness property, all potential breaks of confidentiality are discovered. notify = [Abstract-Hoare-Logics] title = Abstract Hoare Logics author = Tobias Nipkow date = 2006-08-08 topic = Computer science/Programming languages/Logics abstract = These theories describe Hoare logics for a number of imperative language constructs, from while-loops to mutually recursive procedures. Both partial and total correctness are treated. In particular, a proof system for total correctness of recursive procedures in the presence of unbounded nondeterminism is presented. 
notify = nipkow@in.tum.de [Stone_Algebras] title = Stone Algebras author = Walter Guttmann notify = walter.guttmann@canterbury.ac.nz date = 2016-09-06 topic = Mathematics/Order abstract = A range of algebras between lattices and Boolean algebras generalise the notion of a complement. We develop a hierarchy of these pseudo-complemented algebras that includes Stone algebras. Independently of this theory we study filters based on partial orders. Both theories are combined to prove Chen and Grätzer's construction theorem for Stone algebras. The latter involves extensive reasoning about algebraic structures in addition to reasoning in algebraic structures. [Kleene_Algebra] title = Kleene Algebra author = Alasdair Armstrong <>, Georg Struth , Tjark Weber date = 2013-01-15 topic = Computer science/Programming languages/Logics, Computer science/Automata and formal languages, Mathematics/Algebra abstract = These files contain a formalisation of variants of Kleene algebras and their most important models as axiomatic type classes in Isabelle/HOL. Kleene algebras are foundational structures in computing with applications ranging from automata and language theory to computational modeling, program construction and verification.

We start with formalising dioids, which are additively idempotent semirings, and expand them by axiomatisations of the Kleene star for finite iteration and an omega operation for infinite iteration. We show that powersets over a given monoid, (regular) languages, sets of paths in a graph, sets of computation traces, binary relations and formal power series form Kleene algebras, and consider further models based on lattices, max-plus semirings and min-plus semirings. We also demonstrate that dioids are closed under the formation of matrices (proofs for Kleene algebras remain to be completed).

On the one hand, we have aimed at a reference formalisation of variants of Kleene algebras that covers a wide range of variants and the core theorems in a structured and modular way and provides readable proofs at textbook level. On the other hand, we intend to use this algebraic hierarchy and its models as a generic algebraic middle-layer from which programming applications can quickly be explored, implemented and verified. notify = g.struth@sheffield.ac.uk, tjark.weber@it.uu.se [KAT_and_DRA] title = Kleene Algebra with Tests and Demonic Refinement Algebras author = Alasdair Armstrong <>, Victor B. F. Gomes , Georg Struth date = 2014-01-23 topic = Computer science/Programming languages/Logics, Computer science/Automata and formal languages, Mathematics/Algebra abstract = We formalise Kleene algebra with tests (KAT) and demonic refinement algebra (DRA) in Isabelle/HOL. KAT is relevant for program verification and correctness proofs in the partial correctness setting, while DRA targets similar applications in the context of total correctness. Our formalisation contains the two most important models of these algebras: binary relations in the case of KAT and predicate transformers in the case of DRA. In addition, we derive the inference rules for Hoare logic in KAT and its relational model and present a simple formally verified program verification tool prototype based on the algebraic approach. notify = g.struth@dcs.shef.ac.uk [KAD] title = Kleene Algebras with Domain author = Victor B. F. Gomes , Walter Guttmann , Peter Höfner , Georg Struth , Tjark Weber date = 2016-04-12 topic = Computer science/Programming languages/Logics, Computer science/Automata and formal languages, Mathematics/Algebra abstract = Kleene algebras with domain are Kleene algebras endowed with an operation that maps each element of the algebra to its domain of definition (or its complement) in abstract fashion. 
They form a simple algebraic basis for Hoare logics, dynamic logics or predicate transformer semantics. We formalise a modular hierarchy of algebras with domain and antidomain (domain complement) operations in Isabelle/HOL that ranges from domain and antidomain semigroups to modal Kleene algebras and divergence Kleene algebras. We link these algebras with models of binary relations and program traces. We include some examples from modal logics, termination and program analysis. notify = walter.guttman@canterbury.ac.nz, g.struth@sheffield.ac.uk, tjark.weber@it.uu.se [Regular_Algebras] title = Regular Algebras author = Simon Foster , Georg Struth date = 2014-05-21 topic = Computer science/Automata and formal languages, Mathematics/Algebra abstract = Regular algebras axiomatise the equational theory of regular expressions as induced by regular language identity. We use Isabelle/HOL for a detailed systematic study of regular algebras given by Boffa, Conway, Kozen and Salomaa. We investigate the relationships between these classes, formalise a soundness proof for the smallest class (Salomaa's) and obtain completeness of the largest one (Boffa's) relative to a deep result by Krob. In addition we provide a large collection of regular identities in the general setting of Boffa's axiom. Our regular algebra hierarchy is orthogonal to the Kleene algebra hierarchy in the Archive of Formal Proofs; we have not aimed at an integration for pragmatic reasons. notify = simon.foster@york.ac.uk, g.struth@sheffield.ac.uk [BytecodeLogicJmlTypes] title = A Bytecode Logic for JML and Types author = Lennart Beringer <>, Martin Hofmann date = 2008-12-12 topic = Computer science/Programming languages/Logics abstract = This document contains the Isabelle/HOL sources underlying the paper A bytecode logic for JML and types by Beringer and Hofmann, updated to Isabelle 2008. 
We present a program logic for a subset of sequential Java bytecode that is suitable for representing both features found in the high-level specification language JML and interpretations of high-level type systems. To this end, we introduce a fine-grained collection of assertions, including strong invariants, local annotations and VDM-reminiscent partial-correctness specifications. Thanks to a goal-oriented structure and interpretation of judgements, verification may proceed without recourse to an additional control flow analysis. The suitability for interpreting intensional type systems is illustrated by the proof-carrying-code style encoding of a type system for a first-order functional language which guarantees a constant upper bound on the number of objects allocated throughout an execution, be the execution terminating or non-terminating. Like the published paper, the formal development is restricted to a comparatively small subset of the JVML, lacking (among other features) exceptions, arrays, virtual methods, and static fields. This shortcoming has meanwhile been overcome, as our paper has formed the basis of the Mobius base logic, a program logic for the full sequential fragment of the JVML. Indeed, the present formalisation formed the basis of a subsequent formalisation of the Mobius base logic in the proof assistant Coq, which includes a proof of soundness with respect to the Bicolano operational semantics by Pichardie. notify = [DataRefinementIBP] title = Semantics and Data Refinement of Invariant Based Programs author = Viorel Preoteasa , Ralph-Johan Back date = 2010-05-28 topic = Computer science/Programming languages/Logics abstract = Invariant based programming is a technique of constructing correct programs by first identifying the basic situations (pre- and post-conditions and invariants) that can occur during the execution of the program, and then defining the transitions and proving that they preserve the invariants. 
Data refinement is a technique of building correct programs working on concrete datatypes as refinements of more abstract programs. In the theories presented here we formalize the predicate transformer semantics for invariant based programs and their data refinement. extra-history = Change history: [2012-01-05]: Moved some general complete lattice properties to the AFP entry Lattice Properties. Changed the definition of the data refinement relation to be more general and updated all corresponding theorems. Added new syntax for demonic and angelic update statements. notify = viorel.preoteasa@aalto.fi [RefinementReactive] title = Formalization of Refinement Calculus for Reactive Systems author = Viorel Preoteasa date = 2014-10-08 topic = Computer science/Programming languages/Logics abstract = We present a formalization of refinement calculus for reactive systems. Refinement calculus is based on monotonic predicate transformers (monotonic functions from sets of post-states to sets of pre-states), and it is a powerful formalism for reasoning about imperative programs. We model reactive systems as monotonic property transformers that transform sets of output infinite sequences into sets of input infinite sequences. Within this semantics we can model refinement of reactive systems, (unbounded) angelic and demonic nondeterminism, sequential composition, and other semantic properties. We can model systems that may fail for some inputs, and we can model compatibility of systems. We can specify systems that have liveness properties using linear temporal logic, and we can refine system specifications into systems based on symbolic transitions systems, suitable for implementations. 
notify = viorel.preoteasa@aalto.fi [SIFPL] title = Secure information flow and program logics author = Lennart Beringer <>, Martin Hofmann date = 2008-11-10 topic = Computer science/Programming languages/Logics, Computer science/Security abstract = We present interpretations of type systems for secure information flow in Hoare logic, complementing previous encodings in relational program logics. We first treat the imperative language IMP, extended by a simple procedure call mechanism. For this language we consider base-line non-interference in the style of Volpano et al. and the flow-sensitive type system by Hunt and Sands. In both cases, we show how typing derivations may be used to automatically generate proofs in the program logic that certify the absence of illicit flows. We then add instructions for object creation and manipulation, and derive appropriate proof rules for base-line non-interference. As a consequence of our work, standard verification technology may be used for verifying that a concrete program satisfies the non-interference property.

The present proof development represents an update of the formalisation underlying our paper [CSF 2007] and is intended to resolve any ambiguities that may be present in the paper. notify = lennart.beringer@ifi.lmu.de [TLA] title = A Definitional Encoding of TLA* in Isabelle/HOL author = Gudmund Grov , Stephan Merz date = 2011-11-19 topic = Computer science/Programming languages/Logics abstract = We mechanise the logic TLA* [Merz 1999], an extension of Lamport's Temporal Logic of Actions (TLA) [Lamport 1994] for specifying and reasoning about concurrent and reactive systems. Aiming at a framework for mechanising the verification of TLA (or TLA*) specifications, this contribution reuses some elements from a previous axiomatic encoding of TLA in Isabelle/HOL by the second author [Merz 1998], which has been part of the Isabelle distribution. In contrast to that previous work, we give here a shallow, definitional embedding, with the following highlights:

  • a theory of infinite sequences, including a formalisation of the concepts of stuttering invariance central to TLA and TLA*;
  • a definition of the semantics of TLA*, which extends TLA by a mutually-recursive definition of formulas and pre-formulas, generalising TLA action formulas;
  • a substantial set of derived proof rules, including the TLA* axioms and Lamport's proof rules for system verification;
  • a set of examples illustrating the usage of Isabelle/TLA* for reasoning about systems.
Note that this work is unrelated to the ongoing development of a proof system for the specification language TLA+, which includes an encoding of TLA+ as a new Isabelle object logic [Chaudhuri et al 2010]. notify = ggrov@inf.ed.ac.uk [Compiling-Exceptions-Correctly] title = Compiling Exceptions Correctly author = Tobias Nipkow date = 2004-07-09 topic = Computer science/Programming languages/Compiling abstract = An exception compilation scheme that dynamically creates and removes exception handler entries on the stack. A formalization of an article of the same name by Hutton and Wright. notify = nipkow@in.tum.de [NormByEval] title = Normalization by Evaluation author = Klaus Aehlig , Tobias Nipkow date = 2008-02-18 topic = Computer science/Programming languages/Compiling abstract = This article formalizes normalization by evaluation as implemented in Isabelle. Lambda calculus plus term rewriting is compiled into a functional program with pattern matching. It is proved that the result of a successful evaluation is a) correct, i.e. equivalent to the input, and b) in normal form. notify = nipkow@in.tum.de [Program-Conflict-Analysis] title = Formalization of Conflict Analysis of Programs with Procedures, Thread Creation, and Monitors topic = Computer science/Programming languages/Static analysis author = Peter Lammich , Markus Müller-Olm date = 2007-12-14 abstract = In this work we formally verify, using the Isabelle theorem prover, the soundness and precision of a static program analysis that detects conflicts (e.g., data races) in programs with procedures, thread creation, and monitors. As common in static program analysis, our program model abstracts guarded branching by nondeterministic branching, but completely interprets the call-/return behavior of procedures, synchronization by monitors, and thread creation. The analysis is based on the observation that all conflicts already occur in a class of particularly restricted schedules.
These restricted schedules are suited to constraint-system-based program analysis. The formalization is based upon a flowgraph-based program model with an operational semantics as reference point. notify = peter.lammich@uni-muenster.de [Shivers-CFA] title = Shivers' Control Flow Analysis topic = Computer science/Programming languages/Static analysis author = Joachim Breitner date = 2010-11-16 abstract = In his dissertation, Olin Shivers introduces a concept of control flow graphs for functional languages, provides an algorithm to statically derive a safe approximation of the control flow graph and proves this algorithm correct. In this research project, Shivers' algorithms and proofs are formalized in the HOLCF extension of HOL. notify = mail@joachim-breitner.de, nipkow@in.tum.de [Slicing] title = Towards Certified Slicing author = Daniel Wasserrab date = 2008-09-16 topic = Computer science/Programming languages/Static analysis abstract = Slicing is a widely-used technique with applications in e.g. compiler technology and software security. Thus verification of algorithms in these areas is often based on the correctness of slicing, which should ideally be proven independent of concrete programming languages and with the help of well-established verification tools such as proof assistants. As a first step in this direction, this contribution presents a framework for dynamic and static intraprocedural slicing based on control flow and program dependence graphs. Abstracting from concrete syntax we base the framework on a graph representation of the program fulfilling certain structural and well-formedness properties.

The formalization consists of the basic framework (in subdirectory Basic/), the correctness proof for dynamic slicing (in subdirectory Dynamic/), the correctness proof for static intraprocedural slicing (in subdirectory StaticIntra/) and instantiations of the framework with a simple While language (in subdirectory While/) and the sophisticated object-oriented bytecode language of Jinja (in subdirectory JinjaVM/). For more information on the framework, see the TPHOLs 2008 paper by Wasserrab and Lochbihler and the PLAS 2009 paper by Wasserrab et al. notify = [HRB-Slicing] title = Backing up Slicing: Verifying the Interprocedural Two-Phase Horwitz-Reps-Binkley Slicer author = Daniel Wasserrab date = 2009-11-13 topic = Computer science/Programming languages/Static analysis abstract = After verifying dynamic and static interprocedural slicing, we present a modular framework for static interprocedural slicing. To this end, we formalized the standard two-phase slicer from Horwitz, Reps and Binkley (see their TOPLAS 12(1) 1990 paper) together with summary edges as presented by Reps et al. (see FSE 1994). The framework is again modular in the programming language by using an abstract CFG, defined via structural and well-formedness properties. Using a weak simulation between the original and sliced graph, we were able to prove the correctness of static interprocedural slicing. We also instantiate our framework with a simple While language with procedures. This shows that the chosen abstractions are indeed valid. notify = nipkow@in.tum.de [WorkerWrapper] title = The Worker/Wrapper Transformation author = Peter Gammie date = 2009-10-30 topic = Computer science/Programming languages/Transformations abstract = Gill and Hutton formalise the worker/wrapper transformation, building on the work of Launchbury and Peyton-Jones who developed it as a way of changing the type at which a recursive function operates.
This development establishes the soundness of the technique and presents several examples of its use. notify = peteg42@gmail.com, nipkow@in.tum.de [JiveDataStoreModel] title = Jive Data and Store Model author = Nicole Rauch , Norbert Schirmer <> date = 2005-06-20 license = LGPL topic = Computer science/Programming languages/Misc abstract = This document presents the formalization of an object-oriented data and store model in Isabelle/HOL. This model is being used in the Java Interactive Verification Environment, Jive. notify = kleing@cse.unsw.edu.au, schirmer@in.tum.de [HotelKeyCards] title = Hotel Key Card System author = Tobias Nipkow date = 2006-09-09 topic = Computer science/Security abstract = Two models of an electronic hotel key card system are contrasted: a state-based and a trace-based one. Both are defined, verified, and proved equivalent in the theorem prover Isabelle/HOL. It is shown that if a guest follows a certain safety policy regarding her key cards, she can be sure that nobody but her can enter her room. notify = nipkow@in.tum.de [RSAPSS] title = SHA1, RSA, PSS and more author = Christina Lindenberg <>, Kai Wirt <> date = 2005-05-02 topic = Computer science/Security/Cryptography abstract = Formal verification is becoming increasingly important in computer science. However, the state-of-the-art formal verification methods in cryptography are very rudimentary. These theories are one step towards providing a toolbox that allows the use of formal methods in every aspect of cryptography. Moreover, we present a proof of concept for the feasibility of applying verification techniques to a standard signature algorithm. notify = nipkow@in.tum.de [InformationFlowSlicing] title = Information Flow Noninterference via Slicing author = Daniel Wasserrab date = 2010-03-23 topic = Computer science/Security abstract =

In this contribution, we show how correctness proofs for intra- and interprocedural slicing can be used to prove that slicing is able to guarantee information flow noninterference. Moreover, we also illustrate how to lift the control flow graphs of the respective frameworks such that they fulfil the additional assumptions needed in the noninterference proofs. A detailed description of the intraprocedural proof and its interplay with the slicing framework can be found in the PLAS'09 paper by Wasserrab et al.

This entry contains the part for intra-procedural slicing. See entry InformationFlowSlicing_Inter for the inter-procedural part.

extra-history = Change history: [2016-06-10]: The original entry InformationFlowSlicing, which contained both the inter- and intra-procedural cases, was split into two for easier maintenance. notify = [InformationFlowSlicing_Inter] title = Inter-Procedural Information Flow Noninterference via Slicing author = Daniel Wasserrab date = 2010-03-23 topic = Computer science/Security abstract =

In this contribution, we show how correctness proofs for intra- and interprocedural slicing can be used to prove that slicing is able to guarantee information flow noninterference. Moreover, we also illustrate how to lift the control flow graphs of the respective frameworks such that they fulfil the additional assumptions needed in the noninterference proofs. A detailed description of the intraprocedural proof and its interplay with the slicing framework can be found in the PLAS'09 paper by Wasserrab et al.

This entry contains the part for inter-procedural slicing. See entry InformationFlowSlicing for the intra-procedural part.

extra-history = Change history: [2016-06-10]: The original entry InformationFlowSlicing, which contained both the inter- and intra-procedural cases, was split into two for easier maintenance. notify = [ComponentDependencies] title = Formalisation and Analysis of Component Dependencies author = Maria Spichkova date = 2014-04-28 topic = Computer science/System description languages abstract = This set of theories presents a formalisation in Isabelle/HOL of data dependencies between components. The approach supports an analysis of the system structure oriented towards efficient checking: for a concrete system, it aims to determine which parts of the system are necessary to check a given property. notify = maria.spichkova@rmit.edu.au [Verified-Prover] title = A Mechanically Verified, Efficient, Sound and Complete Theorem Prover For First Order Logic author = Tom Ridge <> date = 2004-09-28 topic = Logic/General logic/Mechanization of proofs abstract = Soundness and completeness for a system of first order logic are formally proved, building on James Margetson's formalization of work by Wainer and Wallen. The completeness proofs naturally suggest an algorithm to derive proofs. This algorithm, which can be implemented tail recursively, is formalized in Isabelle/HOL. The algorithm can be executed via the rewriting tactics of Isabelle. Alternatively, the definitions can be exported to OCaml, yielding a directly executable program. notify = lp15@cam.ac.uk [Completeness] title = Completeness theorem author = James Margetson <>, Tom Ridge <> date = 2004-09-20 topic = Logic/Proof theory abstract = The completeness of first-order logic is proved, following the first five pages of Wainer and Wallen's chapter of the book Proof Theory by Aczel et al., CUP, 1992. Their presentation of formulas allows the proofs to use symmetry arguments. Margetson formalized this theorem by early 2000. The Isar conversion is thanks to Tom Ridge. A paper describing the formalization is available [pdf].
notify = lp15@cam.ac.uk [Ordinal] title = Countable Ordinals author = Brian Huffman date = 2005-11-11 topic = Logic/Set theory abstract = This development defines a well-ordered type of countable ordinals. It includes notions of continuous and normal functions, recursively defined functions over ordinals, least fixed-points, and derivatives. Much of ordinal arithmetic is formalized, including exponentials and logarithms. The development concludes with formalizations of Cantor Normal Form and Veblen hierarchies over normal functions. notify = lcp@cl.cam.ac.uk [Ordinals_and_Cardinals] title = Ordinals and Cardinals author = Andrei Popescu date = 2009-09-01 topic = Logic/Set theory abstract = We develop a basic theory of ordinals and cardinals in Isabelle/HOL, up to the point where some cardinality facts relevant for the "working mathematician" become available. Unlike in set theory, here we do not have at hand canonical notions of ordinal and cardinal. Therefore, here an ordinal is merely a well-order relation and a cardinal is an ordinal that is minimal w.r.t. order embedding on its field. extra-history = Change history: [2012-09-25]: This entry has been discontinued because it is now part of the Isabelle distribution. notify = uuomul@yahoo.com, nipkow@in.tum.de [FOL-Fitting] title = First-Order Logic According to Fitting author = Stefan Berghofer contributors = Asta Halkjær From date = 2007-08-02 topic = Logic/General logic/Classical first-order logic abstract = We present a formalization of parts of Melvin Fitting's book "First-Order Logic and Automated Theorem Proving". The formalization covers the syntax of first-order logic, its semantics, the model existence theorem, a natural deduction proof calculus together with a proof of correctness and completeness, as well as the Löwenheim-Skolem theorem. extra-history = Change history: [2018-07-21]: Proved completeness theorem for open formulas. Proofs are now written in the declarative style.
Enumeration of pairs and datatypes is automated using the Countable theory. notify = berghofe@in.tum.de [Epistemic_Logic] title = Epistemic Logic: Completeness of Modal Logics author = Asta Halkjær From topic = Logic/General logic/Logics of knowledge and belief date = 2018-10-29 notify = ahfrom@dtu.dk abstract = This work is a formalization of epistemic logic with countably many agents. It includes proofs of soundness and completeness for the axiom system K. The completeness proof is based on the textbook "Reasoning About Knowledge" by Fagin, Halpern, Moses and Vardi (MIT Press 1995). The extensions of system K (T, KB, K4, S4, S5) and their completeness proofs are based on the textbook "Modal Logic" by Blackburn, de Rijke and Venema (Cambridge University Press 2001). extra-history = Change history: [2021-04-15]: Added completeness of modal logics T, KB, K4, S4 and S5. [SequentInvertibility] title = Invertibility in Sequent Calculi author = Peter Chapman <> date = 2009-08-28 topic = Logic/Proof theory license = LGPL abstract = The invertibility of the rules of a sequent calculus is important for guiding proof search and can be used in some formalised proofs of Cut admissibility. We present sufficient conditions for when a rule is invertible with respect to a calculus. We illustrate the conditions with examples. It must be noted that we give purely syntactic criteria; no guarantees are given as to the suitability of the rules. notify = pc@cs.st-andrews.ac.uk, nipkow@in.tum.de [LinearQuantifierElim] title = Quantifier Elimination for Linear Arithmetic author = Tobias Nipkow date = 2008-01-11 topic = Logic/General logic/Decidability of theories abstract = This article formalizes quantifier elimination procedures for dense linear orders, linear real arithmetic and Presburger arithmetic.
In each case both a DNF-based non-elementary algorithm and one or more (doubly) exponential NNF-based algorithms are formalized, including the well-known algorithms by Ferrante and Rackoff and by Cooper. The NNF-based algorithms for dense linear orders are new but based on Ferrante and Rackoff and on an algorithm by Loos and Weispfenning which simulates infinitesimals. All algorithms are directly executable. In particular, they yield reflective quantifier elimination procedures for HOL itself. The formalization makes heavy use of locales and is therefore highly modular. notify = nipkow@in.tum.de [Nat-Interval-Logic] title = Interval Temporal Logic on Natural Numbers author = David Trachtenherz <> date = 2011-02-23 topic = Logic/General logic/Temporal logic abstract = We introduce a theory of temporal logic operators using sets of natural numbers as time domain, formalized in a shallow embedding manner. The theory comprises special natural intervals (theory IL_Interval: open and closed intervals, continuous and modulo intervals, interval traversing results), operators for shifting intervals to left/right on the number axis as well as expanding/contracting intervals by constant factors (theory IL_IntervalOperators.thy), and ultimately definitions and results for unary and binary temporal operators on arbitrary natural sets (theory IL_TemporalOperators). notify = nipkow@in.tum.de [Recursion-Theory-I] title = Recursion Theory I author = Michael Nedzelsky <> date = 2008-04-05 topic = Logic/Computability abstract = This document presents the formalization of introductory material from recursion theory --- definitions and basic properties of primitive recursive functions, the Cantor pairing function and computably enumerable sets (including a proof of existence of a one-complete computably enumerable set and a proof of Rice's theorem).
notify = MichaelNedzelsky@yandex.ru [Free-Boolean-Algebra] topic = Logic/General logic/Classical propositional logic title = Free Boolean Algebra author = Brian Huffman date = 2010-03-29 abstract = This theory defines a type constructor representing the free Boolean algebra over a set of generators. Values of type (α)formula represent propositional formulas with uninterpreted variables from type α, ordered by implication. In addition to all the standard Boolean algebra operations, the library also provides a function for building homomorphisms to any other Boolean algebra type. notify = brianh@cs.pdx.edu [Sort_Encodings] title = Sound and Complete Sort Encodings for First-Order Logic author = Jasmin Christian Blanchette , Andrei Popescu date = 2013-06-27 topic = Logic/General logic/Mechanization of proofs abstract = This is a formalization of the soundness and completeness properties for various efficient encodings of sorts in unsorted first-order logic used by Isabelle's Sledgehammer tool.

Essentially, the encodings proceed as follows: a many-sorted problem is decorated with (as few as possible) tags or guards that make the problem monotonic; then sorts can be soundly erased.

The development employs a formalization of many-sorted first-order logic in clausal form (clauses, structures and the basic properties of the satisfaction relation), which could be of interest as the starting point for other formalizations of first-order logic metatheory. notify = uuomul@yahoo.com [Lambda_Free_RPOs] title = Formalization of Recursive Path Orders for Lambda-Free Higher-Order Terms author = Jasmin Christian Blanchette , Uwe Waldmann , Daniel Wand date = 2016-09-23 topic = Logic/Rewriting abstract = This Isabelle/HOL formalization defines recursive path orders (RPOs) for higher-order terms without lambda-abstraction and proves many useful properties about them. The main order fully coincides with the standard RPO on first-order terms also in the presence of currying, distinguishing it from previous work. An optimized variant is formalized as well. It appears promising as the basis of a higher-order superposition calculus. notify = jasmin.blanchette@gmail.com [Lambda_Free_KBOs] title = Formalization of Knuth–Bendix Orders for Lambda-Free Higher-Order Terms author = Heiko Becker , Jasmin Christian Blanchette , Uwe Waldmann , Daniel Wand date = 2016-11-12 topic = Logic/Rewriting abstract = This Isabelle/HOL formalization defines Knuth–Bendix orders for higher-order terms without lambda-abstraction and proves many useful properties about them. The main order fully coincides with the standard transfinite KBO with subterm coefficients on first-order terms. It appears promising as the basis of a higher-order superposition calculus. notify = jasmin.blanchette@gmail.com [Lambda_Free_EPO] title = Formalization of the Embedding Path Order for Lambda-Free Higher-Order Terms author = Alexander Bentkamp topic = Logic/Rewriting date = 2018-10-19 notify = a.bentkamp@vu.nl abstract = This Isabelle/HOL formalization defines the Embedding Path Order (EPO) for higher-order terms without lambda-abstraction and proves many useful properties about it. 
In contrast to the lambda-free recursive path orders, it does not fully coincide with RPO on first-order terms, but it is compatible with arbitrary higher-order contexts. [Nested_Multisets_Ordinals] title = Formalization of Nested Multisets, Hereditary Multisets, and Syntactic Ordinals author = Jasmin Christian Blanchette , Mathias Fleury , Dmitriy Traytel date = 2016-11-12 topic = Logic/Rewriting abstract = This Isabelle/HOL formalization introduces a nested multiset datatype and defines Dershowitz and Manna's nested multiset order. The order is proved well founded and linear. By removing one constructor, we transform the nested multisets into hereditary multisets. These are isomorphic to the syntactic ordinals—the ordinals can be recursively expressed in Cantor normal form. Addition, subtraction, multiplication, and linear orders are provided on this type. notify = jasmin.blanchette@gmail.com [Abstract-Rewriting] title = Abstract Rewriting topic = Logic/Rewriting date = 2010-06-14 author = Christian Sternagel , René Thiemann license = LGPL abstract = We present an Isabelle formalization of abstract rewriting (see, e.g., the book by Baader and Nipkow). First, we define standard relations like joinability, meetability, conversion, etc. Then, we formalize important properties of abstract rewrite systems, e.g., confluence and strong normalization. Our main concern is on strong normalization, since this formalization is the basis of CeTA (which is mainly about strong normalization of term rewrite systems). Hence lemmas involving strong normalization constitute by far the biggest part of this theory. One of those is Newman's lemma. extra-history = Change history: [2010-09-17]: Added theories defining several (ordered) semirings related to strong normalization and giving some standard instances.
[2013-10-16]: Generalized delta-orders from rationals to Archimedean fields. notify = christian.sternagel@uibk.ac.at, rene.thiemann@uibk.ac.at [First_Order_Terms] title = First-Order Terms author = Christian Sternagel , René Thiemann topic = Logic/Rewriting, Computer science/Algorithms license = LGPL date = 2018-02-06 notify = c.sternagel@gmail.com, rene.thiemann@uibk.ac.at abstract = We formalize basic results on first-order terms, including matching and a first-order unification algorithm, as well as well-foundedness of the subsumption order. This entry is part of the Isabelle Formalization of Rewriting IsaFoR, where first-order terms are omni-present: the unification algorithm is used to certify several confluence and termination techniques, like critical-pair computation and dependency graph approximations; and the subsumption order is a crucial ingredient for completion. [Free-Groups] title = Free Groups author = Joachim Breitner date = 2010-06-24 topic = Mathematics/Algebra abstract = Free Groups are, in a sense, the most generic kind of group. They are defined over a set of generators with no additional relations between them. They play an important role in the definition of group presentations and in other fields. This theory provides the definition of Free Group as the set of fully canceled words in the generators. The universal property is proven, as well as some isomorphism results about Free Groups. extra-history = Change history: [2011-12-11]: Added the Ping Pong Lemma. notify = [CofGroups] title = An Example of a Cofinitary Group in Isabelle/HOL author = Bart Kastermans date = 2009-08-04 topic = Mathematics/Algebra abstract = We formalize the usual proof that the group generated by the function k -> k + 1 on the integers gives rise to a cofinitary group.
notify = nipkow@in.tum.de [Finitely_Generated_Abelian_Groups] title = Finitely Generated Abelian Groups author = Joseph Thommes<>, Manuel Eberl topic = Mathematics/Algebra date = 2021-07-07 notify = joseph-thommes@gmx.de, manuel@pruvisto.org abstract = This article deals with the formalisation of some group-theoretic results including the fundamental theorem of finitely generated abelian groups characterising the structure of these groups as a uniquely determined product of cyclic groups. Both the invariant factor decomposition and the primary decomposition are covered. Additional work includes results about the direct product, the internal direct product and more group-theoretic lemmas. [Group-Ring-Module] title = Groups, Rings and Modules author = Hidetsune Kobayashi <>, L. Chen <>, H. Murao <> date = 2004-05-18 topic = Mathematics/Algebra abstract = The theory of groups, rings and modules is developed to a great depth. Group theory results include Zassenhaus's theorem and the Jordan-Hölder theorem. The ring theory development includes ideals, quotient rings and the Chinese remainder theorem. The module development includes the Nakayama lemma, exact sequences and tensor products. notify = lp15@cam.ac.uk [Robbins-Conjecture] title = A Complete Proof of the Robbins Conjecture author = Matthew Wampler-Doty <> date = 2010-05-22 topic = Mathematics/Algebra abstract = This document gives a formalization of the proof of the Robbins conjecture, following A. Mann, A Complete Proof of the Robbins Conjecture, 2003. notify = nipkow@in.tum.de [Valuation] title = Fundamental Properties of Valuation Theory and Hensel's Lemma author = Hidetsune Kobayashi <> date = 2007-08-08 topic = Mathematics/Algebra abstract = Convergence with respect to a valuation is discussed as convergence of a Cauchy sequence. Cauchy sequences of polynomials are defined. They are used to formalize Hensel's lemma.
notify = lp15@cam.ac.uk [Rank_Nullity_Theorem] title = Rank-Nullity Theorem in Linear Algebra author = Jose Divasón , Jesús Aransay topic = Mathematics/Algebra date = 2013-01-16 abstract = In this contribution, we present some formalizations based on the HOL-Multivariate-Analysis session of Isabelle. Firstly, a generalization of several theorems of such library is presented. Secondly, some definitions and proofs involving Linear Algebra and the four fundamental subspaces of a matrix are shown. Finally, we present a proof of the result known in Linear Algebra as the "Rank-Nullity Theorem", which states that, given any linear map f from a finite dimensional vector space V to a vector space W, the dimension of V is equal to the sum of the dimension of the kernel of f (which is a subspace of V) and the dimension of the range of f (which is a subspace of W). The proof presented here is based on the one given by Sheldon Axler in his book Linear Algebra Done Right. As a corollary of the previous theorem, and taking advantage of the relationship between linear maps and matrices, we prove that, for every matrix A (which has an associated linear map between finite dimensional vector spaces), the sum of the dimensions of its null space and its column space (which is equal to the range of the linear map) is equal to the number of columns of A. extra-history = Change history: [2014-07-14]: Added some generalizations that allow us to formalize the Rank-Nullity Theorem over finite dimensional vector spaces, instead of over the more particular euclidean spaces. Updated abstract. notify = jose.divasonm@unirioja.es, jesus-maria.aransay@unirioja.es [Affine_Arithmetic] title = Affine Arithmetic author = Fabian Immler date = 2014-02-07 topic = Mathematics/Analysis abstract = We give a formalization of affine forms as abstract representations of zonotopes. We provide affine operations as well as overapproximations of some non-affine operations like multiplication and division.
Expressions involving those operations can automatically be turned into (executable) functions approximating the original expression in affine arithmetic. extra-history = Change history: [2015-01-31]: added algorithm for zonotope/hyperplane intersection
[2017-09-20]: linear approximations for all symbols from the floatarith data type notify = immler@in.tum.de [Laplace_Transform] title = Laplace Transform author = Fabian Immler topic = Mathematics/Analysis date = 2019-08-14 notify = fimmler@cs.cmu.edu abstract = This entry formalizes the Laplace transform and concrete Laplace transforms for arithmetic functions, frequency shift, integration and (higher) differentiation in the time domain. It proves Lerch's lemma and uniqueness of the Laplace transform for continuous functions. In order to formalize the foundational assumptions, this entry contains a formalization of piecewise continuous functions and functions of exponential order. [Cauchy] title = Cauchy's Mean Theorem and the Cauchy-Schwarz Inequality author = Benjamin Porter <> date = 2006-03-14 topic = Mathematics/Analysis abstract = This document presents the mechanised proofs of two popular theorems attributed to Augustin Louis Cauchy - Cauchy's Mean Theorem and the Cauchy-Schwarz Inequality. notify = kleing@cse.unsw.edu.au [Integration] title = Integration theory and random variables author = Stefan Richter date = 2004-11-19 topic = Mathematics/Analysis abstract = Lebesgue-style integration plays a major role in advanced probability. We formalize concepts of elementary measure theory, real-valued random variables as Borel-measurable functions, and a stepwise inductive definition of the integral itself. All proofs are carried out in human readable style using the Isar language. extra-note = Note: This article is of historical interest only. Lebesgue-style integration and probability theory are now available as part of the Isabelle/HOL distribution (directory Probability). notify = richter@informatik.rwth-aachen.de, nipkow@in.tum.de, hoelzl@in.tum.de [Ordinary_Differential_Equations] title = Ordinary Differential Equations author = Fabian Immler , Johannes Hölzl topic = Mathematics/Analysis date = 2012-04-26 abstract =

Session Ordinary-Differential-Equations formalizes ordinary differential equations (ODEs) and initial value problems. This work comprises proofs for local and global existence of unique solutions (Picard-Lindelöf theorem). Moreover, it contains a formalization of the (continuous or even differentiable) dependency of the flow on initial conditions as the flow of ODEs.

Not in the generated document are the following sessions:

  • HOL-ODE-Numerics: Rigorous numerical algorithms for computing enclosures of solutions based on Runge-Kutta methods and affine arithmetic. Reachability analysis with splitting and reduction at hyperplanes.
  • HOL-ODE-Examples: Applications of the numerical algorithms to concrete systems of ODEs.
  • Lorenz_C0, Lorenz_C1: Verified algorithms for checking C1-information according to Tucker's proof, computation of C0-information.

extra-history = Change history: [2014-02-13]: added an implementation of the Euler method based on affine arithmetic
[2016-04-14]: added flow and variational equation
[2016-08-03]: numerical algorithms for reachability analysis (using second-order Runge-Kutta methods, splitting, and reduction) implemented using Lammich's framework for automatic refinement
[2017-09-20]: added Poincare map and propagation of variational equation in reachability analysis, verified algorithms for C1-information and computations for C0-information of the Lorenz attractor. notify = immler@in.tum.de, hoelzl@in.tum.de [Polynomials] title = Executable Multivariate Polynomials author = Christian Sternagel , René Thiemann , Alexander Maletzky , Fabian Immler , Florian Haftmann , Andreas Lochbihler , Alexander Bentkamp date = 2010-08-10 topic = Mathematics/Analysis, Mathematics/Algebra, Computer science/Algorithms/Mathematical license = LGPL abstract = We define multivariate polynomials over arbitrary (ordered) semirings in combination with (executable) operations like addition, multiplication, and substitution. We also define (weak) monotonicity of polynomials and comparison of polynomials where we provide standard estimations like absolute positiveness or the more recent approach of Neurauter, Zankl, and Middeldorp. Moreover, it is proven that strongly normalizing (monotone) orders can be lifted to strongly normalizing (monotone) orders over polynomials. Our formalization was performed as part of the IsaFoR/CeTA-system which contains several termination techniques. The provided theories have been essential to formalize polynomial interpretations.
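The executable operations mentioned above (addition and multiplication of multivariate polynomials) can be imitated in a small, unverified Python sketch that represents a polynomial as a finite map from power-products (exponent tuples) to coefficients — a toy analogue of the coefficient-function representation, not the formalized one:

```python
from collections import defaultdict

# A multivariate polynomial as a dict from exponent tuples to coefficients;
# absent keys denote a zero coefficient (finite support).
def padd(p, q):
    r = defaultdict(int, p)
    for mono, c in q.items():
        r[mono] += c
    return {m: c for m, c in r.items() if c != 0}

def pmul(p, q):
    r = defaultdict(int)
    for m1, c1 in p.items():
        for m2, c2 in q.items():
            # power-products multiply by adding exponents componentwise
            r[tuple(a + b for a, b in zip(m1, m2))] += c1 * c2
    return {m: c for m, c in r.items() if c != 0}
```

For instance, with x = {(1, 0): 1} and y = {(0, 1): 1}, squaring x + y yields x² + 2xy + y².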

This formalization also contains an abstract representation as coefficient functions with finite support and a type of power-products. If this type is ordered by a linear (term) ordering, various additional notions, such as leading power-product, leading coefficient etc., are introduced as well. Furthermore, a lot of generic properties of, and functions on, multivariate polynomials are formalized, including the substitution and evaluation homomorphisms, embeddings of polynomial rings into larger rings (i.e. with one additional indeterminate), homogenization and dehomogenization of polynomials, and the canonical isomorphism between R[X,Y] and R[X][Y]. extra-history = Change history: [2010-09-17]: Moved theories on arbitrary (ordered) semirings to Abstract Rewriting.
[2016-10-28]: Added abstract representation of polynomials and authors Maletzky/Immler.
[2018-01-23]: Added authors Haftmann, Lochbihler after incorporating their formalization of multivariate polynomials based on Polynomial mappings. Moved material from Bentkamp's entry "Deep Learning".
[2019-04-18]: Added material about polynomials whose power-products are represented themselves by polynomial mappings. notify = rene.thiemann@uibk.ac.at, christian.sternagel@uibk.ac.at, alexander.maletzky@risc.jku.at, immler@in.tum.de [Sqrt_Babylonian] title = Computing N-th Roots using the Babylonian Method author = René Thiemann date = 2013-01-03 topic = Mathematics/Analysis license = LGPL abstract = We implement the Babylonian method to compute n-th roots of numbers. We provide precise algorithms for naturals, integers and rationals, and offer an approximation algorithm for square roots over linear ordered fields. Moreover, there are precise algorithms to compute the floor and the ceiling of n-th roots. extra-history = Change history: [2013-10-16]: Added algorithms to compute floor and ceiling of sqrt of integers. [2014-07-11]: Moved NthRoot_Impl from Real-Impl to this entry. notify = rene.thiemann@uibk.ac.at [Sturm_Sequences] title = Sturm's Theorem author = Manuel Eberl date = 2014-01-11 topic = Mathematics/Analysis abstract = Sturm's Theorem states that polynomial sequences with certain properties, so-called Sturm sequences, can be used to count the number of real roots of a real polynomial. This work contains a proof of Sturm's Theorem and code for constructing Sturm sequences efficiently. It also provides the “sturm” proof method, which can decide certain statements about the roots of real polynomials, such as “the polynomial P has exactly n roots in the interval I” or “P(x) > Q(x) for all x ∈ ℝ”. notify = manuel@pruvisto.org [Sturm_Tarski] title = The Sturm-Tarski Theorem author = Wenda Li date = 2014-09-19 topic = Mathematics/Analysis abstract = We have formalized the Sturm-Tarski theorem (also referred to as the Tarski theorem), which generalizes Sturm's theorem. Sturm's theorem is usually used as a way to count distinct real roots, while the Sturm-Tarski theorem forms the basis for Tarski's classic quantifier elimination for real closed fields.
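The root-counting mechanism both theorems build on can be sketched in plain, unverified Python with exact rational arithmetic — a toy companion to the formalizations, not the verified code: build the Sturm chain of a (nonconstant, squarefree) polynomial and subtract sign-variation counts at the interval endpoints.

```python
from fractions import Fraction

# Polynomials are coefficient lists, lowest degree first: x^2 - 2 is [-2, 0, 1].

def deriv(p):
    return [Fraction(i) * c for i, c in enumerate(p)][1:]

def polyrem(a, b):
    # remainder of polynomial division of a by b
    a = a[:]
    while len(a) >= len(b):
        if a[-1] == 0:
            a.pop()
            continue
        q = a[-1] / b[-1]
        shift = len(a) - len(b)
        for i, c in enumerate(b):
            a[i + shift] -= q * c
        a.pop()
    while a and a[-1] == 0:
        a.pop()
    return a

def sturm_chain(p):
    # p, p', then negated remainders until the chain terminates
    chain = [p, deriv(p)]
    while True:
        r = polyrem(chain[-2], chain[-1])
        if not r:
            return chain
        chain.append([-c for c in r])

def sign_changes(chain, x):
    vals = [sum(c * x ** i for i, c in enumerate(p)) for p in chain]
    signs = [(v > 0) - (v < 0) for v in vals if v != 0]
    return sum(1 for s, t in zip(signs, signs[1:]) if s != t)

def count_roots(p, a, b):
    # distinct real roots of a nonconstant squarefree p in the interval (a, b]
    chain = sturm_chain([Fraction(c) for c in p])
    return sign_changes(chain, a) - sign_changes(chain, b)
```

For example, `count_roots([-2, 0, 1], -2, 2)` returns 2 for x² − 2.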
notify = wl302@cam.ac.uk [Markov_Models] title = Markov Models author = Johannes Hölzl , Tobias Nipkow date = 2012-01-03 topic = Mathematics/Probability theory, Computer science/Automata and formal languages abstract = This is a formalization of Markov models in Isabelle/HOL. It builds on Isabelle's probability theory. The available models are currently Discrete-Time Markov Chains and extensions of them with rewards.

As applications of these models we formalize probabilistic model checking of pCTL formulas, analysis of IPv4 address allocation in ZeroConf and an analysis of the anonymity of the Crowds protocol. See here for the corresponding paper. notify = hoelzl@in.tum.de [Probabilistic_System_Zoo] title = A Zoo of Probabilistic Systems author = Johannes Hölzl , Andreas Lochbihler , Dmitriy Traytel date = 2015-05-27 topic = Computer science/Automata and formal languages abstract = Numerous models of probabilistic systems are studied in the literature. Coalgebra has been used to classify them into system types and compare their expressiveness. We formalize the resulting hierarchy of probabilistic system types by modeling the semantics of the different systems as codatatypes. This approach yields simple and concise proofs, as bisimilarity coincides with equality for codatatypes.

This work is described in detail in the ITP 2015 publication by the authors. notify = traytel@in.tum.de [Density_Compiler] title = A Verified Compiler for Probability Density Functions author = Manuel Eberl , Johannes Hölzl , Tobias Nipkow date = 2014-10-09 topic = Mathematics/Probability theory, Computer science/Programming languages/Compiling abstract = Bhat et al. [TACAS 2013] developed an inductive compiler that computes density functions for probability spaces described by programs in a probabilistic functional language. In this work, we implement such a compiler for a modified version of this language within the theorem prover Isabelle and give a formal proof of its soundness w.r.t. the semantics of the source and target language. Together with Isabelle's code generation for inductive predicates, this yields a fully verified, executable density compiler. The proof is done in two steps: First, an abstract compiler working with abstract functions modelled directly in the theorem prover's logic is defined and proved sound. Then, this compiler is refined to a concrete version that returns a target-language expression.

An article with the same title and authors is published in the proceedings of ESOP 2015. A detailed presentation of this work can be found in the first author's master's thesis. notify = hoelzl@in.tum.de [CAVA_Automata] title = The CAVA Automata Library author = Peter Lammich date = 2014-05-28 topic = Computer science/Automata and formal languages abstract = We report on the graph and automata library that is used in the fully verified LTL model checker CAVA. As most components of CAVA use some type of graphs or automata, a common automata library simplifies assembly of the components and reduces redundancy.

The CAVA Automata Library provides a hierarchy of graph and automata classes, together with some standard algorithms. Its object oriented design allows for sharing of algorithms, theorems, and implementations between its classes, and also simplifies extensions of the library. Moreover, it is integrated into the Automatic Refinement Framework, supporting automatic refinement of the abstract automata types to efficient data structures.

Note that the CAVA Automata Library is work in progress. Currently, it is very specifically tailored towards the requirements of the CAVA model checker. Nevertheless, the formalization techniques presented here allow an extension of the library to a wider scope. Moreover, they are not limited to graph libraries, but apply to class hierarchies in general.

The CAVA Automata Library is described in the paper: Peter Lammich, The CAVA Automata Library, Isabelle Workshop 2014. notify = lammich@in.tum.de [LTL] title = Linear Temporal Logic author = Salomon Sickert contributors = Benedikt Seidl date = 2016-03-01 topic = Logic/General logic/Temporal logic, Computer science/Automata and formal languages abstract = This theory provides a formalisation of linear temporal logic (LTL) and unifies previous formalisations within the AFP. This entry establishes syntax and semantics for this logic and decouples it from existing entries, yielding a common environment for theories reasoning about LTL. Furthermore a parser written in SML and an executable simplifier are provided. extra-history = Change history: [2019-03-12]: Support for additional operators, implementation of common equivalence relations, definition of syntactic fragments of LTL and the minimal disjunctive normal form.
notify = sickert@in.tum.de [LTL_to_GBA] title = Converting Linear-Time Temporal Logic to Generalized Büchi Automata author = Alexander Schimpf , Peter Lammich date = 2014-05-28 topic = Computer science/Automata and formal languages abstract = We formalize linear-time temporal logic (LTL) and the algorithm by Gerth et al. to convert LTL formulas to generalized Büchi automata. We also formalize some syntactic rewrite rules that can be applied to optimize the LTL formula before conversion. Moreover, we integrate the Stuttering Equivalence AFP-Entry by Stefan Merz, adapting the lemma that next-free LTL formulas cannot distinguish between stuttering equivalent runs to our setting.

We use the Isabelle Refinement and Collection framework, as well as the Autoref tool, to obtain a refined version of our algorithm, from which efficiently executable code can be extracted. notify = lammich@in.tum.de [Gabow_SCC] title = Verified Efficient Implementation of Gabow's Strongly Connected Components Algorithm author = Peter Lammich date = 2014-05-28 topic = Computer science/Algorithms/Graph, Mathematics/Graph theory abstract = We present an Isabelle/HOL formalization of Gabow's algorithm for finding the strongly connected components of a directed graph. Using data refinement techniques, we extract efficient code that performs comparable to a reference implementation in Java. Our style of formalization allows for re-using large parts of the proofs when defining variants of the algorithm. We demonstrate this by verifying an algorithm for the emptiness check of generalized Büchi automata, re-using most of the existing proofs. notify = lammich@in.tum.de [Promela] title = Promela Formalization author = René Neumann date = 2014-05-28 topic = Computer science/System description languages abstract = We present an executable formalization of the language Promela, the description language for models of the model checker SPIN. This formalization is part of the work for a completely verified model checker (CAVA), but also serves as a useful (and executable!) description of the semantics of the language itself, something that is currently missing. The formalization uses three steps: It takes an abstract syntax tree generated from an SML parser, removes syntactic sugar and enriches it with type information. This further gets translated into a transition system, on which the semantic engine (read: successor function) operates. 
notify = [CAVA_LTL_Modelchecker] title = A Fully Verified Executable LTL Model Checker author = Javier Esparza , Peter Lammich , René Neumann , Tobias Nipkow , Alexander Schimpf , Jan-Georg Smaus date = 2014-05-28 topic = Computer science/Automata and formal languages abstract = We present an LTL model checker whose code has been completely verified using the Isabelle theorem prover. The checker consists of over 4000 lines of ML code. The code is produced using the Isabelle Refinement Framework, which allows us to split its correctness proof into (1) the proof of an abstract version of the checker, consisting of a few hundred lines of ``formalized pseudocode'', and (2) a verified refinement step in which mathematical sets and other abstract structures are replaced by implementations of efficient structures like red-black trees and functional arrays. This leads to a checker that, while still slower than unverified checkers, can already be used as a trusted reference implementation against which advanced implementations can be tested.

An early version of this model checker is described in the CAV 2013 paper with the same title. notify = lammich@in.tum.de [Fermat3_4] title = Fermat's Last Theorem for Exponents 3 and 4 and the Parametrisation of Pythagorean Triples author = Roelof Oosterhuis <> date = 2007-08-12 topic = Mathematics/Number theory abstract = This document presents the mechanised proofs of

  • Fermat's Last Theorem for exponents 3 and 4 and
  • the parametrisation of Pythagorean Triples.
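The parametrisation in the second bullet can be illustrated in a short, unverified Python sketch of Euclid's formula (the generating direction only, not the formalized completeness result):

```python
def pythagorean_triple(m, n):
    # Euclid's parametrisation: for m > n > 0,
    # (m^2 - n^2, 2mn, m^2 + n^2) is a Pythagorean triple
    assert m > n > 0
    return m * m - n * n, 2 * m * n, m * m + n * n
```

For example, `pythagorean_triple(2, 1)` yields the classic triple (3, 4, 5).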
notify = nipkow@in.tum.de, roelofoosterhuis@gmail.com [Perfect-Number-Thm] title = Perfect Number Theorem author = Mark Ijbema date = 2009-11-22 topic = Mathematics/Number theory abstract = These theories present the mechanised proof of the Perfect Number Theorem. notify = nipkow@in.tum.de [SumSquares] title = Sums of Two and Four Squares author = Roelof Oosterhuis <> date = 2007-08-12 topic = Mathematics/Number theory abstract = This document presents the mechanised proofs of the following results:
  • any prime number of the form 4m+1 can be written as the sum of two squares;
  • any natural number can be written as the sum of four squares
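The first bullet can be checked experimentally with a small brute-force Python sketch (an unverified illustration, not the mechanised proof): for a prime of the form 4m+1 a decomposition into two squares always exists.

```python
from math import isqrt

def two_squares(p):
    # search for a, b with p = a^2 + b^2; succeeds for primes p = 4m + 1
    for a in range(isqrt(p) + 1):
        b2 = p - a * a
        b = isqrt(b2)
        if b * b == b2:
            return a, b
    return None
```

For example, `two_squares(13)` finds 13 = 2² + 3², whereas for a prime like 3 (of the form 4m+3) no decomposition exists.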
notify = nipkow@in.tum.de, roelofoosterhuis@gmail.com [Lehmer] title = Lehmer's Theorem author = Simon Wimmer , Lars Noschinski date = 2013-07-22 topic = Mathematics/Number theory abstract = In 1927, Lehmer presented criteria for primality, based on the converse of Fermat's little theorem. This work formalizes the second criterion from Lehmer's paper, a necessary and sufficient condition for primality.

As a side product we formalize some properties of Euler's phi-function, the notion of the order of an element of a group, and the cyclicity of the multiplicative group of a finite field. notify = noschinl@gmail.com, simon.wimmer@tum.de [Pratt_Certificate] title = Pratt's Primality Certificates author = Simon Wimmer , Lars Noschinski date = 2013-07-22 topic = Mathematics/Number theory abstract = In 1975, Pratt introduced a proof system for certifying primes. He showed that a number p is prime iff a primality certificate for p exists. By showing a logarithmic upper bound on the length of the certificates in the size of the prime number, he concluded that the decision problem for prime numbers is in NP. This work formalizes soundness and completeness of Pratt's proof system as well as an upper bound for the size of the certificate. notify = noschinl@gmail.com, simon.wimmer@tum.de [Monad_Memo_DP] title = Monadification, Memoization and Dynamic Programming author = Simon Wimmer , Shuwei Hu , Tobias Nipkow topic = Computer science/Programming languages/Transformations, Computer science/Algorithms, Computer science/Functional programming date = 2018-05-22 notify = wimmers@in.tum.de abstract = We present a lightweight framework for the automatic verified (functional or imperative) memoization of recursive functions. Our tool can turn a pure Isabelle/HOL function definition into a monadified version in a state monad or the Imperative HOL heap monad, and prove a correspondence theorem. We provide a variety of memory implementations for the two types of monads. A number of simple techniques allow us to achieve bottom-up computation and space-efficient memoization. The framework’s utility is demonstrated on a number of representative dynamic programming problems. A detailed description of our work can be found in the accompanying paper [2].
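The idea of threading a memory through a recursive function can be sketched in ordinary, unverified Python, with an explicit dictionary standing in for the memory that the state monad threads through the computation:

```python
def fib_memo(n, memo=None):
    # the memo dictionary plays the role of the threaded state:
    # each subproblem is computed once and then looked up
    if memo is None:
        memo = {}
    if n not in memo:
        memo[n] = n if n < 2 else fib_memo(n - 1, memo) + fib_memo(n - 2, memo)
    return memo[n]
```

This turns the exponential-time naive recursion into a linear-time dynamic program, the effect the framework achieves automatically (and verifiedly) for Isabelle/HOL functions.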
[Probabilistic_Timed_Automata] title = Probabilistic Timed Automata author = Simon Wimmer , Johannes Hölzl topic = Mathematics/Probability theory, Computer science/Automata and formal languages date = 2018-05-24 notify = wimmers@in.tum.de, hoelzl@in.tum.de abstract = We present a formalization of probabilistic timed automata (PTA) for which we try to follow the formula MDP + TA = PTA as far as possible: our work starts from our existing formalizations of Markov decision processes (MDP) and timed automata (TA) and combines them modularly. We prove the fundamental result for probabilistic timed automata: the region construction that is known from timed automata carries over to the probabilistic setting. In particular, this allows us to prove that minimum and maximum reachability probabilities can be computed via a reduction to MDP model checking, including the case where one wants to disregard unrealizable behavior. Further information can be found in our ITP paper [2]. [Hidden_Markov_Models] title = Hidden Markov Models author = Simon Wimmer topic = Mathematics/Probability theory, Computer science/Algorithms date = 2018-05-25 notify = wimmers@in.tum.de abstract = This entry contains a formalization of hidden Markov models [3] based on Johannes Hölzl's formalization of discrete time Markov chains [1]. The basic definitions are provided and the correctness of two main (dynamic programming) algorithms for hidden Markov models is proved: the forward algorithm for computing the likelihood of an observed sequence, and the Viterbi algorithm for decoding the most probable hidden state sequence. The Viterbi algorithm is made executable including memoization. Hidden Markov models have various applications in natural language processing. For an introduction see Jurafsky and Martin [2].
[ArrowImpossibilityGS] title = Arrow and Gibbard-Satterthwaite author = Tobias Nipkow date = 2008-09-01 topic = Mathematics/Games and economics abstract = This article formalizes two proofs of Arrow's impossibility theorem due to Geanakoplos and derives the Gibbard-Satterthwaite theorem as a corollary. One formalization is based on utility functions, the other one on strict partial orders.

An article about these proofs is found here. notify = nipkow@in.tum.de [SenSocialChoice] title = Some classical results in Social Choice Theory author = Peter Gammie date = 2008-11-09 topic = Mathematics/Games and economics abstract = Drawing on Sen's landmark work "Collective Choice and Social Welfare" (1970), this development proves Arrow's General Possibility Theorem, Sen's Liberal Paradox and May's Theorem in a general setting. The goal was to make precise the classical statements and proofs of these results, and to provide a foundation for more recent results such as the Gibbard-Satterthwaite and Duggan-Schwartz theorems. notify = nipkow@in.tum.de [Vickrey_Clarke_Groves] title = VCG - Combinatorial Vickrey-Clarke-Groves Auctions author = Marco B. Caminati <>, Manfred Kerber , Christoph Lange, Colin Rowat date = 2015-04-30 topic = Mathematics/Games and economics abstract = A VCG auction (named after their inventors Vickrey, Clarke, and Groves) is a generalization of the single-good, second price Vickrey auction to the case of a combinatorial auction (multiple goods, from which any participant can bid on each possible combination). We formalize in this entry VCG auctions, including tie-breaking and prove that the functions for the allocation and the price determination are well-defined. Furthermore we show that the allocation function allocates goods only to participants, only goods in the auction are allocated, and no good is allocated twice. We also show that the price function is non-negative. These properties also hold for the automatically extracted Scala code. notify = mnfrd.krbr@gmail.com [Topology] title = Topology author = Stefan Friedrich <> date = 2004-04-26 topic = Mathematics/Topology abstract = This entry contains two theories. The first, Topology, develops the basic notions of general topology. The second, which can be viewed as a demonstration of the first, is called LList_Topology. It develops the topology of lazy lists. 
notify = lcp@cl.cam.ac.uk [Knot_Theory] title = Knot Theory author = T.V.H. Prathamesh date = 2016-01-20 topic = Mathematics/Topology abstract = This work contains a formalization of some topics in knot theory. The concepts that were formalized include definitions of tangles, links, framed links and link/tangle equivalence. The formalization is based on a formulation of links in terms of tangles. We further construct and prove the invariance of the Bracket polynomial. The Bracket polynomial is an invariant of framed links closely linked to the Jones polynomial. This is perhaps the first attempt to formalize any aspect of knot theory in an interactive proof assistant. notify = prathamesh@imsc.res.in [Graph_Theory] title = Graph Theory author = Lars Noschinski date = 2013-04-28 topic = Mathematics/Graph theory abstract = This development provides a formalization of directed graphs, supporting (labelled) multi-edges and infinite graphs. A polymorphic edge type allows edges to be treated as pairs of vertices, if multi-edges are not required. Formalized properties include walks (and related concepts), connectedness, subgraphs, and basic properties of isomorphisms.

This formalization is used to prove characterizations of Euler Trails, Shortest Paths and Kuratowski subgraphs. notify = noschinl@gmail.com [Planarity_Certificates] title = Planarity Certificates author = Lars Noschinski date = 2015-11-11 topic = Mathematics/Graph theory abstract = This development provides a formalization of planarity based on combinatorial maps and proves that Kuratowski's theorem implies combinatorial planarity. Moreover, it contains verified implementations of programs checking certificates for planarity (i.e., a combinatorial map) or non-planarity (i.e., a Kuratowski subgraph). notify = noschinl@gmail.com [Max-Card-Matching] title = Maximum Cardinality Matching author = Christine Rizkallah date = 2011-07-21 topic = Mathematics/Graph theory abstract =

A matching in a graph G is a subset M of the edges of G such that no two share an endpoint. A matching has maximum cardinality if its cardinality is at least as large as that of any other matching. An odd-set cover OSC of a graph G is a labeling of the nodes of G with integers such that every edge of G is either incident to a node labeled 1 or connects two nodes labeled with the same number i ≥ 2.

This article proves Edmonds' theorem:
Let M be a matching in a graph G and let OSC be an odd-set cover of G. For any i ≥ 0, let n(i) be the number of nodes labeled i. If |M| = n(1) + ∑i ≥ 2(n(i) div 2), then M is a maximum cardinality matching.
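As a toy, unverified illustration of how such a certificate is checked, consider the hypothetical example of the triangle graph with the matching {(0, 1)} and the odd-set cover labeling every vertex 2, sketched in Python:

```python
edges = [(0, 1), (0, 2), (1, 2)]   # triangle graph K3
M = [(0, 1)]                        # a matching of size 1
label = {0: 2, 1: 2, 2: 2}          # candidate odd-set cover

# OSC condition: every edge touches a node labeled 1,
# or joins two nodes carrying the same label i >= 2
valid_osc = all(label[u] == 1 or label[v] == 1 or
                (label[u] == label[v] and label[u] >= 2)
                for u, v in edges)

def n(i):
    # number of nodes labeled i
    return sum(1 for v in label if label[v] == i)

bound = n(1) + sum(n(i) // 2 for i in set(label.values()) if i >= 2)
# |M| = bound certifies, by Edmonds' theorem, that M has maximum cardinality
```

Here n(2) = 3, so the bound is 3 div 2 = 1 = |M|, certifying that no matching of the triangle has more than one edge.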

notify = nipkow@in.tum.de [Girth_Chromatic] title = A Probabilistic Proof of the Girth-Chromatic Number Theorem author = Lars Noschinski date = 2012-02-06 topic = Mathematics/Graph theory abstract = This work presents a formalization of the Girth-Chromatic number theorem in graph theory, stating that graphs with arbitrarily large girth and chromatic number exist. The proof uses the theory of Random Graphs to prove the existence with probabilistic arguments. notify = noschinl@gmail.com [Random_Graph_Subgraph_Threshold] title = Properties of Random Graphs -- Subgraph Containment author = Lars Hupel date = 2014-02-13 topic = Mathematics/Graph theory, Mathematics/Probability theory abstract = Random graphs are graphs with a fixed number of vertices, where each edge is present with a fixed probability. We are interested in the probability that a random graph contains a certain pattern, for example a cycle or a clique. A very high edge probability gives rise to perhaps too many edges (which degrades performance for many algorithms), whereas a low edge probability might result in a disconnected graph. We prove a theorem about a threshold probability such that a higher edge probability will asymptotically almost surely produce a random graph with the desired subgraph. notify = hupel@in.tum.de [Flyspeck-Tame] title = Flyspeck I: Tame Graphs author = Gertrud Bauer <>, Tobias Nipkow date = 2006-05-22 topic = Mathematics/Graph theory abstract = These theories present the verified enumeration of tame plane graphs as defined by Thomas C. Hales in his proof of the Kepler Conjecture in his book Dense Sphere Packings. A Blueprint for Formal Proofs. [CUP 2012]. The values of the constants in the definition of tameness are identical to those in the Flyspeck project. The IJCAR 2006 paper by Nipkow, Bauer and Schultz refers to the original version of Hales' proof, the ITP 2011 paper by Nipkow refers to the Blueprint version of the proof.
extra-history = Change history: [2010-11-02]: modified theories to reflect the modified definition of tameness in Hales' revised proof.
[2014-07-03]: modified constants in def of tameness and Archive according to the final state of the Flyspeck proof. notify = nipkow@in.tum.de [Well_Quasi_Orders] title = Well-Quasi-Orders author = Christian Sternagel date = 2012-04-13 topic = Mathematics/Combinatorics abstract = Based on Isabelle/HOL's type class for preorders, we introduce a type class for well-quasi-orders (wqo) which is characterized by the absence of "bad" sequences (our proofs are along the lines of the proof of Nash-Williams, from which we also borrow terminology). Our main results are instantiations for the product type, the list type, and a type of finite trees, which (almost) directly follow from our proofs of (1) Dickson's Lemma, (2) Higman's Lemma, and (3) Kruskal's Tree Theorem. More concretely:
  • If the sets A and B are wqo then their Cartesian product is wqo.
  • If the set A is wqo then the set of finite lists over A is wqo.
  • If the set A is wqo then the set of finite trees over A is wqo.
The research was funded by the Austrian Science Fund (FWF): J3202. extra-history = Change history: [2012-06-11]: Added Kruskal's Tree Theorem.
[2012-12-19]: New variant of Kruskal's tree theorem for terms (as opposed to variadic terms, i.e., trees), plus finite version of the tree theorem as corollary.
[2013-05-16]: Simplified construction of minimal bad sequences.
[2014-07-09]: Simplified proofs of Higman's lemma and Kruskal's tree theorem, based on homogeneous sequences.
[2016-01-03]: An alternative proof of Higman's lemma by open induction.
[2017-06-08]: Proved (classical) equivalence to inductive definition of almost-full relations according to the ITP 2012 paper "Stop When You Are Almost-Full" by Vytiniotis, Coquand, and Wahlstedt. notify = c.sternagel@gmail.com [Marriage] title = Hall's Marriage Theorem author = Dongchen Jiang , Tobias Nipkow date = 2010-12-17 topic = Mathematics/Combinatorics abstract = Two proofs of Hall's Marriage Theorem: one due to Halmos and Vaughan, one due to Rado. extra-history = Change history: [2011-09-09]: Added Rado's proof notify = nipkow@in.tum.de [Bondy] title = Bondy's Theorem author = Jeremy Avigad , Stefan Hetzl date = 2012-10-27 topic = Mathematics/Combinatorics abstract = A proof of Bondy's theorem following B. Bollobás, Combinatorics, 1986, Cambridge University Press. notify = avigad@cmu.edu, hetzl@logic.at [Ramsey-Infinite] title = Ramsey's theorem, infinitary version author = Tom Ridge <> date = 2004-09-20 topic = Mathematics/Combinatorics abstract = This formalization of Ramsey's theorem (infinitary version) is taken from Boolos and Jeffrey, Computability and Logic, 3rd edition, Chapter 26. It differs slightly from the text by assuming a slightly stronger hypothesis. In particular, the induction hypothesis is stronger, holding for any infinite subset of the naturals. This avoids the rather peculiar mapping argument between k_j and a_{i_{k_j}} on p.263, which is unnecessary and slightly mars this really beautiful result. notify = lp15@cam.ac.uk [Derangements] title = Derangements Formula author = Lukas Bulwahn date = 2015-06-27 topic = Mathematics/Combinatorics abstract = The Derangements Formula gives a closed-form expression for the number of fixpoint-free permutations. This theorem is the 88th theorem in a list of the ``Top 100 Mathematical Theorems''.
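The closed formula, D(n) = n! · Σ_{k=0}^{n} (−1)^k / k!, can be cross-checked against a direct count of fixpoint-free permutations in a small, unverified Python sketch:

```python
from math import factorial
from itertools import permutations

def derangements(n):
    # D(n) = n! * sum_{k=0..n} (-1)^k / k!, computed in exact integers
    return sum((-1) ** k * (factorial(n) // factorial(k)) for k in range(n + 1))

def count_fixpoint_free(n):
    # brute-force count of permutations without a fixpoint
    return sum(1 for p in permutations(range(n))
               if all(p[i] != i for i in range(n)))
```

For example, `derangements(4)` returns 9, agreeing with the direct enumeration.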
notify = lukas.bulwahn@gmail.com [Euler_Partition] title = Euler's Partition Theorem author = Lukas Bulwahn date = 2015-11-19 topic = Mathematics/Combinatorics abstract = Euler's Partition Theorem states that the number of partitions with only distinct parts is equal to the number of partitions with only odd parts. The combinatorial proof follows John Harrison's HOL Light formalization. This theorem is the 45th theorem of the Top 100 Theorems list. notify = lukas.bulwahn@gmail.com [Discrete_Summation] title = Discrete Summation author = Florian Haftmann contributors = Amine Chaieb <> date = 2014-04-13 topic = Mathematics/Combinatorics abstract = These theories introduce basic concepts and proofs about discrete summation: shifts, formal summation, falling factorials and Stirling numbers. As proof of concept, a simple summation conversion is provided. notify = florian.haftmann@informatik.tu-muenchen.de [Open_Induction] title = Open Induction author = Mizuhito Ogawa <>, Christian Sternagel date = 2012-11-02 topic = Mathematics/Combinatorics abstract = A proof of the open induction schema based on J.-C. Raoult, Proving open properties by induction, Information Processing Letters 29, 1988, pp.19-23.

This research was supported by the Austrian Science Fund (FWF): J3202.

notify = c.sternagel@gmail.com [Category] title = Category Theory to Yoneda's Lemma author = Greg O'Keefe date = 2005-04-21 topic = Mathematics/Category theory license = LGPL abstract = This development proves Yoneda's lemma and aims to be readable by humans. It only defines what is needed for the lemma: categories, functors and natural transformations. Limits, adjunctions and other important concepts are not included. extra-history = Change history: [2010-04-23]: The definition of the constant equinumerous was slightly too weak in the original submission and has been fixed in revision 8c2b5b3c995f. notify = lcp@cl.cam.ac.uk [Category2] title = Category Theory author = Alexander Katovsky date = 2010-06-20 topic = Mathematics/Category theory abstract = This article presents a development of Category Theory in Isabelle/HOL. A Category is defined using records and locales. Functors and Natural Transformations are also defined. The main result that has been formalized is that the Yoneda functor is a full and faithful embedding. We also formalize the completeness of many sorted monadic equational logic. Extensive use is made of the HOLZF theory in both cases. For an informal description see here [pdf]. notify = alexander.katovsky@cantab.net [FunWithFunctions] title = Fun With Functions author = Tobias Nipkow date = 2008-08-26 topic = Mathematics/Misc abstract = This is a collection of cute puzzles of the form ``Show that if a function satisfies the following constraints, it must be ...'' Please add further examples to this collection! notify = nipkow@in.tum.de [FunWithTilings] title = Fun With Tilings author = Tobias Nipkow , Lawrence C. Paulson date = 2008-11-07 topic = Mathematics/Misc abstract = Tilings are defined inductively. It is shown that one form of mutilated chess board cannot be tiled with dominoes, while another one can be tiled with L-shaped tiles. Please add further fun examples of this kind! 
notify = nipkow@in.tum.de [Lazy-Lists-II] title = Lazy Lists II author = Stefan Friedrich <> date = 2004-04-26 topic = Computer science/Data structures abstract = This theory contains some useful extensions to the LList (lazy list) theory by Larry Paulson, including finite, infinite, and positive llists over an alphabet, as well as the new constants take and drop and the prefix order of llists. Finally, the notions of safety and liveness in the sense of Alpern and Schneider (1985) are defined. notify = lcp@cl.cam.ac.uk [Ribbon_Proofs] title = Ribbon Proofs author = John Wickerson <> date = 2013-01-19 topic = Computer science/Programming languages/Logics abstract = This document concerns the theory of ribbon proofs: a diagrammatic proof system, based on separation logic, for verifying program correctness. We include the syntax, proof rules, and soundness results for two alternative formalisations of ribbon proofs.

Compared to traditional proof outlines, ribbon proofs emphasise the structure of a proof, so are intelligible and pedagogical. Because they contain less redundancy than proof outlines, and allow each proof step to be checked locally, they may be more scalable. Where proof outlines are cumbersome to modify, ribbon proofs can be visually manoeuvred to yield proofs of variant programs. notify = [Koenigsberg_Friendship] title = The Königsberg Bridge Problem and the Friendship Theorem author = Wenda Li date = 2013-07-19 topic = Mathematics/Graph theory abstract = This development provides a formalization of undirected graphs and simple graphs, which are based on Benedikt Nordhoff and Peter Lammich's simple formalization of labelled directed graphs in the archive. Then, with our formalization of graphs, we show both necessary and sufficient conditions for Eulerian trails and circuits as well as the fact that the Königsberg Bridge Problem does not have a solution. In addition, we show the Friendship Theorem in simple graphs. notify = [Tree_Decomposition] title = Tree Decomposition author = Christoph Dittmann notify = date = 2016-05-31 topic = Mathematics/Graph theory abstract = We formalize tree decompositions and tree width in Isabelle/HOL, proving that trees have treewidth 1. We also show that every edge of a tree decomposition is a separation of the underlying graph. As an application of this theorem we prove that complete graphs of size n have treewidth n-1. [Menger] title = Menger's Theorem author = Christoph Dittmann topic = Mathematics/Graph theory date = 2017-02-26 notify = isabelle@christoph-d.de abstract = We present a formalization of Menger's Theorem for directed and undirected graphs in Isabelle/HOL. This well-known result shows that if two non-adjacent distinct vertices u, v in a directed graph have no separator smaller than n, then there exist n internally vertex-disjoint paths from u to v. 
The version for undirected graphs follows immediately because undirected graphs are a special case of directed graphs. [IEEE_Floating_Point] title = A Formal Model of IEEE Floating Point Arithmetic author = Lei Yu contributors = Fabian Hellauer , Fabian Immler date = 2013-07-27 topic = Computer science/Data structures abstract = This development provides a formal model of IEEE-754 floating-point arithmetic. This formalization, including formal specification of the standard and proofs of important properties of floating-point arithmetic, forms the foundation for verifying programs with floating-point computation. There is also a code generation setup for floats so that we can execute programs using this formalization in functional programming languages. notify = lp15@cam.ac.uk, immler@in.tum.de extra-history = Change history: [2017-09-25]: Added conversions from and to software floating point numbers (by Fabian Hellauer and Fabian Immler).
[2018-02-05]: 'Modernized' representation following the formalization in HOL4: former "float_format" and predicate "is_valid" is now encoded in a type "('e, 'f) float" where 'e and 'f encode the size of exponent and fraction. [Native_Word] title = Native Word author = Andreas Lochbihler contributors = Peter Lammich date = 2013-09-17 topic = Computer science/Data structures abstract = This entry makes machine words and machine arithmetic available for code generation from Isabelle/HOL. It provides a common abstraction that hides the differences between the different target languages. The code generator maps these operations to the APIs of the target languages. Apart from that, we extend the available bit operations on types int and integer, and map them to the operations in the target languages. extra-history = Change history: [2013-11-06]: added conversion function between native words and characters (revision fd23d9a7fe3a)
[2014-03-31]: added words of default size in the target language (by Peter Lammich) (revision 25caf5065833)
[2014-10-06]: proper test setup with compilation and execution of tests in all target languages (revision 5d7a1c9ae047)
[2017-09-02]: added 64-bit words (revision c89f86244e3c)
[2018-07-15]: added cast operators for default-size words (revision fc1f1fb8dd30)
notify = mail@andreas-lochbihler.de [XML] title = XML author = Christian Sternagel , René Thiemann date = 2014-10-03 topic = Computer science/Functional programming, Computer science/Data structures abstract = This entry provides an XML library for Isabelle/HOL. This includes parsing and pretty printing of XML trees as well as combinators for transforming XML trees into arbitrary user-defined data. The main contribution of this entry is an interface (fit for code generation) that allows for communication between verified programs formalized in Isabelle/HOL and the outside world via XML. This library was developed as part of the IsaFoR/CeTA project to which we refer for examples of its usage. notify = c.sternagel@gmail.com, rene.thiemann@uibk.ac.at [HereditarilyFinite] title = The Hereditarily Finite Sets author = Lawrence C. Paulson date = 2013-11-17 topic = Logic/Set theory abstract = The theory of hereditarily finite sets is formalised, following the development of Swierczkowski. An HF set is a finite collection of other HF sets; they enjoy an induction principle and satisfy all the axioms of ZF set theory apart from the axiom of infinity, which is negated. All constructions that are possible in ZF set theory (Cartesian products, disjoint sums, natural numbers, functions) without using infinite sets are possible here. The definition of addition for the HF sets follows Kirby. This development forms the foundation for the Isabelle proof of Gödel's incompleteness theorems, which has been formalised separately. extra-history = Change history: [2015-02-23]: Added the theory "Finitary" defining the class of types that can be embedded in hf, including int, char, option, list, etc. notify = lp15@cam.ac.uk [Incompleteness] title = Gödel's Incompleteness Theorems author = Lawrence C. 
Paulson date = 2013-11-17 topic = Logic/Proof theory abstract = Gödel's two incompleteness theorems are formalised, following a careful presentation by Swierczkowski, in the theory of hereditarily finite sets. This represents the first ever machine-assisted proof of the second incompleteness theorem. Compared with traditional formalisations using Peano arithmetic (see e.g. Boolos), coding is simpler, with no need to formalise the notion of multiplication (let alone that of a prime number) in the formalised calculus upon which the theorem is based. However, other technical problems had to be solved in order to complete the argument. notify = lp15@cam.ac.uk [Finite_Automata_HF] title = Finite Automata in Hereditarily Finite Set Theory author = Lawrence C. Paulson date = 2015-02-05 topic = Computer science/Automata and formal languages abstract = Finite Automata, both deterministic and non-deterministic, for regular languages. The Myhill-Nerode Theorem. Closure under intersection, concatenation, etc. Regular expressions define regular languages. Closure under reversal; the powerset construction mapping NFAs to DFAs. Left and right languages; minimal DFAs. Brzozowski's minimization algorithm. Uniqueness up to isomorphism of minimal DFAs. notify = lp15@cam.ac.uk [Decreasing-Diagrams] title = Decreasing Diagrams author = Harald Zankl license = LGPL date = 2013-11-01 topic = Logic/Rewriting abstract = This theory contains a formalization of decreasing diagrams showing that any locally decreasing abstract rewrite system is confluent. We consider the valley (van Oostrom, TCS 1994) and the conversion version (van Oostrom, RTA 2008) and closely follow the original proofs. As an application we prove Newman's lemma. 
notify = Harald.Zankl@uibk.ac.at [Decreasing-Diagrams-II] title = Decreasing Diagrams II author = Bertram Felgenhauer license = LGPL date = 2015-08-20 topic = Logic/Rewriting abstract = This theory formalizes the commutation version of decreasing diagrams for Church-Rosser modulo. The proof follows Felgenhauer and van Oostrom (RTA 2013). The theory also provides important specializations, in particular van Oostrom’s conversion version (TCS 2008) of decreasing diagrams. notify = bertram.felgenhauer@uibk.ac.at [GoedelGod] title = Gödel's God in Isabelle/HOL author = Christoph Benzmüller , Bruno Woltzenlogel Paleo date = 2013-11-12 topic = Logic/Philosophical aspects abstract = Dana Scott's version of Gödel's proof of God's existence is formalized in quantified modal logic KB (QML KB). QML KB is modeled as a fragment of classical higher-order logic (HOL); thus, the formalization is essentially a formalization in HOL. notify = lp15@cam.ac.uk, c.benzmueller@fu-berlin.de [Types_Tableaus_and_Goedels_God] title = Types, Tableaus and Gödel’s God in Isabelle/HOL author = David Fuenmayor , Christoph Benzmüller topic = Logic/Philosophical aspects date = 2017-05-01 notify = davfuenmayor@gmail.com, c.benzmueller@gmail.com abstract = A computer-formalisation of the essential parts of Fitting's textbook "Types, Tableaus and Gödel's God" in Isabelle/HOL is presented. In particular, Fitting's (and Anderson's) variant of the ontological argument is verified and confirmed. This variant avoids the modal collapse, which has been criticised as an undesirable side-effect of Kurt Gödel's (and Dana Scott's) versions of the ontological argument. Fitting's work employs an intensional higher-order modal logic, which we shallowly embed here in classical higher-order logic. We then utilize the embedded logic for the formalisation of Fitting's argument. (See also the earlier AFP entry ``Gödel's God in Isabelle/HOL''.) 
[GewirthPGCProof] title = Formalisation and Evaluation of Alan Gewirth's Proof for the Principle of Generic Consistency in Isabelle/HOL author = David Fuenmayor , Christoph Benzmüller topic = Logic/Philosophical aspects date = 2018-10-30 notify = davfuenmayor@gmail.com, c.benzmueller@gmail.com abstract = An ambitious ethical theory ---Alan Gewirth's "Principle of Generic Consistency"--- is encoded and analysed in Isabelle/HOL. Gewirth's theory has stirred much attention in philosophy and ethics and has been proposed as a potential means to bound the impact of artificial general intelligence. extra-history = Change history: [2019-04-09]: added proof for a stronger variant of the PGC and exemplary inferences (revision 88182cb0a2f6)
[Lowe_Ontological_Argument] title = Computer-assisted Reconstruction and Assessment of E. J. Lowe's Modal Ontological Argument author = David Fuenmayor , Christoph Benzmüller topic = Logic/Philosophical aspects date = 2017-09-21 notify = davfuenmayor@gmail.com, c.benzmueller@gmail.com abstract = Computers may help us to understand --not just verify-- philosophical arguments. By utilizing modern proof assistants in an iterative interpretive process, we can reconstruct and assess an argument by fully formal means. Through the mechanization of a variant of St. Anselm's ontological argument by E. J. Lowe, which is a paradigmatic example of a natural-language argument with strong ties to metaphysics and religion, we offer an ideal showcase for our computer-assisted interpretive method. [AnselmGod] title = Anselm's God in Isabelle/HOL author = Ben Blumson topic = Logic/Philosophical aspects date = 2017-09-06 notify = benblumson@gmail.com abstract = Paul Oppenheimer and Edward Zalta's formalisation of Anselm's ontological argument for the existence of God is automated by embedding a free logic for definite descriptions within Isabelle/HOL. [Tail_Recursive_Functions] title = A General Method for the Proof of Theorems on Tail-recursive Functions author = Pasquale Noce date = 2013-12-01 topic = Computer science/Functional programming abstract =

Tail-recursive function definitions are sometimes more straightforward than alternatives, but proving theorems on them may be roundabout because of the peculiar form of the resulting recursion induction rules.

This paper describes a proof method that provides a general solution to this problem by means of suitable invariants over inductive sets, and illustrates the application of this method by examining two case studies.

notify = pasquale.noce.lavoro@gmail.com [CryptoBasedCompositionalProperties] title = Compositional Properties of Crypto-Based Components author = Maria Spichkova date = 2014-01-11 topic = Computer science/Security abstract = This paper presents an Isabelle/HOL set of theories which allows the specification of crypto-based components and the verification of their composition properties wrt. cryptographic aspects. We introduce a formalisation of the security property of data secrecy, the corresponding definitions and proofs. Please note that here we import the Isabelle/HOL theory ListExtras.thy, presented in the AFP entry FocusStreamsCaseStudies-AFP. notify = maria.spichkova@rmit.edu.au [Featherweight_OCL] title = Featherweight OCL: A Proposal for a Machine-Checked Formal Semantics for OCL 2.5 author = Achim D. Brucker , Frédéric Tuong , Burkhart Wolff date = 2014-01-16 topic = Computer science/System description languages abstract = The Unified Modeling Language (UML) is one of the few modeling languages that is widely used in industry. While UML is mostly known as diagrammatic modeling language (e.g., visualizing class models), it is complemented by a textual language, called Object Constraint Language (OCL). The current version of OCL is based on a four-valued logic that turns UML into a formal language. Any type comprises the elements "invalid" and "null" which are propagated as strict and non-strict, respectively. Unfortunately, the former semi-formal semantics of this specification language, captured in the "Annex A" of the OCL standard, leads to different interpretations of corner cases. We formalize the core of OCL: denotational definitions, a logical calculus and operational rules that allow for the execution of OCL expressions by a mixture of term rewriting and code compilation. Our formalization reveals several inconsistencies and contradictions in the current version of the OCL standard. 
Overall, this document is intended to provide the basis for a machine-checked text "Annex A" of the OCL standard targeted at tool implementors. extra-history = Change history: [2015-10-13]: afp-devel@ea3b38fc54d6 and hol-testgen@12148
   Update of Featherweight OCL including a change in the abstract.
[2014-01-16]: afp-devel@9091ce05cb20 and hol-testgen@10241
   New Entry: Featherweight OCL notify = brucker@spamfence.net, tuong@users.gforge.inria.fr, wolff@lri.fr [Relation_Algebra] title = Relation Algebra author = Alasdair Armstrong <>, Simon Foster , Georg Struth , Tjark Weber date = 2014-01-25 topic = Mathematics/Algebra abstract = Tarski's algebra of binary relations is formalised along the lines of the standard textbooks of Maddux and Schmidt and Ströhlein. This includes relation-algebraic concepts such as subidentities, vectors and a domain operation as well as various notions associated to functions. Relation algebras are also expanded by a reflexive transitive closure operation, and they are linked with Kleene algebras and models of binary relations and Boolean matrices. notify = g.struth@sheffield.ac.uk, tjark.weber@it.uu.se [PSemigroupsConvolution] title = Partial Semigroups and Convolution Algebras author = Brijesh Dongol , Victor B. F. Gomes , Ian J. Hayes , Georg Struth topic = Mathematics/Algebra date = 2017-06-13 notify = g.struth@sheffield.ac.uk, victor.gomes@cl.cam.ac.uk abstract = Partial Semigroups are relevant to the foundations of quantum mechanics and combinatorics as well as to interval and separation logics. Convolution algebras can be understood either as algebras of generalised binary modalities over ternary Kripke frames, in particular over partial semigroups, or as algebras of quantale-valued functions which are equipped with a convolution-style operation of multiplication that is parametrised by a ternary relation. Convolution algebras provide algebraic semantics for various substructural logics, including categorial, relevance and linear logics, for separation logic and for interval logics; they cover quantitative and qualitative applications. 
These mathematical components for partial semigroups and convolution algebras provide uniform foundations from which models of computation based on relations, program traces or pomsets, and verification components for separation or interval temporal logics can be built with little effort. [Secondary_Sylow] title = Secondary Sylow Theorems author = Jakob von Raumer date = 2014-01-28 topic = Mathematics/Algebra abstract = These theories extend the existing proof of the first Sylow theorem (written by Florian Kammueller and L. C. Paulson) by what are often called the second, third and fourth Sylow theorems. These theorems state propositions about the number of Sylow p-subgroups of a group and the fact that they are conjugate to each other. The proofs make use of an implementation of group actions and their properties. notify = psxjv4@nottingham.ac.uk [Jordan_Hoelder] title = The Jordan-Hölder Theorem author = Jakob von Raumer date = 2014-09-09 topic = Mathematics/Algebra abstract = This submission contains theories that lead to a formalization of the proof of the Jordan-Hölder theorem about composition series of finite groups. The theories formalize the notions of isomorphism classes of groups, simple groups, normal series, composition series, maximal normal subgroups. Furthermore, they provide proofs of the second isomorphism theorem for groups, the characterization theorem for maximal normal subgroups as well as many useful lemmas about normal subgroups and factor groups. The proof is inspired by course notes of Stuart Rankin. notify = psxjv4@nottingham.ac.uk [Cayley_Hamilton] title = The Cayley-Hamilton Theorem author = Stephan Adelsberger , Stefan Hetzl , Florian Pollak date = 2014-09-15 topic = Mathematics/Algebra abstract = This document contains a proof of the Cayley-Hamilton theorem based on the development of matrices in HOL/Multivariate Analysis. 
notify = stvienna@gmail.com [Probabilistic_Noninterference] title = Probabilistic Noninterference author = Andrei Popescu , Johannes Hölzl date = 2014-03-11 topic = Computer science/Security abstract = We formalize probabilistic noninterference for a multi-threaded language with uniform scheduling, where probabilistic behaviour comes from both the scheduler and the individual threads. We define notions of probabilistic noninterference in two variants: resumption-based and trace-based. For the resumption-based notions, we prove compositionality w.r.t. the language constructs and establish sound type-system-like syntactic criteria. This is a formalization of the mathematical development presented at CPP 2013 and CALCO 2013. It is the probabilistic variant of the Possibilistic Noninterference AFP entry. notify = hoelzl@in.tum.de [HyperCTL] title = A shallow embedding of HyperCTL* author = Markus N. Rabe , Peter Lammich , Andrei Popescu date = 2014-04-16 topic = Computer science/Security, Logic/General logic/Temporal logic abstract = We formalize HyperCTL*, a temporal logic for expressing security properties. We first define a shallow embedding of HyperCTL*, within which we prove inductive and coinductive rules for the operators. Then we show that a HyperCTL* formula captures Goguen-Meseguer noninterference, a landmark information flow property. We also define a deep embedding and connect it to the shallow embedding by a denotational semantics, for which we prove sanity w.r.t. dependence on the free variables. Finally, we show that under some finiteness assumptions about the model, noninterference is given by a (finitary) syntactic formula. 
notify = uuomul@yahoo.com [Bounded_Deducibility_Security] title = Bounded-Deducibility Security author = Andrei Popescu , Peter Lammich , Thomas Bauereiss date = 2014-04-22 topic = Computer science/Security abstract = This is a formalization of bounded-deducibility security (BD security), a flexible notion of information-flow security applicable to arbitrary transition systems. It generalizes Sutherland's classic notion of nondeducibility by factoring in declassification bounds and triggers. Whereas nondeducibility states that, in a system, information cannot flow between specified sources and sinks, BD security indicates upper bounds for the flow and triggers under which these upper bounds are no longer guaranteed. notify = uuomul@yahoo.com, lammich@in.tum.de, thomas@bauereiss.name extra-history = Change history: [2021-08-12]: Generalised BD Security from I/O automata to nondeterministic transition systems, with the former retained as an instance of the latter (renaming locale BD_Security to BD_Security_IO). Generalise unwinding conditions to allow making more than one transition at a time when constructing alternative traces. Add results about the expressivity of declassification triggers vs. bounds, due to Thomas Bauereiss (added as author). [Network_Security_Policy_Verification] title = Network Security Policy Verification author = Cornelius Diekmann date = 2014-07-04 topic = Computer science/Security abstract = We present a unified theory for verifying network security policies. A security policy is represented as a directed graph. To check high-level security goals, security invariants over the policy are expressed. We cover monotonic security invariants, i.e. prohibiting more does not harm security. We provide the following contributions for the security invariant theory.
  • Secure auto-completion of scenario-specific knowledge, which eases usability.
  • Security violations can be repaired by tightening the policy iff the security invariants hold for the deny-all policy.
  • An algorithm to compute a security policy.
  • A formalization of stateful connection semantics in network security mechanisms.
  • An algorithm to compute a secure stateful implementation of a policy.
  • An executable implementation of all the theory.
  • Examples, ranging from an aircraft cabin data network to the analysis of a large real-world firewall.
  • More examples: A fully automated translation of high-level security goals to both firewall and SDN configurations (see Examples/Distributed_WebApp.thy).
For a detailed description, see extra-history = Change history: [2015-04-14]: Added Distributed WebApp example and improved graphviz visualization (revision 4dde08ca2ab8)
notify = diekmann@net.in.tum.de [Abstract_Completeness] title = Abstract Completeness author = Jasmin Christian Blanchette , Andrei Popescu , Dmitriy Traytel date = 2014-04-16 topic = Logic/Proof theory abstract = A formalization of an abstract property of possibly infinite derivation trees (modeled by a codatatype), representing the core of a proof (in Beth/Hintikka style) of the first-order logic completeness theorem, independent of the concrete syntax or inference rules. This work is described in detail in the IJCAR 2014 publication by the authors. The abstract proof can be instantiated for a wide range of Gentzen and tableau systems as well as various flavors of FOL---e.g., with or without predicates, equality, or sorts. Here, we give only a toy example instantiation with classical propositional logic. A more serious instance---many-sorted FOL with equality---is described elsewhere [Blanchette and Popescu, FroCoS 2013]. notify = traytel@in.tum.de [Pop_Refinement] title = Pop-Refinement author = Alessandro Coglio date = 2014-07-03 topic = Computer science/Programming languages/Misc abstract = Pop-refinement is an approach to stepwise refinement, carried out inside an interactive theorem prover by constructing a monotonically decreasing sequence of predicates over deeply embedded target programs. The sequence starts with a predicate that characterizes the possible implementations, and ends with a predicate that characterizes a unique program in explicit syntactic form. Pop-refinement enables more requirements (e.g. program-level and non-functional) to be captured in the initial specification and preserved through refinement. Security requirements expressed as hyperproperties (i.e. predicates over sets of traces) are always preserved by pop-refinement, unlike the popular notion of refinement as trace set inclusion. Two simple examples in Isabelle/HOL are presented, featuring program-level requirements, non-functional requirements, and hyperproperties. 
notify = coglio@kestrel.edu [VectorSpace] title = Vector Spaces author = Holden Lee date = 2014-08-29 topic = Mathematics/Algebra abstract = This formalisation of basic linear algebra is based completely on locales, building off HOL-Algebra. It includes basic definitions: linear combinations, span, linear independence; linear transformations; interpretation of function spaces as vector spaces; the direct sum of vector spaces, sum of subspaces; the replacement theorem; existence of bases in finite-dimensional vector spaces; definition of dimension; the rank-nullity theorem. Some concepts are actually defined and proved for modules as they also apply there. Infinite-dimensional vector spaces are supported, but dimension is only supported for finite-dimensional vector spaces. The proofs are standard; the proofs of the replacement theorem and rank-nullity theorem roughly follow the presentation in Linear Algebra by Friedberg, Insel, and Spence. The rank-nullity theorem generalises the existing development in the Archive of Formal Proofs (originally using type classes, now using a mix of type classes and locales). notify = holdenl@princeton.edu [Special_Function_Bounds] title = Real-Valued Special Functions: Upper and Lower Bounds author = Lawrence C. Paulson date = 2014-08-29 topic = Mathematics/Analysis abstract = This development proves upper and lower bounds for several familiar real-valued functions. For sin, cos, exp and sqrt, it defines and verifies infinite families of upper and lower bounds, mostly based on Taylor series expansions. For arctan, ln and exp, it verifies a finite collection of upper and lower bounds, originally obtained from the functions' continued fraction expansions using the computer algebra system Maple. A common theme in these proofs is to take the difference between a function and its approximation, which should be zero at one point, and then consider the sign of the derivative. 
The immediate purpose of this development is to verify axioms used by MetiTarski, an automatic theorem prover for real-valued special functions. Crucial to MetiTarski's operation is the provision of upper and lower bounds for each function of interest. notify = lp15@cam.ac.uk [Landau_Symbols] title = Landau Symbols author = Manuel Eberl date = 2015-07-14 topic = Mathematics/Analysis abstract = This entry provides Landau symbols to describe and reason about the asymptotic growth of functions for sufficiently large inputs. A number of simplification procedures are provided for additional convenience: cancelling of dominated terms in sums under a Landau symbol, cancelling of common factors in products, and a decision procedure for Landau expressions containing products of powers of functions like x, ln(x), ln(ln(x)) etc. notify = manuel@pruvisto.org [Error_Function] title = The Error Function author = Manuel Eberl topic = Mathematics/Analysis date = 2018-02-06 notify = manuel@pruvisto.org abstract =

This entry provides the definitions and basic properties of the complex and real error function erf and the complementary error function erfc. Additionally, it gives their full asymptotic expansions.

[Akra_Bazzi] title = The Akra-Bazzi theorem and the Master theorem author = Manuel Eberl date = 2015-07-14 topic = Mathematics/Analysis abstract = This article contains a formalisation of the Akra-Bazzi method based on a proof by Leighton. It is a generalisation of the well-known Master Theorem for analysing the complexity of Divide & Conquer algorithms. We also include a generalised version of the Master theorem based on the Akra-Bazzi theorem, which is easier to apply than the Akra-Bazzi theorem itself.

Some proof methods that facilitate applying the Master theorem are also included. For a more detailed explanation of the formalisation and the proof methods, see the accompanying paper (publication forthcoming). notify = manuel@pruvisto.org [Dirichlet_Series] title = Dirichlet Series author = Manuel Eberl topic = Mathematics/Number theory date = 2017-10-12 notify = manuel@pruvisto.org abstract = This entry is a formalisation of much of Chapters 2, 3, and 11 of Apostol's “Introduction to Analytic Number Theory”. This includes:

  • Definitions and basic properties for several number-theoretic functions (Euler's φ, Möbius μ, Liouville's λ, the divisor function σ, von Mangoldt's Λ)
  • Executable code for most of these functions, the most efficient implementations using the factoring algorithm by Thiemann et al.
  • Dirichlet products and formal Dirichlet series
  • Analytic results connecting convergent formal Dirichlet series to complex functions
  • Euler product expansions
  • Asymptotic estimates of number-theoretic functions including the density of squarefree integers and the average number of divisors of a natural number
These results are useful as a basis for developing more number-theoretic results, such as the Prime Number Theorem. [Gauss_Sums] title = Gauss Sums and the Pólya–Vinogradov Inequality author = Rodrigo Raya , Manuel Eberl topic = Mathematics/Number theory date = 2019-12-10 notify = manuel.eberl@tum.de abstract =

This article provides a full formalisation of Chapter 8 of Apostol's Introduction to Analytic Number Theory. Subjects that are covered are:

  • periodic arithmetic functions and their finite Fourier series
  • (generalised) Ramanujan sums
  • Gauss sums and separable characters
  • induced moduli and primitive characters
  • the Pólya–Vinogradov inequality
[Zeta_Function] title = The Hurwitz and Riemann ζ Functions author = Manuel Eberl topic = Mathematics/Number theory, Mathematics/Analysis date = 2017-10-12 notify = manuel@pruvisto.org abstract =

This entry builds upon the results about formal and analytic Dirichlet series to define the Hurwitz ζ function ζ(a,s) and, based on that, the Riemann ζ function ζ(s). This is done by first defining them for ℜ(s) > 1 and then successively extending the domain to the left using the Euler–MacLaurin formula.

Apart from the most basic facts such as analyticity, the following results are provided:

  • the Stieltjes constants and the Laurent expansion of ζ(s) at s = 1
  • the non-vanishing of ζ(s) for ℜ(s) ≥ 1
  • the relationship between ζ(a,s) and Γ
  • the special values at negative integers and positive even integers
  • Hurwitz's formula and the reflection formula for ζ(s)
  • the Hadjicostas–Chapman formula

The entry also contains Euler's analytic proof of the infinitude of primes, based on the fact that ζ(s) has a pole at s = 1.

[Linear_Recurrences] title = Linear Recurrences author = Manuel Eberl topic = Mathematics/Analysis date = 2017-10-12 notify = manuel@pruvisto.org abstract =

Linear recurrences with constant coefficients are an interesting class of recurrence equations that can be solved explicitly. The most famous example is certainly the Fibonacci numbers, with the equation f(n) = f(n-1) + f(n-2) and the quite non-obvious closed form (φⁿ - (-φ)⁻ⁿ) / √5, where φ is the golden ratio.

In this work, I build on existing tools in Isabelle – such as formal power series and polynomial factorisation algorithms – to develop a theory of these recurrences and derive a fully executable solver for them that can be exported to programming languages like Haskell.

[Van_der_Waerden] title = Van der Waerden's Theorem author = Katharina Kreuzer , Manuel Eberl topic = Mathematics/Combinatorics date = 2021-06-22 notify = kreuzerk@in.tum.de, manuel@pruvisto.org abstract = This article formalises the proof of Van der Waerden's Theorem from Ramsey theory. Van der Waerden's Theorem states that for integers $k$ and $l$ there exists a number $N$ which guarantees that if an integer interval of length at least $N$ is coloured with $k$ colours, there will always be an arithmetic progression of length $l$ of the same colour in said interval. The proof goes along the lines of \cite{Swan}. The smallest number $N_{k,l}$ fulfilling Van der Waerden's Theorem is then called the Van der Waerden Number. Finding the Van der Waerden Number is still an open problem for most values of $k$ and $l$. [Lambert_W] title = The Lambert W Function on the Reals author = Manuel Eberl topic = Mathematics/Analysis date = 2020-04-24 notify = manuel@pruvisto.org abstract =

The Lambert W function is a multi-valued function defined as the inverse function of x ↦ x e^x. Besides numerous applications in combinatorics, physics, and engineering, it also frequently occurs when solving equations containing both e^x and x, or both x and log x.

This article provides a definition of the two real-valued branches W₀(x) and W₋₁(x) and proves various properties such as basic identities and inequalities, monotonicity, differentiability, asymptotic expansions, and the Maclaurin series of W₀(x) at x = 0.
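The defining identity W₀(x) e^(W₀(x)) = x can be checked numerically; the following Python sketch (a hypothetical helper, not the article's definition) computes W₀ on the non-negative reals by Newton iteration:

```python
from math import exp, log

def lambert_w0(x, tol=1e-12):
    """Principal branch W0 for x >= 0, via Newton iteration on w*e^w - x.
    A numerical sketch; the article defines W0 on all of [-1/e, infinity)."""
    w = log(1 + x)  # reasonable starting point for x >= 0
    for _ in range(100):
        ew = exp(w)
        step = (w * ew - x) / (ew * (w + 1))
        w -= step
        if abs(step) < tol:
            break
    return w

# Defining identity: W0(x) * e^(W0(x)) = x
for x in [0.0, 0.5, 1.0, 10.0]:
    w = lambert_w0(x)
    assert abs(w * exp(w) - x) < 1e-9
```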

[Cartan_FP] title = The Cartan Fixed Point Theorems author = Lawrence C. Paulson date = 2016-03-08 topic = Mathematics/Analysis abstract = The Cartan fixed point theorems concern the group of holomorphic automorphisms on a connected open set of ℂ^n. Ciolli et al. have formalised the one-dimensional case of these theorems in HOL Light. This entry contains their proofs, ported to Isabelle/HOL. Thus it addresses the authors' remark that "it would be important to write a formal proof in a language that can be read by both humans and machines". notify = lp15@cam.ac.uk [Gauss_Jordan] title = Gauss-Jordan Algorithm and Its Applications author = Jose Divasón , Jesús Aransay topic = Computer science/Algorithms/Mathematical date = 2014-09-03 abstract = The Gauss-Jordan algorithm transforms any matrix over a field, by means of elementary row operations, into a matrix in reduced row echelon form. The formalization is based on the Rank Nullity Theorem entry of the AFP and on the HOL-Multivariate-Analysis session of Isabelle, where matrices are represented as functions over finite types. We have set up the code generator to make this representation executable. In order to improve the performance, a refinement to immutable arrays has been carried out. We have formalized some of the applications of the Gauss-Jordan algorithm. Thanks to this development, the following facts can be computed over matrices whose elements belong to a field: ranks, determinants, inverses, bases and dimensions, and solutions of systems of linear equations. Code can be exported to SML and Haskell. notify = jose.divasonm@unirioja.es, jesus-maria.aransay@unirioja.es [Echelon_Form] title = Echelon Form author = Jose Divasón , Jesús Aransay topic = Computer science/Algorithms/Mathematical, Mathematics/Algebra date = 2015-02-12 abstract = We formalize an algorithm to compute the Echelon Form of a matrix.
We have proved its existence over Bézout domains and made it executable over Euclidean domains, such as the integer ring and the univariate polynomials over a field. This allows us to compute determinants, inverses and characteristic polynomials of matrices. The work is based on the HOL-Multivariate Analysis library, and on both the Gauss-Jordan and Cayley-Hamilton AFP entries. As a by-product, some algebraic structures have been implemented (principal ideal domains, Bézout domains...). The algorithm has been refined to immutable arrays and code can be generated to functional languages as well. notify = jose.divasonm@unirioja.es, jesus-maria.aransay@unirioja.es [QR_Decomposition] title = QR Decomposition author = Jose Divasón , Jesús Aransay topic = Computer science/Algorithms/Mathematical, Mathematics/Algebra date = 2015-02-12 abstract = QR decomposition is an algorithm to decompose a real matrix A into the product of two other matrices Q and R, where Q is orthogonal and R is invertible and upper triangular. The algorithm is useful for the least squares problem; i.e., the computation of the best approximation of an unsolvable system of linear equations. As a side-product, the Gram-Schmidt process has also been formalized. A refinement using immutable arrays is presented as well. The development relies, among others, on the AFP entry "Implementing field extensions of the form Q[sqrt(b)]" by René Thiemann, which allows execution of the algorithm using symbolic computations. Verified code can be generated and executed using floats as well. extra-history = Change history: [2015-06-18]: The second part of the Fundamental Theorem of Linear Algebra has been generalized to more general inner product spaces. 
notify = jose.divasonm@unirioja.es, jesus-maria.aransay@unirioja.es [Hermite] title = Hermite Normal Form author = Jose Divasón , Jesús Aransay topic = Computer science/Algorithms/Mathematical, Mathematics/Algebra date = 2015-07-07 abstract = Hermite Normal Form is a canonical matrix analogue of Reduced Echelon Form, but involving matrices over more general rings. In this work we formalise an algorithm to compute the Hermite Normal Form of a matrix by means of elementary row operations, taking advantage of the Echelon Form AFP entry. We have proven the correctness of such an algorithm and refined it to immutable arrays. Furthermore, we have also formalised the uniqueness of the Hermite Normal Form of a matrix. Code can be exported and some examples of execution involving integer matrices and polynomial matrices are presented as well. notify = jose.divasonm@unirioja.es, jesus-maria.aransay@unirioja.es [Imperative_Insertion_Sort] title = Imperative Insertion Sort author = Christian Sternagel date = 2014-09-25 topic = Computer science/Algorithms abstract = The insertion sort algorithm of Cormen et al. (Introduction to Algorithms) is expressed in Imperative HOL and proved to be correct and terminating. For this purpose we also provide a theory about imperative loop constructs with accompanying induction/invariant rules for proving partial and total correctness. Furthermore, the formalized algorithm is fit for code generation. notify = lp15@cam.ac.uk [Stream_Fusion_Code] title = Stream Fusion in HOL with Code Generation author = Andreas Lochbihler , Alexandra Maximova date = 2014-10-10 topic = Computer science/Functional programming abstract = Stream Fusion is a system for removing intermediate list data structures from functional programs, in particular Haskell. This entry adapts stream fusion to Isabelle/HOL and its code generator. 
We define stream types for finite and possibly infinite lists and stream versions for most of the fusible list functions in the theories List and Coinductive_List, and prove them correct with respect to the conversion functions between lists and streams. The Stream Fusion transformation itself is implemented as a simproc in the preprocessor of the code generator. [Brian Huffman's AFP entry formalises stream fusion in HOLCF for the domain of lazy lists to prove the GHC compiler rewrite rules correct. In contrast, this work enables Isabelle's code generator to perform stream fusion itself. To that end, it covers both finite and coinductive lists from the HOL library and the Coinductive entry. The fusible list functions require specification and proof principles different from Huffman's.] notify = mail@andreas-lochbihler.de [Case_Labeling] title = Generating Cases from Labeled Subgoals author = Lars Noschinski date = 2015-07-21 topic = Tools, Computer science/Programming languages/Misc abstract = Isabelle/Isar provides named cases to structure proofs. This article contains an implementation of a proof method casify, which can be used to easily extend proof tools with support for named cases. Such a proof tool must produce labeled subgoals, which are then interpreted by casify.

As examples, this work contains verification condition generators producing named cases for three languages: the Hoare language from HOL/Library, a monadic language for computations with failure (inspired by the AutoCorres tool), and a language of conditional expressions. These VCGs are demonstrated by a number of example programs. notify = noschinl@gmail.com [DPT-SAT-Solver] title = A Fast SAT Solver for Isabelle in Standard ML topic = Tools author = Armin Heller <> date = 2009-12-09 abstract = This contribution contains a fast SAT solver for Isabelle written in Standard ML. By loading the theory DPT_SAT_Solver, the SAT solver installs itself (under the name ``dptsat'') and certain Isabelle tools like Refute will start using it automatically. This is a port of the DPT (Decision Procedure Toolkit) SAT Solver written in OCaml. notify = jasmin.blanchette@gmail.com [Rep_Fin_Groups] title = Representations of Finite Groups topic = Mathematics/Algebra author = Jeremy Sylvestre date = 2015-08-12 abstract = We provide a formal framework for the theory of representations of finite groups, as modules over the group ring. Along the way, we develop the general theory of groups (relying on the group_add class for the basics), modules, and vector spaces, to the extent required for the theory of group representations. We then provide formal proofs of several important introductory theorems in the subject, including Maschke's theorem, Schur's lemma, and Frobenius reciprocity. We also prove that every irreducible representation is isomorphic to a submodule of the group ring, leading to the fact that for a finite group there are only finitely many isomorphism classes of irreducible representations. In all of this, no restriction is made on the characteristic of the ring or field of scalars until the definition of a group representation, and then the only restriction made is that the characteristic must not divide the order of the group.
notify = jsylvest@ualberta.ca [Noninterference_Inductive_Unwinding] title = The Inductive Unwinding Theorem for CSP Noninterference Security topic = Computer science/Security author = Pasquale Noce date = 2015-08-18 abstract =

The necessary and sufficient condition for CSP noninterference security stated by the Ipurge Unwinding Theorem is expressed in terms of a pair of event lists varying over the set of process traces. This does not render it suitable for the subsequent application of rule induction in the case of a process defined inductively, since rule induction may rather be applied to a single variable ranging over an inductively defined set.

Starting from the Ipurge Unwinding Theorem, this paper derives a necessary and sufficient condition for CSP noninterference security that involves a single event list varying over the set of process traces, and is thus suitable for rule induction; hence its name, Inductive Unwinding Theorem. Similarly to the Ipurge Unwinding Theorem, the new theorem only requires one to consider individual accepted and refused events for each process trace, and applies to the general case of a possibly intransitive noninterference policy. Specific variants of this theorem are additionally proven for deterministic processes and trace set processes.

notify = pasquale.noce.lavoro@gmail.com [Password_Authentication_Protocol] title = Verification of a Diffie-Hellman Password-based Authentication Protocol by Extending the Inductive Method author = Pasquale Noce topic = Computer science/Security date = 2017-01-03 notify = pasquale.noce.lavoro@gmail.com abstract = This paper constructs a formal model of a Diffie-Hellman password-based authentication protocol between a user and a smart card, and proves its security. The protocol provides for the dispatch of the user's password to the smart card on a secure messaging channel established by means of Password Authenticated Connection Establishment (PACE), where the mapping method being used is Chip Authentication Mapping. By applying and suitably extending Paulson's Inductive Method, this paper proves that the protocol establishes trustworthy secure messaging channels, preserves the secrecy of users' passwords, and provides an effective mutual authentication service. What is more, these security properties turn out to hold independently of the secrecy of the PACE authentication key. [Jordan_Normal_Form] title = Matrices, Jordan Normal Forms, and Spectral Radius Theory topic = Mathematics/Algebra author = René Thiemann , Akihisa Yamada contributors = Alexander Bentkamp date = 2015-08-21 abstract =

Matrix interpretations are useful as measure functions in termination proving. In order to use these interpretations also for complexity analysis, the growth rate of matrix powers has to be examined. Here, we formalized a central result of spectral radius theory, namely that the growth rate is polynomially bounded if and only if the spectral radius of a matrix is at most one.
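The dichotomy can be illustrated numerically (this small Python sketch with exact integer arithmetic is only an illustration, not the formalized statement): a Jordan block with eigenvalue 1 has spectral radius 1 and polynomially growing powers, while spectral radius 2 forces exponential growth.

```python
def mat_mul(A, B):
    """Multiply two square matrices given as lists of rows."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def mat_pow(A, n):
    """n-th power of a square matrix, starting from the identity."""
    R = [[1 if i == j else 0 for j in range(len(A))] for i in range(len(A))]
    for _ in range(n):
        R = mat_mul(R, A)
    return R

# A Jordan block with eigenvalue 1 (spectral radius 1): the entries of A^n
# grow only polynomially -- here linearly, since A^n = [[1, n], [0, 1]].
J = [[1, 1], [0, 1]]
assert mat_pow(J, 50) == [[1, 50], [0, 1]]

# Spectral radius 2: the entries of A^n grow exponentially (2^n).
D = [[2, 0], [0, 2]]
assert mat_pow(D, 10)[0][0] == 2 ** 10
```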

To formally prove this result we first studied the growth rates of matrices in Jordan normal form, and proved that every complex matrix has a Jordan normal form by means of a constructive proof via Schur decomposition.

The whole development is based on a new abstract type for matrices, which is also executable by a suitable setup of the code generator. It completely subsumes our former AFP-entry on executable matrices, and its main advantage is its close connection to the HMA-representation which allowed us to easily adapt existing proofs on determinants.

All the results have been applied to improve CeTA, our certifier to validate termination and complexity proof certificates.

extra-history = Change history: [2016-01-07]: Added Schur-decomposition, Gram-Schmidt orthogonalization, uniqueness of Jordan normal forms
[2018-04-17]: Integrated lemmas from deep-learning AFP-entry of Alexander Bentkamp notify = rene.thiemann@uibk.ac.at, ayamada@trs.cm.is.nagoya-u.ac.jp [LTL_to_DRA] title = Converting Linear Temporal Logic to Deterministic (Generalized) Rabin Automata topic = Computer science/Automata and formal languages author = Salomon Sickert date = 2015-09-04 abstract = Recently, Javier Esparza and Jan Kretinsky proposed a new method directly translating linear temporal logic (LTL) formulas to deterministic (generalized) Rabin automata. Compared to the existing approaches of constructing a non-deterministic Buechi-automaton in the first step and then applying a determinization procedure (e.g. some variant of Safra's construction) in a second step, this new approach preserves a relation between the formula and the states of the resulting automaton. While the old approach produced a monolithic structure, the new method is compositional. Furthermore, in some cases the resulting automata are much smaller than the automata generated by existing approaches. In order to ensure the correctness of the construction, this entry contains a complete formalisation and verification of the translation. Furthermore, executable code is generated from this basis. extra-history = Change history: [2015-09-23]: Enable code export for the eager unfolding optimisation and reduce running time of the generated tool. Moreover, add support for the mlton SML compiler.
[2016-03-24]: Make use of the LTL entry and include the simplifier. notify = sickert@in.tum.de [Timed_Automata] title = Timed Automata author = Simon Wimmer date = 2016-03-08 topic = Computer science/Automata and formal languages abstract = Timed automata are a widely used formalism for modeling real-time systems, which is employed in a class of successful model checkers such as UPPAAL [LPY97], HyTech [HHWt97] or Kronos [Yov97]. This work formalizes the theory for the subclass of diagonal-free timed automata, which is sufficient to model many interesting problems. We first define the basic concepts and semantics of diagonal-free timed automata. Based on this, we prove two types of decidability results for the language emptiness problem. The first is the classic result of Alur and Dill [AD90, AD94], which uses a finite partitioning of the state space into so-called `regions`. Our second result focuses on an approach based on `Difference Bound Matrices (DBMs)`, which is practically used by model checkers. We prove the correctness of the basic forward analysis operations on DBMs. One of these operations is the Floyd-Warshall algorithm for the all-pairs shortest paths problem. To obtain a finite search space, a widening operation has to be used for this kind of analysis. We use Patricia Bouyer's [Bou04] approach to prove that this widening operation is correct in the sense that DBM-based forward analysis in combination with the widening operation also decides language emptiness. The interesting property of this proof is that the first decidability result is reused to obtain the second one. notify = wimmers@in.tum.de [Parity_Game] title = Positional Determinacy of Parity Games author = Christoph Dittmann date = 2015-11-02 topic = Mathematics/Games and economics, Mathematics/Graph theory abstract = We present a formalization of parity games (a two-player game on directed graphs) and a proof of their positional determinacy in Isabelle/HOL. 
This proof works for both finite and infinite games. notify = [Ergodic_Theory] title = Ergodic Theory author = Sebastien Gouezel contributors = Manuel Eberl date = 2015-12-01 topic = Mathematics/Probability theory abstract = Ergodic theory is the branch of mathematics that studies the behaviour of measure preserving transformations, in finite or infinite measure. It interacts both with probability theory (mainly through measure theory) and with geometry, as many interesting examples are of geometric origin. We implement the first definitions and theorems of ergodic theory, including notably the Poincaré recurrence theorem for finite measure preserving systems (together with the notion of conservativity in general), induced maps, Kac's theorem, Birkhoff theorem (arguably the most important theorem in ergodic theory), and variations around it such as conservativity of the corresponding skew product, or Atkinson lemma. notify = sebastien.gouezel@univ-rennes1.fr, hoelzl@in.tum.de [Latin_Square] title = Latin Square author = Alexander Bentkamp date = 2015-12-02 topic = Mathematics/Combinatorics abstract = A Latin Square is an n × n table filled with integers from 1 to n where each number appears exactly once in each row and each column. A Latin Rectangle is a partially filled n × n table with r filled rows and n-r empty rows, such that each number appears at most once in each row and each column. The main result of this theory is that any Latin Rectangle can be completed to a Latin Square. notify = bentkamp@gmail.com [Deep_Learning] title = Expressiveness of Deep Learning author = Alexander Bentkamp date = 2016-11-10 topic = Computer science/Machine learning, Mathematics/Analysis abstract = Deep learning has had a profound impact on computer science in recent years, with applications to search engines, image recognition and language processing, bioinformatics, and more. Recently, Cohen et al.
provided theoretical evidence for the superiority of deep learning over shallow learning. This formalization of their work simplifies and generalizes the original proof, while working around the limitations of the Isabelle type system. To support the formalization, I developed reusable libraries of formalized mathematics, including results about the matrix rank, the Lebesgue measure, and multivariate polynomials, as well as a library for tensor analysis. notify = bentkamp@gmail.com [Inductive_Inference] title = Some classical results in inductive inference of recursive functions author = Frank J. Balbach topic = Logic/Computability, Computer science/Machine learning date = 2020-08-31 notify = frank-balbach@gmx.de abstract =

This entry formalizes some classical concepts and results from inductive inference of recursive functions. In the basic setting a partial recursive function ("strategy") must identify ("learn") all functions from a set ("class") of recursive functions. To that end the strategy receives more and more values $f(0), f(1), f(2), \ldots$ of some function $f$ from the given class and in turn outputs descriptions of partial recursive functions, for example, Gödel numbers. The strategy is considered successful if the sequence of outputs ("hypotheses") converges to a description of $f$. A class of functions learnable in this sense is called "learnable in the limit". The set of all these classes is denoted by LIM.
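The basic setting can be made concrete with a toy example (a Python sketch with invented helper names, standing far outside the formalization, where hypotheses are partial recursive functions given by Gödel numbers): the class of constant functions is trivially learnable in the limit, since the strategy can commit after seeing f(0).

```python
def constant_fn(c):
    """A member of the class: the total function x -> c."""
    return lambda x: c

def strategy(prefix):
    """Given the values f(0), ..., f(n), hypothesize 'the constant f(0)'.
    Here hypotheses are simply the constants themselves, standing in for
    Goedel numbers of programs."""
    return prefix[0]

def learns_in_the_limit(strategy, f, steps=10):
    """The sequence of hypotheses must converge to a description of f."""
    hypotheses = [strategy([f(i) for i in range(n + 1)]) for n in range(steps)]
    final = hypotheses[-1]
    converged = all(h == final for h in hypotheses[2:])
    correct = all(constant_fn(final)(x) == f(x) for x in range(steps))
    return converged and correct

# The class of constant functions is learnable in the limit (with a single,
# immediately correct guess -- it is in fact even finitely learnable).
assert learns_in_the_limit(strategy, constant_fn(7))
```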

Other types of inference considered are finite learning (FIN), behaviorally correct learning in the limit (BC), and some variants of LIM with restrictions on the hypotheses: total learning (TOTAL), consistent learning (CONS), and class-preserving learning (CP). The main results formalized are the proper inclusions $\mathrm{FIN} \subset \mathrm{CP} \subset \mathrm{TOTAL} \subset \mathrm{CONS} \subset \mathrm{LIM} \subset \mathrm{BC} \subset 2^{\mathcal{R}}$, where $\mathcal{R}$ is the set of all total recursive functions. Further results show that for all these inference types except CONS, strategies can be assumed to be total recursive functions; that all inference types but CP are closed under the subset relation between classes; and that no inference type is closed under the union of classes.

The above is based on a formalization of recursive functions heavily inspired by the Universal Turing Machine entry by Xu et al., but different in that it models partial functions with codomain nat option. The formalization contains a construction of a universal partial recursive function, without resorting to Turing machines, introduces decidability and recursive enumerability, and proves some standard results: existence of a Kleene normal form, the s-m-n theorem, Rice's theorem, and assorted fixed-point theorems (recursion theorems) by Kleene, Rogers, and Smullyan.

[Applicative_Lifting] title = Applicative Lifting author = Andreas Lochbihler , Joshua Schneider <> date = 2015-12-22 topic = Computer science/Functional programming abstract = Applicative functors augment computations with effects by lifting function application to types which model the effects. As the structure of the computation cannot depend on the effects, applicative expressions can be analysed statically. This allows us to lift universally quantified equations to the effectful types, as observed by Hinze. Thus, equational reasoning over effectful computations can be reduced to pure types.

This entry provides a package for registering applicative functors and two proof methods for lifting of equations over applicative functors. The first method normalises applicative expressions according to the laws of applicative functors. This way, equations whose two sides contain the same list of variables can be lifted to every applicative functor.

To lift larger classes of equations, the second method exploits a number of additional properties (e.g., commutativity of effects) provided the properties have been declared for the concrete applicative functor at hand upon registration.

We declare several types from the Isabelle library as applicative functors and illustrate the use of the methods with two examples: the lifting of the arithmetic type class hierarchy to streams and the verification of a relabelling function on binary trees. We also formalise and verify the normalisation algorithm used by the first proof method.
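The idea of lifting a universally quantified equation pointwise to streams can be sketched in Python (an informal analogue, not the Isabelle package), using the stream applicative's pure and application operators:

```python
from itertools import islice, count, repeat, tee

def pure(x):
    """The 'pure' of the stream applicative: a constant stream."""
    return repeat(x)

def ap(fs, xs):
    """Apply a stream of functions to a stream of arguments pointwise."""
    return (f(x) for f, x in zip(fs, xs))

# Lift the (universally quantified) equation  x + y = y + x  to streams:
# both sides, interpreted pointwise, must agree.
xs1, xs2 = tee(count(0))      # the stream 0, 1, 2, ...
ys1, ys2 = tee(count(10))     # the stream 10, 11, 12, ...

plus = lambda x: lambda y: x + y
lhs = ap(ap(pure(plus), xs1), ys1)   # xs + ys
rhs = ap(ap(pure(plus), ys2), xs2)   # ys + xs

assert list(islice(lhs, 5)) == list(islice(rhs, 5))  # [10, 12, 14, 16, 18]
```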

extra-history = Change history: [2016-03-03]: added formalisation of lifting with combinators
[2016-06-10]: implemented automatic derivation of lifted combinator reductions; support arbitrary lifted relations using relators; improved compatibility with locale interpretation (revision ec336f354f37)
notify = mail@andreas-lochbihler.de [Stern_Brocot] title = The Stern-Brocot Tree author = Peter Gammie , Andreas Lochbihler date = 2015-12-22 topic = Mathematics/Number theory abstract = The Stern-Brocot tree contains all positive rational numbers exactly once and in their lowest terms. We formalise the Stern-Brocot tree as a coinductive tree using recursive and iterative specifications, which we have proven equivalent, and show that it indeed contains all the numbers as stated. Following Hinze, we prove that the Stern-Brocot tree can be linearised looplessly into Stern's diatomic sequence (also known as Dijkstra's fusc function) and that it is a permutation of the Bird tree.
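The linearisation can be illustrated computationally (a Python sketch, independent of the formalisation): consecutive values of Stern's diatomic sequence enumerate the rationals of the tree without repetition.

```python
from fractions import Fraction
from functools import lru_cache

@lru_cache(maxsize=None)
def fusc(n):
    """Stern's diatomic sequence (Dijkstra's fusc function):
    fusc(2n) = fusc(n), fusc(2n+1) = fusc(n) + fusc(n+1)."""
    if n < 2:
        return n
    return fusc(n // 2) if n % 2 == 0 else fusc(n // 2) + fusc(n // 2 + 1)

# Consecutive values fusc(n) / fusc(n+1) enumerate every positive rational
# exactly once and in lowest terms -- the loopless linearisation of the tree.
rationals = [Fraction(fusc(n), fusc(n + 1)) for n in range(1, 100)]
assert len(set(rationals)) == len(rationals)   # no duplicates
assert Fraction(2, 3) in rationals and Fraction(3, 2) in rationals
```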

The reasoning stays at an abstract level by appealing to the uniqueness of solutions of guarded recursive equations and lifting algebraic laws point-wise to trees and streams using applicative functors.

notify = mail@andreas-lochbihler.de [Algebraic_Numbers] title = Algebraic Numbers in Isabelle/HOL topic = Mathematics/Algebra author = René Thiemann , Akihisa Yamada , Sebastiaan Joosten contributors = Manuel Eberl date = 2015-12-22 abstract = Based on existing libraries for matrices, factorization of rational polynomials, and Sturm's theorem, we formalized algebraic numbers in Isabelle/HOL. Our development serves as an implementation for real and complex numbers, and it makes it possible to compute roots and complete factorizations of real and complex polynomials, provided that all coefficients are rational numbers. Moreover, we provide two implementations to display algebraic numbers: an injective but expensive one, and a faster but approximative one.

To this end, we mechanized several results on resultants, which also required us to prove that polynomials over a unique factorization domain form again a unique factorization domain.
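The key computational content of resultants can be sketched in Python (an illustrative reimplementation over the rationals, not the mechanized development): the resultant is the determinant of the Sylvester matrix, and it vanishes exactly when the two polynomials share a root.

```python
from fractions import Fraction

def sylvester(f, g):
    """Sylvester matrix of f and g, given as coefficient lists
    (highest degree first)."""
    m, n = len(f) - 1, len(g) - 1
    size = m + n
    rows = []
    for i in range(n):      # n shifted copies of f's coefficients
        rows.append([Fraction(0)] * i + [Fraction(c) for c in f]
                    + [Fraction(0)] * (size - m - 1 - i))
    for i in range(m):      # m shifted copies of g's coefficients
        rows.append([Fraction(0)] * i + [Fraction(c) for c in g]
                    + [Fraction(0)] * (size - n - 1 - i))
    return rows

def det(M):
    """Determinant by Gaussian elimination over the rationals."""
    M = [row[:] for row in M]
    d = Fraction(1)
    for i in range(len(M)):
        pivot = next((r for r in range(i, len(M)) if M[r][i] != 0), None)
        if pivot is None:
            return Fraction(0)
        if pivot != i:
            M[i], M[pivot] = M[pivot], M[i]
            d = -d
        d *= M[i][i]
        for r in range(i + 1, len(M)):
            factor = M[r][i] / M[i][i]
            M[r] = [a - factor * b for a, b in zip(M[r], M[i])]
    return d

def resultant(f, g):
    return det(sylvester(f, g))

# x^2 - 1 and x - 1 share the root 1, so the resultant vanishes:
assert resultant([1, 0, -1], [1, -1]) == 0
# x^2 + 1 and x - 1 have no common root:
assert resultant([1, 0, 1], [1, -1]) == 2
```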

extra-history = Change history: [2016-01-29]: Split off Polynomial Interpolation and Polynomial Factorization
[2017-04-16]: Use certified Berlekamp-Zassenhaus factorization, use subresultant algorithm for computing resultants, improved bisection algorithm notify = rene.thiemann@uibk.ac.at, ayamada@trs.cm.is.nagoya-u.ac.jp, sebastiaan.joosten@uibk.ac.at [Polynomial_Interpolation] title = Polynomial Interpolation topic = Mathematics/Algebra author = René Thiemann , Akihisa Yamada date = 2016-01-29 abstract = We formalized three algorithms for polynomial interpolation over arbitrary fields: Lagrange's explicit expression, the recursive algorithm of Neville and Aitken, and the Newton interpolation in combination with an efficient implementation of divided differences. Variants of these algorithms for integer polynomials are also available, where sometimes the interpolation can fail; e.g., there is no linear integer polynomial p such that p(0) = 0 and p(2) = 1. Moreover, for the Newton interpolation for integer polynomials, we proved that all intermediate results that are computed during the algorithm must be integers. This admits an early failure detection in the implementation. Finally, we proved the uniqueness of polynomial interpolation.
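Newton interpolation with divided differences, and the integrality of intermediate results for integer polynomials, can be sketched in Python (an illustrative reimplementation, not the formalized algorithm):

```python
from fractions import Fraction

def divided_differences(xs, ys):
    """Newton interpolation coefficients via divided differences."""
    coeffs = [Fraction(y) for y in ys]
    for j in range(1, len(xs)):
        for i in range(len(xs) - 1, j - 1, -1):
            coeffs[i] = (coeffs[i] - coeffs[i - 1]) / (xs[i] - xs[i - j])
    return coeffs

def newton_eval(coeffs, xs, x):
    """Evaluate the Newton form at x with a Horner-like scheme."""
    acc = coeffs[-1]
    for c, x0 in zip(reversed(coeffs[:-1]), reversed(xs[:-1])):
        acc = acc * (x - x0) + c
    return acc

xs = [0, 1, 2, 3]
ys = [x**3 - x for x in xs]           # values of an integer polynomial
cs = divided_differences(xs, ys)
assert all(c.denominator == 1 for c in cs)   # intermediate results stay integral
assert newton_eval(cs, xs, 5) == 5**3 - 5

# No *integer* polynomial p satisfies p(0) = 0 and p(2) = 1: the first
# non-integral divided difference allows early failure detection.
cs = divided_differences([0, 2], [0, 1])
assert cs[1] == Fraction(1, 2)
```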

The development also contains improved code equations to speed up the division of integers in target languages. notify = rene.thiemann@uibk.ac.at, ayamada@trs.cm.is.nagoya-u.ac.jp [Polynomial_Factorization] title = Polynomial Factorization topic = Mathematics/Algebra author = René Thiemann , Akihisa Yamada date = 2016-01-29 abstract = Based on existing libraries for polynomial interpolation and matrices, we formalized several factorization algorithms for polynomials, including Kronecker's algorithm for integer polynomials, Yun's square-free factorization algorithm for field polynomials, and Berlekamp's algorithm for polynomials over finite fields. By combining the last one with Hensel's lifting, we derive an efficient factorization algorithm for integer polynomials, which is then lifted to rational polynomials by mechanizing Gauss' lemma. Finally, we assembled a combined factorization algorithm for rational polynomials, which combines all the mentioned algorithms and additionally uses the explicit formula for roots of quadratic polynomials and a rational root test.
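The rational root test is the simplest of these ingredients; a Python sketch (an illustrative reimplementation, not the formalized one): any rational root p/q of an integer polynomial, in lowest terms, must have p dividing the constant term and q dividing the leading coefficient.

```python
from fractions import Fraction

def rational_roots(coeffs):
    """All rational roots of an integer polynomial (coefficients listed
    highest degree first), via the rational root test."""
    def divisors(n):
        n = abs(n)
        return [d for d in range(1, n + 1) if n % d == 0]

    lead, const = coeffs[0], coeffs[-1]
    if const == 0:                    # x = 0 is a root; factor it out
        return [Fraction(0)] + rational_roots(coeffs[:-1])

    def eval_poly(x):                 # Horner evaluation
        acc = Fraction(0)
        for c in coeffs:
            acc = acc * x + c
        return acc

    candidates = {Fraction(s * p, q)
                  for p in divisors(const) for q in divisors(lead)
                  for s in (1, -1)}
    return sorted(x for x in candidates if eval_poly(x) == 0)

# 2x^3 - 3x^2 - 3x + 2 = (2x - 1)(x + 1)(x - 2)
assert rational_roots([2, -3, -3, 2]) == [Fraction(-1), Fraction(1, 2), Fraction(2)]
```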

As side products, we developed division algorithms for polynomials over integral domains, as well as primality-testing and prime-factorization algorithms for integers. notify = rene.thiemann@uibk.ac.at, ayamada@trs.cm.is.nagoya-u.ac.jp [Cubic_Quartic_Equations] title = Solving Cubic and Quartic Equations author = René Thiemann topic = Mathematics/Analysis date = 2021-09-03 notify = rene.thiemann@uibk.ac.at abstract =

We formalize Cardano's formula to solve a cubic equation $$ax^3 + bx^2 + cx + d = 0,$$ as well as Ferrari's formula to solve a quartic equation. We further turn both formulas into executable algorithms based on the algebraic number implementation in the AFP. To this end we also slightly extended this library, namely by making the minimal polynomial of an algebraic number executable, and by defining and implementing $n$-th roots of complex numbers.
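A floating-point rendition of Cardano's formula can serve as a sanity check (this Python sketch works numerically over the complex numbers, whereas the entry computes exactly with algebraic numbers):

```python
import cmath

def cardano_roots(a, b, c, d):
    """All three roots of a*x^3 + b*x^2 + c*x + d = 0 via Cardano's formula.
    Numerical sketch only; degenerate cases are handled crudely."""
    # Depress the cubic: substitute x = t - b/(3a), giving t^3 + p*t + q = 0.
    p = (3 * a * c - b * b) / (3 * a * a)
    q = (2 * b**3 - 9 * a * b * c + 27 * a * a * d) / (27 * a**3)
    shift = -b / (3 * a)
    # Cardano: t = u + v with u^3 = -q/2 + sqrt((q/2)^2 + (p/3)^3), v = -p/(3u).
    disc = cmath.sqrt((q / 2) ** 2 + (p / 3) ** 3)
    u = (-q / 2 + disc) ** (1 / 3)
    if abs(u) < 1e-12:                       # avoid u = 0 when possible
        u = (-q / 2 - disc) ** (1 / 3)
    v = -p / (3 * u) if abs(u) > 1e-12 else 0
    omega = cmath.exp(2j * cmath.pi / 3)     # primitive cube root of unity
    return [u * omega**k + v / omega**k + shift for k in range(3)]

# x^3 - 6x^2 + 11x - 6 = (x - 1)(x - 2)(x - 3)
roots = cardano_roots(1, -6, 11, -6)
for target in (1, 2, 3):
    assert any(abs(r - target) < 1e-6 for r in roots)
```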

[Perron_Frobenius] title = Perron-Frobenius Theorem for Spectral Radius Analysis author = Jose Divasón , Ondřej Kunčar , René Thiemann , Akihisa Yamada notify = rene.thiemann@uibk.ac.at date = 2016-05-20 topic = Mathematics/Algebra abstract =

The spectral radius of a matrix A is the maximum norm of all eigenvalues of A. In previous work we already formalized that for a complex matrix A, the values in A^n grow polynomially in n if and only if the spectral radius is at most one. One problem with the above characterization is the determination of all complex eigenvalues. In case A contains only non-negative real values, a simplification is possible with the help of the Perron–Frobenius theorem, which tells us that it suffices to consider only the real eigenvalues of A, i.e., applying Sturm's method can decide the polynomial growth of A^n.

We formalize the Perron–Frobenius theorem based on a proof via Brouwer's fixpoint theorem, which is available in the HOL multivariate analysis (HMA) library. Since the results on the spectral radius are based on matrices in the Jordan normal form (JNF) library, we further develop a connection which allows us to easily transfer theorems between HMA and JNF. With this connection we derive the combined result: if A is a non-negative real matrix, and no real eigenvalue of A is strictly larger than one, then A^n is polynomially bounded in n.

extra-history = Change history: [2017-10-18]: added Perron-Frobenius theorem for irreducible matrices with generalization (revision bda1f1ce8a1c)
[2018-05-17]: prove conjecture of CPP'18 paper: Jordan blocks of spectral radius have maximum size (revision ffdb3794e5d5) [Stochastic_Matrices] title = Stochastic Matrices and the Perron-Frobenius Theorem author = René Thiemann topic = Mathematics/Algebra, Computer science/Automata and formal languages date = 2017-11-22 notify = rene.thiemann@uibk.ac.at abstract = Stochastic matrices are a convenient way to model discrete-time and finite state Markov chains. The Perron–Frobenius theorem tells us something about the existence and uniqueness of non-negative eigenvectors of a stochastic matrix. In this entry, we formalize stochastic matrices, link the formalization to the existing AFP-entry on Markov chains, and apply the Perron–Frobenius theorem to prove that stationary distributions always exist, and they are unique if the stochastic matrix is irreducible. [Formal_SSA] title = Verified Construction of Static Single Assignment Form author = Sebastian Ullrich , Denis Lohner date = 2016-02-05 topic = Computer science/Algorithms, Computer science/Programming languages/Transformations abstract =

We define a functional variant of the static single assignment (SSA) form construction algorithm described by Braun et al., which combines simplicity and efficiency. The definition is based on a general, abstract control flow graph representation using Isabelle locales.

We prove that the algorithm's output is semantically equivalent to the input according to a small-step semantics, and that it is in minimal SSA form for the common special case of reducible inputs. We then show the satisfiability of the locale assumptions by giving instantiations for a simple While language.

Furthermore, we use a generic instantiation based on typedefs in order to extract OCaml code and replace the unverified SSA construction algorithm of the CompCertSSA project with it.

A more detailed description of the verified SSA construction can be found in the paper Verified Construction of Static Single Assignment Form, CC 2016.

notify = denis.lohner@kit.edu [Minimal_SSA] title = Minimal Static Single Assignment Form author = Max Wagner , Denis Lohner topic = Computer science/Programming languages/Transformations date = 2017-01-17 notify = denis.lohner@kit.edu abstract =

This formalization is an extension of "Verified Construction of Static Single Assignment Form". In their work, the authors have shown that Braun et al.'s static single assignment (SSA) construction algorithm produces minimal SSA form for input programs with a reducible control flow graph (CFG). However, Braun et al. also proposed an extension to their algorithm that they claim produces minimal SSA form even for irreducible CFGs.
In this formalization we support that claim by giving a mechanized proof.

As the extension of Braun et al.'s algorithm aims at removing so-called redundant strongly connected components of phi functions, we show that this suffices to guarantee minimality according to Cytron et al.

[PropResPI] title = Propositional Resolution and Prime Implicates Generation author = Nicolas Peltier notify = Nicolas.Peltier@imag.fr date = 2016-03-11 topic = Logic/General logic/Mechanization of proofs abstract = We provide formal proofs in Isabelle/HOL (using mostly structured Isar proofs) of the soundness and completeness of the Resolution rule in propositional logic. The completeness proofs take into account the usual redundancy elimination rules (tautology elimination and subsumption), and several refinements of the Resolution rule are considered: ordered resolution (with selection functions), positive and negative resolution, semantic resolution and unit resolution (the latter refinement is complete only for clause sets that are Horn-renamable). We also define a concrete procedure for computing saturated sets and establish its soundness and completeness. The clause sets are not assumed to be finite, so that the results can be applied to formulas obtained by grounding sets of first-order clauses (however, a total ordering among atoms is assumed to be given). Next, we show that the unrestricted Resolution rule is deductive-complete, in the sense that it is able to generate all (prime) implicates of any set of propositional clauses (i.e., all entailment-minimal, non-valid, clausal consequences of the considered set). The generation of prime implicates is an important problem, with many applications in artificial intelligence and verification (for abductive reasoning, knowledge compilation, diagnosis, debugging, etc.). We also show that implicates can be computed in an incremental way, by fixing an ordering among all the atoms in the considered sets and resolving upon these atoms one by one in the considered order (with no backtracking). This feature is critical for the efficient computation of prime implicates. Building on these results, we provide a procedure for computing such implicates and establish its soundness and completeness.
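The saturation idea behind the entry can be illustrated with a minimal, unverified Python sketch (clauses as frozensets of non-zero integer literals, with negation by sign; all names are illustrative and unrelated to the Isabelle sources):

```python
from itertools import combinations

def resolvents(c1, c2):
    """All resolvents of two clauses (frozensets of non-zero ints; -x negates x)."""
    return [frozenset((c1 - {lit}) | (c2 - {-lit}))
            for lit in c1 if -lit in c2]

def saturate(clauses):
    """Close a finite clause set under resolution.

    Returns True iff the empty clause is derivable, i.e. the set is
    unsatisfiable.  Terminates because only finitely many clauses exist
    over the finitely many atoms of the input.
    """
    clauses = {frozenset(c) for c in clauses}
    if frozenset() in clauses:
        return True
    while True:
        new = set()
        for c1, c2 in combinations(clauses, 2):
            new.update(r for r in resolvents(c1, c2) if r not in clauses)
        if frozenset() in new:
            return True
        if not new:
            return False
        clauses |= new
```

This naive loop performs none of the redundancy elimination (tautology deletion, subsumption) that the formalization covers; it only demonstrates the refutational reading of saturation.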
[SuperCalc] title = A Variant of the Superposition Calculus author = Nicolas Peltier notify = Nicolas.Peltier@imag.fr date = 2016-09-06 topic = Logic/Proof theory abstract = We provide a formalization of a variant of the superposition calculus, together with formal proofs of soundness and refutational completeness (w.r.t. the usual redundancy criteria based on clause ordering). This version of the calculus uses all the standard restrictions of the superposition rules, together with the following refinement, inspired by the basic superposition calculus: each clause is associated with a set of terms which are assumed to be in normal form -- thus any application of the replacement rule on these terms is blocked. The set is initially empty and terms may be added or removed at each inference step. The set of terms that are assumed to be in normal form includes any term introduced by previous unifiers as well as any term occurring in the parent clauses at a position that is smaller (according to some given ordering on positions) than a previously replaced term. The standard superposition calculus corresponds to the case where the set of irreducible terms is always empty. [Nominal2] title = Nominal 2 author = Christian Urban , Stefan Berghofer , Cezary Kaliszyk date = 2013-02-21 topic = Tools abstract =

Dealing with binders, renaming of bound variables, capture-avoiding substitution, etc., is very often a major problem in formal proofs, especially in proofs by structural and rule induction. Nominal Isabelle is designed to make such proofs easy to formalise: it provides an infrastructure for declaring nominal datatypes (that is, alpha-equivalence classes) and for defining functions over them by structural recursion. It also provides induction principles that have Barendregt’s variable convention already built in.

This entry can be used as a more advanced replacement for HOL/Nominal in the Isabelle distribution.

notify = christian.urban@kcl.ac.uk [First_Welfare_Theorem] title = Microeconomics and the First Welfare Theorem author = Julian Parsert , Cezary Kaliszyk topic = Mathematics/Games and economics license = LGPL date = 2017-09-01 notify = julian.parsert@uibk.ac.at, cezary.kaliszyk@uibk.ac.at abstract = Economic activity has always been a fundamental part of society. Due to modern-day politics, economic theory has gained even more influence on our lives. Thus we want models and theories to be as precise as possible. This can be achieved using certification with the help of formal proof technology. Hence we will use Isabelle/HOL to construct two economic models, that of the pure exchange economy and a version of the Arrow-Debreu Model. We will prove that the First Theorem of Welfare Economics holds within both. The theorem is the mathematical formulation of Adam Smith's famous invisible hand and states that a group of self-interested and rational actors will eventually achieve an efficient allocation of goods and services. extra-history = Change history: [2018-06-17]: Added some lemmas and a theory file, also introduced Microeconomics folder.
[Noninterference_Sequential_Composition] title = Conservation of CSP Noninterference Security under Sequential Composition author = Pasquale Noce date = 2016-04-26 topic = Computer science/Security, Computer science/Concurrency/Process calculi abstract =

In his outstanding work on Communicating Sequential Processes, Hoare has defined two fundamental binary operations that allow composing two input processes into another, typically more complex, process: sequential composition and concurrent composition. In particular, the output of the former operation is a process that initially behaves like the first operand, and then behaves like the second operand once the execution of the first one has terminated successfully, provided that it does.

This paper formalizes Hoare's definition of sequential composition and proves, in the general case of a possibly intransitive policy, that CSP noninterference security is conserved under this operation, provided that successful termination cannot be affected by confidential events and cannot occur as an alternative to other events in the traces of the first operand. Both of these assumptions are shown, by means of counterexamples, to be necessary for the theorem to hold.

notify = pasquale.noce.lavoro@gmail.com [Noninterference_Concurrent_Composition] title = Conservation of CSP Noninterference Security under Concurrent Composition author = Pasquale Noce notify = pasquale.noce.lavoro@gmail.com date = 2016-06-13 topic = Computer science/Security, Computer science/Concurrency/Process calculi abstract =

In his outstanding work on Communicating Sequential Processes, Hoare has defined two fundamental binary operations that allow composing two input processes into another, typically more complex, process: sequential composition and concurrent composition. In particular, the output of the latter operation is a process in which any event not shared by both operands can occur whenever the operand that admits the event can engage in it, whereas any event shared by both operands can occur exactly when both can engage in it.

This paper formalizes Hoare's definition of concurrent composition and proves, in the general case of a possibly intransitive policy, that CSP noninterference security is conserved under this operation. This result, along with the previous analogous one concerning sequential composition, enables the construction of more and more complex processes enforcing noninterference security by composing, sequentially or concurrently, simpler secure processes, whose security can in turn be proven using either the definition of security, or unwinding theorems.

[ROBDD] title = Algorithms for Reduced Ordered Binary Decision Diagrams author = Julius Michaelis , Maximilian Haslbeck , Peter Lammich , Lars Hupel date = 2016-04-27 topic = Computer science/Algorithms, Computer science/Data structures abstract = We present a verified and executable implementation of ROBDDs in Isabelle/HOL. Our implementation relates pointer-based computation in the Heap monad to operations on an abstract definition of boolean functions. Internally, we implemented the if-then-else combinator in a recursive fashion, following the Shannon decomposition of the argument functions. The implementation mixes and adapts known techniques and is built with efficiency in mind. notify = bdd@liftm.de, haslbecm@in.tum.de [No_FTL_observers] title = No Faster-Than-Light Observers author = Mike Stannett , István Németi date = 2016-04-28 topic = Mathematics/Physics abstract = We provide a formal proof within First Order Relativity Theory that no observer can travel faster than the speed of light. Originally reported in Stannett & Németi (2014) "Using Isabelle/HOL to verify first-order relativity theory", Journal of Automated Reasoning 52(4), pp. 361-378. notify = m.stannett@sheffield.ac.uk [Schutz_Spacetime] title = Schutz' Independent Axioms for Minkowski Spacetime author = Richard Schmoetten , Jake Palmer , Jacques Fleuriot topic = Mathematics/Physics, Mathematics/Geometry date = 2021-07-27 notify = s1311325@sms.ed.ac.uk abstract = This is a formalisation of Schutz' system of axioms for Minkowski spacetime published under the name "Independent axioms for Minkowski space-time" in 1997, as well as most of the results in the third chapter ("Temporal Order on a Path") of the above monograph. Many results are proven here that cannot be found in Schutz, either preceding the theorem they are needed for, or within their own thematic section. 
[Groebner_Bases] title = Gröbner Bases Theory author = Fabian Immler , Alexander Maletzky date = 2016-05-02 topic = Mathematics/Algebra, Computer science/Algorithms/Mathematical abstract = This formalization is concerned with the theory of Gröbner bases in (commutative) multivariate polynomial rings over fields, originally developed by Buchberger in his 1965 PhD thesis. Apart from the statement and proof of the main theorem of the theory, the formalization also implements Buchberger's algorithm for actually computing Gröbner bases as a tail-recursive function, thus making it possible to effectively decide ideal membership in finitely generated polynomial ideals. Furthermore, all functions can be executed on a concrete representation of multivariate polynomials as association lists. extra-history = Change history: [2019-04-18]: Specialized Gröbner bases to less abstract representation of polynomials, where power-products are represented as polynomial mappings.
notify = alexander.maletzky@risc.jku.at [Nullstellensatz] title = Hilbert's Nullstellensatz author = Alexander Maletzky topic = Mathematics/Algebra, Mathematics/Geometry date = 2019-06-16 notify = alexander.maletzky@risc-software.at abstract = This entry formalizes Hilbert's Nullstellensatz, an important theorem in algebraic geometry that can be viewed as the generalization of the Fundamental Theorem of Algebra to multivariate polynomials: If a set of (multivariate) polynomials over an algebraically closed field has no common zero, then the ideal it generates is the entire polynomial ring. The formalization proves several equivalent versions of this celebrated theorem: the weak Nullstellensatz, the strong Nullstellensatz (connecting algebraic varieties and radical ideals), and the field-theoretic Nullstellensatz. The formalization follows Chapter 4.1. of Ideals, Varieties, and Algorithms by Cox, Little and O'Shea. [Bell_Numbers_Spivey] title = Spivey's Generalized Recurrence for Bell Numbers author = Lukas Bulwahn date = 2016-05-04 topic = Mathematics/Combinatorics abstract = This entry defines the Bell numbers as the cardinality of set partitions for a carrier set of given size, and derives Spivey's generalized recurrence relation for Bell numbers following his elegant and intuitive combinatorial proof.
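Spivey's identity B_{n+m} = Σ_{k=0}^{n} Σ_{j=0}^{m} C(m,j) · S(n,k) · k^{m-j} · B_j (with S the Stirling numbers of the second kind) is easy to check numerically for small parameters; a Python sanity check, purely illustrative and unrelated to the combinatorial proof formalized here:

```python
from functools import lru_cache
from math import comb

@lru_cache(None)
def stirling2(n, k):
    """Stirling numbers of the second kind: partitions of an n-set into k blocks."""
    if n == k:
        return 1
    if k == 0 or k > n:
        return 0
    return k * stirling2(n - 1, k) + stirling2(n - 1, k - 1)

def bell(n):
    """Bell number B_n: the number of set partitions of an n-element carrier set."""
    return sum(stirling2(n, k) for k in range(n + 1))

def spivey_rhs(n, m):
    """Right-hand side of Spivey's generalized recurrence for B_{n+m}."""
    return sum(comb(m, j) * stirling2(n, k) * k ** (m - j) * bell(j)
               for k in range(n + 1) for j in range(m + 1))
```

Setting n = 1 recovers the classical recurrence B_{m+1} = Σ_j C(m,j) B_j as a special case.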

As the set construction for the combinatorial proof requires construction of three intermediate structures, the main difficulty of the formalization is handling the overall combinatorial argument in a structured way. The introduced proof structure allows us to compose the combinatorial argument from its subparts, and helps keep track of how the detailed proof steps are related to the overall argument. To obtain this structure, this entry uses set monad notation for the set construction's definition, introduces suitable predicates and rules, and follows a repeating structure in its Isar proof. notify = lukas.bulwahn@gmail.com [Randomised_Social_Choice] title = Randomised Social Choice Theory author = Manuel Eberl date = 2016-05-05 topic = Mathematics/Games and economics abstract = This work contains a formalisation of basic Randomised Social Choice, including Stochastic Dominance and Social Decision Schemes (SDSs) along with some of their most important properties (Anonymity, Neutrality, ex-post- and SD-Efficiency, SD-Strategy-Proofness) and two particular SDSs – Random Dictatorship and Random Serial Dictatorship (with proofs of the properties that they satisfy). Many important properties of these concepts are also proven – such as the two equivalent characterisations of Stochastic Dominance and the fact that SD-efficiency of a lottery only depends on the support. The entry also provides convenient commands to define Preference Profiles, prove their well-formedness, and automatically derive restrictions that sufficiently nice SDSs need to satisfy on the defined profiles. Currently, the formalisation focuses on weak preferences and Stochastic Dominance, but it should be easy to extend it to other domains – such as strict preferences – or other lottery extensions – such as Bilinear Dominance or Pairwise Comparison.
notify = manuel@pruvisto.org [SDS_Impossibility] title = The Incompatibility of SD-Efficiency and SD-Strategy-Proofness author = Manuel Eberl date = 2016-05-04 topic = Mathematics/Games and economics abstract = This formalisation contains the proof that there is no anonymous and neutral Social Decision Scheme for at least four voters and alternatives that fulfils both SD-Efficiency and SD-Strategy-Proofness. The proof is a fully structured and quasi-human-readable one. It was derived from the (unstructured) SMT proof of the case for exactly four voters and alternatives by Brandl et al. Their proof relies on an unverified translation of the original problem to SMT, and the proof that lifts the argument for exactly four voters and alternatives to the general case is also not machine-checked. In this Isabelle proof, on the other hand, all of these steps are fully proven and machine-checked. This is particularly important seeing as a previously published informal proof of a weaker statement contained a mistake in precisely this lifting step. notify = manuel@pruvisto.org [Median_Of_Medians_Selection] title = The Median-of-Medians Selection Algorithm author = Manuel Eberl topic = Computer science/Algorithms date = 2017-12-21 notify = manuel@pruvisto.org abstract =

This entry provides an executable functional implementation of the Median-of-Medians algorithm for selecting the k-th smallest element of an unsorted list deterministically in linear time. The size bounds for the recursive call that lead to the linear upper bound on the run-time of the algorithm are also proven.
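The selection scheme is easy to state functionally; a Python sketch of the textbook median-of-medians algorithm (illustrative only, not the verified implementation from this entry):

```python
def select(lst, k):
    """Return the k-th smallest element of lst (k is 0-indexed, duplicates allowed)."""
    if len(lst) <= 5:
        return sorted(lst)[k]
    # Median of each group of (at most) five elements.
    medians = [sorted(lst[i:i + 5])[len(lst[i:i + 5]) // 2]
               for i in range(0, len(lst), 5)]
    # Recursively pick the median of medians as a good pivot:
    # it guarantees a constant-fraction split, hence linear run-time.
    pivot = select(medians, len(medians) // 2)
    lows = [x for x in lst if x < pivot]
    pivots = [x for x in lst if x == pivot]
    if k < len(lows):
        return select(lows, k)
    if k < len(lows) + len(pivots):
        return pivot
    highs = [x for x in lst if x > pivot]
    return select(highs, k - len(lows) - len(pivots))
```

The size bounds mentioned above correspond to the fact that at least roughly 3n/10 elements are guaranteed to fall on each side of the chosen pivot.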

[Mason_Stothers] title = The Mason–Stothers Theorem author = Manuel Eberl topic = Mathematics/Algebra date = 2017-12-21 notify = manuel@pruvisto.org abstract =

This article provides a formalisation of Snyder’s simple and elegant proof of the Mason–Stothers theorem, which is the polynomial analogue of the famous abc Conjecture for integers. Remarkably, Snyder found this very elegant proof when he was still a high-school student.

In short, the statement of the theorem is that three non-zero coprime polynomials A, B, C over a field which sum to 0 and do not all have vanishing derivatives fulfil max{deg(A), deg(B), deg(C)} < deg(rad(ABC)), where rad(P) denotes the radical of P, i.e. the product of all distinct irreducible factors of P.

This theorem also implies a polynomial analogue of Fermat’s Last Theorem: except for trivial cases, A^n + B^n + C^n = 0 implies n ≤ 2 for coprime polynomials A, B, C over a field.
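As a concrete illustration of the inequality (a worked example added here, not part of the formalization): take $A = x^2$, $B = 2x + 1$ and $C = -(x+1)^2$ over $\mathbb{Q}$. These are pairwise coprime, sum to $0$, and do not all have vanishing derivatives, and indeed
$$\max\{\deg A, \deg B, \deg C\} = 2 \;<\; 3 = \deg\big(x\,(2x+1)\,(x+1)\big) = \deg(\mathrm{rad}(ABC)).$$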

[FLP] title = A Constructive Proof for FLP author = Benjamin Bisping , Paul-David Brodmann , Tim Jungnickel , Christina Rickmann , Henning Seidler , Anke Stüber , Arno Wilhelm-Weidner , Kirstin Peters , Uwe Nestmann date = 2016-05-18 topic = Computer science/Concurrency abstract = The impossibility of distributed consensus with one faulty process is a result with important consequences for real world distributed systems e.g., commits in replicated databases. Since proofs are not immune to faults and even plausible proofs with a profound formalism can conclude wrong results, we validate the fundamental result named FLP after Fischer, Lynch and Paterson. We present a formalization of distributed systems and the aforementioned consensus problem. Our proof is based on Hagen Völzer's paper "A constructive proof for FLP". In addition to the enhanced confidence in the validity of Völzer's proof, we contribute the missing gaps to show the correctness in Isabelle/HOL. We clarify the proof details and even prove fairness of the infinite execution that contradicts consensus. Our Isabelle formalization can also be reused for further proofs of properties of distributed systems. notify = henning.seidler@mailbox.tu-berlin.de [IMAP-CRDT] title = The IMAP CmRDT author = Tim Jungnickel , Lennart Oldenburg <>, Matthias Loibl <> topic = Computer science/Algorithms/Distributed, Computer science/Data structures date = 2017-11-09 notify = tim.jungnickel@tu-berlin.de abstract = We provide our Isabelle/HOL formalization of a Conflict-free Replicated Datatype for Internet Message Access Protocol commands. We show that Strong Eventual Consistency (SEC) is guaranteed by proving the commutativity of concurrent operations. We base our formalization on the recently proposed "framework for establishing Strong Eventual Consistency for Conflict-free Replicated Datatypes" (AFP.CRDT) from Gomes et al. 
Hence, we provide an additional example of how the recently proposed framework can be used to design and prove CRDTs. [Incredible_Proof_Machine] title = The meta theory of the Incredible Proof Machine author = Joachim Breitner , Denis Lohner date = 2016-05-20 topic = Logic/Proof theory abstract = The Incredible Proof Machine is an interactive visual theorem prover which represents proofs as port graphs. We model this proof representation in Isabelle, and prove that it is just as powerful as natural deduction. notify = mail@joachim-breitner.de [Word_Lib] title = Finite Machine Word Library author = Joel Beeren<>, Matthew Fernandez<>, Xin Gao<>, Gerwin Klein , Rafal Kolanski<>, Japheth Lim<>, Corey Lewis<>, Daniel Matichuk<>, Thomas Sewell<> notify = kleing@unsw.edu.au date = 2016-06-09 topic = Computer science/Data structures abstract = This entry contains an extension to the Isabelle library for fixed-width machine words. In particular, the entry adds quickcheck setup for words, printing as hexadecimals, additional operations, reasoning about alignment, signed words, enumerations of words, normalisation of word numerals, and an extensive library of properties about generic fixed-width words, as well as an instantiation of many of these to the commonly used 32 and 64-bit bases. [Catalan_Numbers] title = Catalan Numbers author = Manuel Eberl notify = manuel@pruvisto.org date = 2016-06-21 topic = Mathematics/Combinatorics abstract =

In this work, we define the Catalan numbers C_n and prove several equivalent definitions (including some closed-form formulae). We also show one of their applications (counting the number of binary trees of size n), prove the asymptotic growth approximation C_n ∼ 4^n / (√π · n^1.5), and provide reasonably efficient executable code to compute them.

The derivation of the closed-form formulae uses algebraic manipulations of the ordinary generating function of the Catalan numbers, and the asymptotic approximation is then done using generalised binomial coefficients and the Gamma function. Thanks to these highly non-elementary mathematical tools, the proofs are very short and simple.
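The closed form C_n = C(2n, n)/(n+1) and the Segner recurrence can be mirrored in a few lines of Python (an illustration only; the entry's executable code is generated from the Isabelle development):

```python
from math import comb

def catalan(n):
    """n-th Catalan number via the closed form C_n = binomial(2n, n) / (n + 1)."""
    return comb(2 * n, n) // (n + 1)

def catalan_rec(n):
    """Same numbers via the Segner recurrence C_{m+1} = sum_{i=0}^{m} C_i * C_{m-i},
    which counts binary trees by the size of the left subtree."""
    c = [1]
    for m in range(n):
        c.append(sum(c[i] * c[m - i] for i in range(m + 1)))
    return c[n]
```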

[Fisher_Yates] title = Fisher–Yates shuffle author = Manuel Eberl notify = manuel@pruvisto.org date = 2016-09-30 topic = Computer science/Algorithms abstract =

This work defines and proves the correctness of the Fisher–Yates algorithm for shuffling – i.e. producing a random permutation – of a list. The algorithm proceeds by traversing the list and in each step swapping the current element with a random element from the remaining list.
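The algorithm described above fits in a few lines of Python (a sketch of the textbook procedure, not the formalized version, which works with probability mass functions):

```python
import random

def fisher_yates(xs, rng=random):
    """Return a uniformly random permutation of xs (modern Fisher–Yates).

    In step i, the element at position i is swapped with a uniformly chosen
    element from the remaining positions i..n-1, exactly as described above.
    """
    ys = list(xs)
    for i in range(len(ys) - 1):
        j = rng.randrange(i, len(ys))
        ys[i], ys[j] = ys[j], ys[i]
    return ys
```

Passing a seeded `random.Random` instance makes the shuffle reproducible.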

[Bertrands_Postulate] title = Bertrand's postulate author = Julian Biendarra<>, Manuel Eberl contributors = Lawrence C. Paulson topic = Mathematics/Number theory date = 2017-01-17 notify = manuel@pruvisto.org abstract =

Bertrand's postulate is an early result on the distribution of prime numbers: For every positive integer n, there exists a prime number that lies strictly between n and 2n. The proof is ported from John Harrison's formalisation in HOL Light. It proceeds by first showing that the property is true for all n greater than or equal to 600 and then showing that it also holds for all n below 600 by case distinction.
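The small-n part of the case distinction is easy to reproduce computationally; a brute-force Python check (illustrative only; the formalization handles the large-n case analytically):

```python
def is_prime(m):
    """Trial-division primality test; adequate for the small bounds checked here."""
    if m < 2:
        return False
    d = 2
    while d * d <= m:
        if m % d == 0:
            return False
        d += 1
    return True

def bertrand_holds(n):
    """Is there a prime p with n < p < 2n (the strict form of the postulate)?"""
    return any(is_prime(p) for p in range(n + 1, 2 * n))
```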

[Rewriting_Z] title = The Z Property author = Bertram Felgenhauer<>, Julian Nagele<>, Vincent van Oostrom<>, Christian Sternagel notify = bertram.felgenhauer@uibk.ac.at, julian.nagele@uibk.ac.at, c.sternagel@gmail.com date = 2016-06-30 topic = Logic/Rewriting abstract = We formalize the Z property introduced by Dehornoy and van Oostrom. First we show that for any abstract rewrite system, Z implies confluence. Then we give two examples of proofs using Z: confluence of lambda-calculus with respect to beta-reduction and confluence of combinatory logic. [Resolution_FOL] title = The Resolution Calculus for First-Order Logic author = Anders Schlichtkrull notify = andschl@dtu.dk date = 2016-06-30 topic = Logic/General logic/Mechanization of proofs abstract = This theory is a formalization of the resolution calculus for first-order logic. It is proven sound and complete. The soundness proof uses the substitution lemma, which shows a correspondence between substitutions and updates to an environment. The completeness proof uses semantic trees, i.e. trees whose paths are partial Herbrand interpretations. It employs Herbrand's theorem in a formulation which states that an unsatisfiable set of clauses has a finite closed semantic tree. It also uses the lifting lemma which lifts resolution derivation steps from the ground world up to the first-order world. The theory is presented in a paper in the Journal of Automated Reasoning [Sch18] which extends a paper presented at the International Conference on Interactive Theorem Proving [Sch16]. An earlier version was presented in an MSc thesis [Sch15]. The formalization mostly follows textbooks by Ben-Ari [BA12], Chang and Lee [CL73], and Leitsch [Lei97]. The theory is part of the IsaFoL project [IsaFoL].

[Sch18] Anders Schlichtkrull. "Formalization of the Resolution Calculus for First-Order Logic". Journal of Automated Reasoning, 2018.
[Sch16] Anders Schlichtkrull. "Formalization of the Resolution Calculus for First-Order Logic". In: ITP 2016. Vol. 9807. LNCS. Springer, 2016.
[Sch15] Anders Schlichtkrull. "Formalization of Resolution Calculus in Isabelle". https://people.compute.dtu.dk/andschl/Thesis.pdf. MSc thesis. Technical University of Denmark, 2015.
[BA12] Mordechai Ben-Ari. Mathematical Logic for Computer Science. 3rd. Springer, 2012.
[CL73] Chin-Liang Chang and Richard Char-Tung Lee. Symbolic Logic and Mechanical Theorem Proving. 1st. Academic Press, Inc., 1973.
[Lei97] Alexander Leitsch. The Resolution Calculus. Texts in theoretical computer science. Springer, 1997.
[IsaFoL] IsaFoL authors. IsaFoL: Isabelle Formalization of Logic. https://bitbucket.org/jasmin_blanchette/isafol. extra-history = Change history: [2018-01-24]: added several new versions of the soundness and completeness theorems as described in the paper [Sch18].
[2018-03-20]: added a concrete instance of the unification and completeness theorems using the First-Order Terms AFP-entry from IsaFoR as described in the papers [Sch16] and [Sch18]. [Surprise_Paradox] title = Surprise Paradox author = Joachim Breitner notify = mail@joachim-breitner.de date = 2016-07-17 topic = Logic/Proof theory abstract = In 1964, Fitch showed that the paradox of the surprise hanging can be resolved by showing that the judge’s verdict is inconsistent. His formalization builds on Gödel’s coding of provability. In this theory, we reproduce his proof in Isabelle, building on Paulson’s formalisation of Gödel’s incompleteness theorems. [Ptolemys_Theorem] title = Ptolemy's Theorem author = Lukas Bulwahn notify = lukas.bulwahn@gmail.com date = 2016-08-07 topic = Mathematics/Geometry abstract = This entry provides an analytic proof of Ptolemy's Theorem using polar form transformation and trigonometric identities. In this formalization, we use ideas from John Harrison's HOL Light formalization and the proof sketch on the Wikipedia entry of Ptolemy's Theorem. This theorem is the 95th theorem of the Top 100 Theorems list. [Falling_Factorial_Sum] title = The Falling Factorial of a Sum author = Lukas Bulwahn topic = Mathematics/Combinatorics date = 2017-12-22 notify = lukas.bulwahn@gmail.com abstract = This entry shows that the falling factorial of a sum can be computed with an expression using binomial coefficients and the falling factorial of its summands. The entry provides three different proofs: a combinatorial proof, an induction proof and an algebraic proof using the Vandermonde identity. The three formalizations try to follow their informal presentations from a Mathematics Stack Exchange page as closely as possible.
The induction and algebraic formalization end up being very close to their informal presentation, whereas the combinatorial proof first requires the introduction of list interleavings, and significantly more detail than its informal presentation. [InfPathElimination] title = Infeasible Paths Elimination by Symbolic Execution Techniques: Proof of Correctness and Preservation of Paths author = Romain Aissat<>, Frederic Voisin<>, Burkhart Wolff notify = wolff@lri.fr date = 2016-08-18 topic = Computer science/Programming languages/Static analysis abstract = TRACER is a tool for verifying safety properties of sequential C programs. TRACER attempts to build a finite symbolic execution graph which over-approximates the set of all concrete reachable states and the set of feasible paths. We present an abstract framework for TRACER and similar CEGAR-like systems. The framework provides 1) a graph-transformation-based method for reducing the feasible paths in control-flow graphs, 2) a model for symbolic execution, subsumption, predicate abstraction and invariant generation. In this framework we formally prove two key properties: correct construction of the symbolic states and preservation of feasible paths. The framework focuses on core operations, leaving it to concrete prototypes to “fit in” heuristics for combining them. The accompanying paper (published in ITP 2016) can be found at https://www.lri.fr/~wolff/papers/conf/2016-itp-InfPathsNSE.pdf. [Stirling_Formula] title = Stirling's formula author = Manuel Eberl notify = manuel@pruvisto.org date = 2016-09-01 topic = Mathematics/Analysis abstract =

This work contains a proof of Stirling's formula both for the factorial $n! \sim \sqrt{2\pi n} (n/e)^n$ on natural numbers and the real Gamma function $\Gamma(x)\sim \sqrt{2\pi/x} (x/e)^x$. The proof is based on work by Graham Jameson.

This is then extended to the full asymptotic expansion $$\log\Gamma(z) = \big(z - \tfrac{1}{2}\big)\log z - z + \tfrac{1}{2}\log(2\pi) + \sum_{k=1}^{n-1} \frac{B_{k+1}}{k(k+1)} z^{-k}\\ {} - \frac{1}{n} \int_0^\infty B_n([t])(t + z)^{-n}\,\text{d}t$$ uniformly for all complex $z\neq 0$ in the cone $\text{arg}(z)\leq \alpha$ for any $\alpha\in(0,\pi)$, with which the above asymptotic relation for Γ is also extended to complex arguments.
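Numerically, the leading behaviour is easy to observe; a quick Python check against `math.lgamma` (unrelated to the formal proof, and using only the leading term of the expansion):

```python
from math import lgamma, log, pi

def stirling_log_approx(x):
    """Leading-order Stirling approximation of log(Gamma(x)):
    (x - 1/2) log x - x + log(2*pi)/2.  The error decays like 1/(12 x)."""
    return (x - 0.5) * log(x) - x + 0.5 * log(2 * pi)
```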

[Lp] title = Lp spaces author = Sebastien Gouezel notify = sebastien.gouezel@univ-rennes1.fr date = 2016-10-05 topic = Mathematics/Analysis abstract = Lp is the space of functions whose p-th power is integrable. It is one of the most fundamental Banach spaces that is used in analysis and probability. We develop a framework for function spaces, and then implement the Lp spaces in this framework using the existing integration theory in Isabelle/HOL. Our development contains most fundamental properties of Lp spaces, notably the Hölder and Minkowski inequalities, completeness of Lp, duality, stability under almost sure convergence, multiplication of functions in Lp and Lq, stability under conditional expectation. [Berlekamp_Zassenhaus] title = The Factorization Algorithm of Berlekamp and Zassenhaus author = Jose Divasón , Sebastiaan Joosten , René Thiemann , Akihisa Yamada notify = rene.thiemann@uibk.ac.at date = 2016-10-14 topic = Mathematics/Algebra abstract =

We formalize the Berlekamp-Zassenhaus algorithm for factoring square-free integer polynomials in Isabelle/HOL. We further adapt an existing formalization of Yun’s square-free factorization algorithm to integer polynomials, and thus provide an efficient and certified factorization algorithm for arbitrary univariate polynomials.

The algorithm first performs a factorization in the prime field GF(p) and then performs computations in the integer ring modulo p^k, where both p and k are determined at runtime. Since a natural modeling of these structures via dependent types is not possible in Isabelle/HOL, we formalize the whole algorithm using Isabelle’s recent addition of local type definitions.

Through experiments we verify that our algorithm factors polynomials of degree 100 within seconds.
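The square-freeness precondition mentioned above can be tested with a polynomial gcd: in characteristic 0, a polynomial p is square-free iff gcd(p, p') is constant. A small Python sketch over the rationals (coefficient lists, lowest degree first and no trailing zeros; purely illustrative and unrelated to the verified code, which works with integer polynomials and Yun's algorithm):

```python
from fractions import Fraction

def poly_deriv(p):
    """Derivative of a polynomial given as a coefficient list (lowest degree first)."""
    return [i * c for i, c in enumerate(p)][1:]

def poly_mod(a, b):
    """Remainder of a modulo b over the rationals (b must be non-zero)."""
    a = [Fraction(c) for c in a]
    b = [Fraction(c) for c in b]
    while len(a) >= len(b):
        f = a[-1] / b[-1]
        for i in range(len(b)):
            a[len(a) - len(b) + i] -= f * b[i]
        while a and a[-1] == 0:   # strip trailing zeros
            a.pop()
    return a

def poly_gcd(a, b):
    """Euclidean algorithm on polynomials; the result is defined up to scaling."""
    while b:
        a, b = b, poly_mod(a, b)
    return a

def squarefree(p):
    """In characteristic 0: p is square-free iff gcd(p, p') is a non-zero constant."""
    return len(poly_gcd(p, poly_deriv(p))) <= 1
```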

[Allen_Calculus] title = Allen's Interval Calculus author = Fadoua Ghourabi <> notify = fadouaghourabi@gmail.com date = 2016-09-29 topic = Logic/General logic/Temporal logic, Mathematics/Order abstract = Allen’s interval calculus is a qualitative temporal representation of time events. Allen introduced 13 binary relations that describe all the possible arrangements between two events, i.e. intervals with non-zero finite length. The compositions are pertinent to reasoning about knowledge of time. In particular, a consistency problem of relation constraints is commonly solved with a guideline from these compositions. We formalize the relations together with an axiomatic system. We prove the validity of the 169 compositions of these relations. We also define nests as the sets of intervals that share a meeting point. We prove that nests give the ordering properties of points without introducing a new datatype for points. [1] J.F. Allen. Maintaining Knowledge about Temporal Intervals. In Commun. ACM, volume 26, pages 832–843, 1983. [2] J. F. Allen and P. J. Hayes. A Common-sense Theory of Time. In Proceedings of the 9th International Joint Conference on Artificial Intelligence (IJCAI’85), pages 528–531, 1985. [Source_Coding_Theorem] title = Source Coding Theorem author = Quentin Hibon , Lawrence C. Paulson notify = qh225@cl.cam.ac.uk date = 2016-10-19 topic = Mathematics/Probability theory abstract = This document contains a proof of the necessary condition on the code rate of a source code, namely that this code rate is bounded by the entropy of the source. This represents one half of Shannon's source coding theorem, which is itself an equivalence.
[Buffons_Needle] title = Buffon's Needle Problem author = Manuel Eberl topic = Mathematics/Probability theory, Mathematics/Geometry date = 2017-06-06 notify = manuel@pruvisto.org abstract = In the 18th century, Georges-Louis Leclerc, Comte de Buffon posed and later solved the following problem, which is often called the first problem ever solved in geometric probability: Given a floor divided into vertical strips of the same width, what is the probability that a needle thrown onto the floor randomly will cross two strips? This entry formally defines the problem in the case where the needle's position is chosen uniformly at random in a single strip around the origin (which is equivalent to larger arrangements due to symmetry). It then provides proofs of the simple solution in the case where the needle's length is no greater than the width of the strips and the more complicated solution in the opposite case. [SPARCv8] title = A formal model for the SPARCv8 ISA and a proof of non-interference for the LEON3 processor author = Zhe Hou , David Sanan , Alwen Tiu , Yang Liu notify = zhe.hou@ntu.edu.sg, sanan@ntu.edu.sg date = 2016-10-19 topic = Computer science/Security, Computer science/Hardware abstract = We formalise the SPARCv8 instruction set architecture (ISA) which is used in processors such as LEON3. Our formalisation can be specialised to any SPARCv8 CPU; here we use LEON3 as a running example. Our model covers the operational semantics for all the instructions in the integer unit of the SPARCv8 architecture and it supports Isabelle code export, which effectively turns the Isabelle model into a SPARCv8 CPU simulator. We prove the language-based non-interference property for the LEON3 processor. Our model is based on a deterministic monad, which is a modified version of the non-deterministic monad from NICTA/l4v.
[Separata] title = Separata: Isabelle tactics for Separation Algebra author = Zhe Hou , David Sanan , Alwen Tiu , Rajeev Gore , Ranald Clouston notify = zhe.hou@ntu.edu.sg date = 2016-11-16 topic = Computer science/Programming languages/Logics, Tools abstract = We bring the labelled sequent calculus $LS_{PASL}$ for propositional abstract separation logic to Isabelle. The tactics given here are directly applied on an extension of the Separation Algebra in the AFP. In addition to the cancellative separation algebra, we further consider some useful properties in the heap model of separation logic, such as indivisible unit, disjointness, and cross-split. The tactics are essentially a proof search procedure for the calculus $LS_{PASL}$. We wrap the tactics in an Isabelle method called separata, and give a few examples of separation logic formulae which are provable by separata. [LOFT] title = LOFT — Verified Migration of Linux Firewalls to SDN author = Julius Michaelis , Cornelius Diekmann notify = isabelleopenflow@liftm.de date = 2016-10-21 topic = Computer science/Networks abstract = We present LOFT — Linux firewall OpenFlow Translator, a system that transforms the main routing table and FORWARD chain of iptables of a Linux-based firewall into a set of static OpenFlow rules. Our implementation is verified against a model of a simplified Linux-based router and we can directly show how much of the original functionality is preserved. [Stable_Matching] title = Stable Matching author = Peter Gammie notify = peteg42@gmail.com date = 2016-10-24 topic = Mathematics/Games and economics abstract = We mechanize proofs of several results from the matching with contracts literature, which generalize those of the classical two-sided matching scenarios that go by the name of stable marriage. Our focus is on game theoretic issues. Along the way we develop executable algorithms for computing optimal stable matches. 
[Modal_Logics_for_NTS] title = Modal Logics for Nominal Transition Systems author = Tjark Weber , Lars-Henrik Eriksson , Joachim Parrow , Johannes Borgström , Ramunas Gutkovas notify = tjark.weber@it.uu.se date = 2016-10-25 topic = Computer science/Concurrency/Process calculi, Logic/General logic/Modal logic abstract = We formalize a uniform semantic substrate for a wide variety of process calculi where states and action labels can be from arbitrary nominal sets. A Hennessy-Milner logic for these systems is defined, and proved adequate for bisimulation equivalence. A main novelty is the construction of an infinitary nominal data type to model formulas with (finitely supported) infinite conjunctions and actions that may contain binding names. The logic is generalized to treat different bisimulation variants such as early, late and open in a systematic way. extra-history = Change history: [2017-01-29]: Formalization of weak bisimilarity added (revision c87cc2057d9c) [Abs_Int_ITP2012] title = Abstract Interpretation of Annotated Commands author = Tobias Nipkow notify = nipkow@in.tum.de date = 2016-11-23 topic = Computer science/Programming languages/Static analysis abstract = This is the Isabelle formalization of the material described in the eponymous ITP 2012 paper. It develops a generic abstract interpreter for a while-language, including widening and narrowing. The collecting semantics and the abstract interpreter operate on annotated commands: the program is represented as a syntax tree with the semantic information directly embedded, without auxiliary labels. The aim of the formalization is simplicity, not efficiency or precision. This is motivated by the inclusion of the material in a theorem-prover-based course on semantics. A similar (but more polished) development is covered in the book Concrete Semantics.
[Complx] title = COMPLX: A Verification Framework for Concurrent Imperative Programs author = Sidney Amani<>, June Andronick<>, Maksym Bortin<>, Corey Lewis<>, Christine Rizkallah<>, Joseph Tuong<> notify = sidney.amani@data61.csiro.au, corey.lewis@data61.csiro.au date = 2016-11-29 topic = Computer science/Programming languages/Logics, Computer science/Programming languages/Language definitions abstract = We propose a concurrency reasoning framework for imperative programs, based on the Owicki-Gries (OG) foundational shared-variable concurrency method. Our framework combines the approaches of Hoare-Parallel, a formalisation of OG in Isabelle/HOL for a simple while-language, and Simpl, a generic imperative language embedded in Isabelle/HOL, allowing formal reasoning on C programs. We define the Complx language, extending the syntax and semantics of Simpl with support for parallel composition and synchronisation. We additionally define an OG logic, which we prove sound w.r.t. the semantics, and a verification condition generator, both supporting involved low-level imperative constructs such as function calls and abrupt termination. We illustrate our framework on an example that features exceptions, guards and function calls. We aim to then target concurrent operating systems, such as the interruptible eChronos embedded operating system for which we already have a model-level OG proof using Hoare-Parallel. extra-history = Change history: [2017-01-13]: Improve VCG for nested parallels and sequential sections (revision 30739dbc3dcb) [Paraconsistency] title = Paraconsistency author = Anders Schlichtkrull , Jørgen Villadsen topic = Logic/General logic/Paraconsistent logics date = 2016-12-07 notify = andschl@dtu.dk, jovi@dtu.dk abstract = Paraconsistency is about handling inconsistency in a coherent way. In classical and intuitionistic logic everything follows from an inconsistent theory. A paraconsistent logic avoids the explosion. 
Quite a few applications in computer science and engineering are discussed in the Intelligent Systems Reference Library Volume 110: Towards Paraconsistent Engineering (Springer 2016). We formalize a paraconsistent many-valued logic that we motivated and described in a special issue on logical approaches to paraconsistency (Journal of Applied Non-Classical Logics 2005). We limit ourselves to the propositional fragment of higher-order logic. The logic is based on so-called key equalities and has a countably infinite number of truth values. We prove theorems in the logic using the definition of validity. We verify truth tables and also counterexamples for non-theorems. We prove meta-theorems about the logic and finally we investigate a case study. [Proof_Strategy_Language] title = Proof Strategy Language author = Yutaka Nagashima<> topic = Tools date = 2016-12-20 notify = Yutaka.Nagashima@data61.csiro.au abstract = Isabelle includes various automatic tools for finding proofs under certain conditions. However, for each conjecture, knowing which automation to use, and how to tweak its parameters, is currently labour intensive. We have developed a language, PSL, designed to capture high level proof strategies. PSL offloads the construction of human-readable fast-to-replay proof scripts to automatic search, making use of search-time information about each conjecture. Our preliminary evaluations show that PSL reduces the labour cost of interactive theorem proving. This submission contains the implementation of PSL and an example theory file, Example.thy, showing how to write proof strategies in PSL. [Concurrent_Ref_Alg] title = Concurrent Refinement Algebra and Rely Quotients author = Julian Fell , Ian J. Hayes , Andrius Velykis topic = Computer science/Concurrency date = 2016-12-30 notify = Ian.Hayes@itee.uq.edu.au abstract = The concurrent refinement algebra developed here is designed to provide a foundation for rely/guarantee reasoning about concurrent programs.
The algebra builds on a complete lattice of commands by providing sequential composition, parallel composition and a novel weak conjunction operator. The weak conjunction operator coincides with the lattice supremum provided its arguments are non-aborting, but aborts if either of its arguments does. Weak conjunction provides an abstract version of a guarantee condition as a guarantee process. We distinguish between models that distribute sequential composition over non-deterministic choice from the left (referred to as being conjunctive in the refinement calculus literature) and those that don't. Least and greatest fixed points of monotone functions are provided to allow recursion and iteration operators to be added to the language. Additional iteration laws are available for conjunctive models. The rely quotient of processes c and i is the process that, if executed in parallel with i, implements c. It represents an abstract version of a rely condition generalised to a process. [FOL_Harrison] title = First-Order Logic According to Harrison author = Alexander Birch Jensen , Anders Schlichtkrull , Jørgen Villadsen topic = Logic/General logic/Mechanization of proofs date = 2017-01-01 notify = aleje@dtu.dk, andschl@dtu.dk, jovi@dtu.dk abstract =

We present a certified declarative first-order prover with equality based on John Harrison's Handbook of Practical Logic and Automated Reasoning, Cambridge University Press, 2009. ML code reflection is used such that the entire prover can be executed within Isabelle as a very simple interactive proof assistant. As examples we consider Pelletier's problems 1-46.

Reference: Programming and Verifying a Declarative First-Order Prover in Isabelle/HOL. Alexander Birch Jensen, John Bruntse Larsen, Anders Schlichtkrull & Jørgen Villadsen. AI Communications 31:281-299 2018. https://content.iospress.com/articles/ai-communications/aic764

See also: Students' Proof Assistant (SPA). https://github.com/logic-tools/spa

extra-history = Change history: [2018-07-21]: Proof of Pelletier's problem 34 (Andrews's Challenge) thanks to Asta Halkjær From. [Bernoulli] title = Bernoulli Numbers author = Lukas Bulwahn, Manuel Eberl topic = Mathematics/Analysis, Mathematics/Number theory date = 2017-01-24 notify = manuel@pruvisto.org abstract =

Bernoulli numbers were first discovered in the closed-form expansion of the sum $1^m + 2^m + \cdots + n^m$ for a fixed m and appear in many other places. This entry provides three different definitions for them: a recursive one, an explicit one, and one through their exponential generating function.

In addition, we prove some basic facts, e.g. their relation to sums of powers of integers and that all odd Bernoulli numbers except the first are zero, and some advanced facts like their relationship to the Riemann zeta function on positive even integers.

We also prove the correctness of the Akiyama–Tanigawa algorithm for computing Bernoulli numbers with reasonable efficiency, and we define the periodic Bernoulli polynomials (which appear e.g. in the Euler–MacLaurin summation formula and the expansion of the log-Gamma function) and prove their basic properties.
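For reference, the recursive definition mentioned above corresponds to the standard recurrence (in one common sign convention, with $B_1 = -\tfrac{1}{2}$): $$B_0 = 1, \qquad B_n = -\frac{1}{n+1}\sum_{k=0}^{n-1}\binom{n+1}{k} B_k \quad (n \geq 1),$$ which yields, for example, $B_1 = -\tfrac{1}{2}$ and $B_2 = \tfrac{1}{6}$.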

[Stone_Relation_Algebras] title = Stone Relation Algebras author = Walter Guttmann topic = Mathematics/Algebra date = 2017-02-07 notify = walter.guttmann@canterbury.ac.nz abstract = We develop Stone relation algebras, which generalise relation algebras by replacing the underlying Boolean algebra structure with a Stone algebra. We show that finite matrices over extended real numbers form an instance. As a consequence, relation-algebraic concepts and methods can be used for reasoning about weighted graphs. We also develop a fixpoint calculus and apply it to compare different definitions of reflexive-transitive closures in semirings. extra-history = Change history: [2017-07-05]: generalised extended reals to linear orders (revision b8e703159177) [Stone_Kleene_Relation_Algebras] title = Stone-Kleene Relation Algebras author = Walter Guttmann topic = Mathematics/Algebra date = 2017-07-06 notify = walter.guttmann@canterbury.ac.nz abstract = We develop Stone-Kleene relation algebras, which expand Stone relation algebras with a Kleene star operation to describe reachability in weighted graphs. Many properties of the Kleene star arise as a special case of a more general theory of iteration based on Conway semirings extended by simulation axioms. This includes several theorems representing complex program transformations. We formally prove the correctness of Conway's automata-based construction of the Kleene star of a matrix. We prove numerous results useful for reasoning about weighted graphs. [Abstract_Soundness] title = Abstract Soundness author = Jasmin Christian Blanchette , Andrei Popescu , Dmitriy Traytel topic = Logic/Proof theory date = 2017-02-10 notify = jasmin.blanchette@gmail.com abstract = A formalized coinductive account of the abstract development of Brotherston, Gorogiannis, and Petersen [APLAS 2012], in a slightly more general form since we work with arbitrary infinite proofs, which may be acyclic. 
This work is described in detail in an article by the authors, published in 2017 in the Journal of Automated Reasoning. The abstract proof can be instantiated for various formalisms, including first-order logic with inductive predicates. [Differential_Dynamic_Logic] title = Differential Dynamic Logic author = Brandon Bohrer topic = Logic/General logic/Modal logic, Computer science/Programming languages/Logics date = 2017-02-13 notify = bbohrer@cs.cmu.edu abstract = We formalize differential dynamic logic, a logic for proving properties of hybrid systems. The proof calculus in this formalization is based on the uniform substitution principle. We show it is sound with respect to our denotational semantics, which provides increased confidence in the correctness of the KeYmaera X theorem prover based on this calculus. As an application, we include a proof term checker embedded in Isabelle/HOL with several example proofs. Published in: Brandon Bohrer, Vincent Rahli, Ivana Vukotic, Marcus Völp, André Platzer: Formally verified differential dynamic logic. CPP 2017. [Syntax_Independent_Logic] title = Syntax-Independent Logic Infrastructure author = Andrei Popescu , Dmitriy Traytel topic = Logic/Proof theory date = 2020-09-16 notify = a.popescu@sheffield.ac.uk, traytel@di.ku.dk abstract = We formalize a notion of logic whose terms and formulas are kept abstract. In particular, logical connectives, substitution, free variables, and provability are not defined, but characterized by their general properties as locale assumptions. Based on this abstract characterization, we develop further reusable reasoning infrastructure. For example, we define parallel substitution (along with proving its characterizing theorems) from single-point substitution. Similarly, we develop a natural deduction style proof system starting from the abstract Hilbert-style one. These one-time efforts benefit different concrete logics satisfying our locales' assumptions. 
We instantiate the syntax-independent logic infrastructure to Robinson arithmetic (also known as Q) in the AFP entry Robinson_Arithmetic and to hereditarily finite set theory in the AFP entries Goedel_HFSet_Semantic and Goedel_HFSet_Semanticless, which are part of our formalization of Gödel's Incompleteness Theorems described in our CADE-27 paper A Formally Verified Abstract Account of Gödel's Incompleteness Theorems. [Goedel_Incompleteness] title = An Abstract Formalization of Gödel's Incompleteness Theorems author = Andrei Popescu , Dmitriy Traytel topic = Logic/Proof theory date = 2020-09-16 notify = a.popescu@sheffield.ac.uk, traytel@di.ku.dk abstract = We present an abstract formalization of Gödel's incompleteness theorems. We analyze sufficient conditions for the theorems' applicability to a partially specified logic. Our abstract perspective enables a comparison between alternative approaches from the literature. These include Rosser's variation of the first theorem, Jeroslow's variation of the second theorem, and the Swierczkowski–Paulson semantics-based approach. This AFP entry is the main entry point to the results described in our CADE-27 paper A Formally Verified Abstract Account of Gödel's Incompleteness Theorems. As part of our abstract formalization's validation, we instantiate our locales twice in the separate AFP entries Goedel_HFSet_Semantic and Goedel_HFSet_Semanticless. [Goedel_HFSet_Semantic] title = From Abstract to Concrete Gödel's Incompleteness Theorems—Part I author = Andrei Popescu , Dmitriy Traytel topic = Logic/Proof theory date = 2020-09-16 notify = a.popescu@sheffield.ac.uk, traytel@di.ku.dk abstract = We validate an abstract formulation of Gödel's First and Second Incompleteness Theorems from a separate AFP entry by instantiating them to the case of finite sound extensions of the Hereditarily Finite (HF) Set theory, i.e., FOL theories extending the HF Set theory with a finite set of axioms that are sound in the standard model. 
The concrete results had been previously formalised in an AFP entry by Larry Paulson; our instantiation reuses the infrastructure developed in that entry. [Goedel_HFSet_Semanticless] title = From Abstract to Concrete Gödel's Incompleteness Theorems—Part II author = Andrei Popescu , Dmitriy Traytel topic = Logic/Proof theory date = 2020-09-16 notify = a.popescu@sheffield.ac.uk, traytel@di.ku.dk abstract = We validate an abstract formulation of Gödel's Second Incompleteness Theorem from a separate AFP entry by instantiating it to the case of finite consistent extensions of the Hereditarily Finite (HF) Set theory, i.e., consistent FOL theories extending the HF Set theory with a finite set of axioms. The instantiation draws heavily on infrastructure previously developed by Larry Paulson in his direct formalisation of the concrete result. It strengthens Paulson's formalization of Gödel's Second from that entry by not assuming soundness, and in fact not relying on any notion of model or semantic interpretation. The strengthening was obtained by first replacing some of Paulson’s semantic arguments with proofs within his HF calculus, and then plugging in some of Paulson's (modified) lemmas to instantiate our soundness-free Gödel's Second locale. [Robinson_Arithmetic] title = Robinson Arithmetic author = Andrei Popescu , Dmitriy Traytel topic = Logic/Proof theory date = 2020-09-16 notify = a.popescu@sheffield.ac.uk, traytel@di.ku.dk abstract = We instantiate our syntax-independent logic infrastructure developed in a separate AFP entry to the FOL theory of Robinson arithmetic (also known as Q). The latter was formalised using Nominal Isabelle by adapting Larry Paulson’s formalization of the Hereditarily Finite Set theory. 
[Elliptic_Curves_Group_Law] title = The Group Law for Elliptic Curves author = Stefan Berghofer topic = Computer science/Security/Cryptography date = 2017-02-28 notify = berghofe@in.tum.de abstract = We prove the group law for elliptic curves in Weierstrass form over fields of characteristic greater than 2. In addition to affine coordinates, we also formalize projective coordinates, which allow for more efficient computations. By specializing the abstract formalization to prime fields, we can apply the curve operations to parameters used in standard security protocols. [Example-Submission] title = Example Submission author = Gerwin Klein topic = Mathematics/Analysis, Mathematics/Number theory date = 2004-02-25 notify = kleing@cse.unsw.edu.au abstract =

This is an example submission to the Archive of Formal Proofs. It shows submission requirements and explains the structure of a simple typical submission.

Note that you can use HTML tags and LaTeX formulae like $\sum_{n=1}^\infty \frac{1}{n^2} = \frac{\pi^2}{6}$ in the abstract. Display formulae like $$ \int_0^1 x^{-x}\,\text{d}x = \sum_{n=1}^\infty n^{-n}$$ are also possible. Please read the submission guidelines before using this.

extra-no-index = no-index: true [CRDT] title = A framework for establishing Strong Eventual Consistency for Conflict-free Replicated Datatypes author = Victor B. F. Gomes , Martin Kleppmann, Dominic P. Mulligan, Alastair R. Beresford topic = Computer science/Algorithms/Distributed, Computer science/Data structures date = 2017-07-07 notify = vb358@cam.ac.uk, dominic.p.mulligan@googlemail.com abstract = In this work, we focus on the correctness of Conflict-free Replicated Data Types (CRDTs), a class of algorithms that provide strong eventual consistency guarantees for replicated data. We develop a modular and reusable framework for verifying the correctness of CRDT algorithms. We avoid correctness issues that have dogged previous mechanised proofs in this area by including a network model in our formalisation, and proving that our theorems hold in all possible network behaviours. Our axiomatic network model is a standard abstraction that accurately reflects the behaviour of real-world computer networks. Moreover, we identify an abstract convergence theorem, a property of order relations, which provides a formal definition of strong eventual consistency. We then obtain the first machine-checked correctness theorems for three concrete CRDTs: the Replicated Growable Array, the Observed-Remove Set, and an Increment-Decrement Counter. [HOLCF-Prelude] title = HOLCF-Prelude author = Joachim Breitner, Brian Huffman<>, Neil Mitchell<>, Christian Sternagel topic = Computer science/Functional programming date = 2017-07-15 notify = c.sternagel@gmail.com, joachim@cis.upenn.edu, hupel@in.tum.de abstract = The Isabelle/HOLCF-Prelude is a formalization of a large part of Haskell's standard prelude in Isabelle/HOLCF. We use it to prove the correctness of Eratosthenes' Sieve, in its self-referential implementation commonly used to showcase Haskell's laziness; prove correctness of GHC's "fold/build" rule and related rewrite rules; and certify a number of hints suggested by HLint.
[Decl_Sem_Fun_PL] title = Declarative Semantics for Functional Languages author = Jeremy Siek topic = Computer science/Programming languages date = 2017-07-21 notify = jsiek@indiana.edu abstract = We present a semantics for an applied call-by-value lambda-calculus that is compositional, extensional, and elementary. We present four different views of the semantics: 1) as a relational (big-step) semantics that is not operational but instead declarative, 2) as a denotational semantics that does not use domain theory, 3) as a non-deterministic interpreter, and 4) as a variant of the intersection type systems of the Torino group. We prove that the semantics is correct by showing that it is sound and complete with respect to operational semantics on programs and that it is sound with respect to contextual equivalence. We have not yet investigated whether it is fully abstract. We demonstrate that this approach to semantics is useful with three case studies. First, we use the semantics to prove correctness of a compiler optimization that inlines function application. Second, we adapt the semantics to the polymorphic lambda-calculus extended with general recursion and prove semantic type soundness. Third, we adapt the semantics to the call-by-value lambda-calculus with mutable references.
The paper that accompanies these Isabelle theories is available on arXiv. [DynamicArchitectures] title = Dynamic Architectures author = Diego Marmsoler topic = Computer science/System description languages date = 2017-07-28 notify = diego.marmsoler@tum.de abstract = The architecture of a system describes the system's overall organization into components and connections between those components. With the emergence of mobile computing, dynamic architectures have become increasingly important. In such architectures, components may appear or disappear, and connections may change over time. In the following we mechanize a theory of dynamic architectures and verify the soundness of a corresponding calculus. Therefore, we first formalize the notion of configuration traces as a model for dynamic architectures. Then, the behavior of single components is formalized in terms of behavior traces and an operator is introduced and studied to extract the behavior of a single component out of a given configuration trace. Then, behavior trace assertions are introduced as a temporal specification technique to specify behavior of components. Reasoning about component behavior in a dynamic context is formalized in terms of a calculus for dynamic architectures. Finally, the soundness of the calculus is verified by introducing an alternative interpretation for behavior trace assertions over configuration traces and proving the rules of the calculus. Since projection may lead to finite as well as infinite behavior traces, they are formalized in terms of coinductive lists. Thus, our theory is based on Lochbihler's formalization of coinductive lists. The theory may be applied to verify properties for dynamic architectures. extra-history = Change history: [2018-06-07]: adding logical operators to specify configuration traces (revision 09178f08f050)
[Stewart_Apollonius] title = Stewart's Theorem and Apollonius' Theorem author = Lukas Bulwahn topic = Mathematics/Geometry date = 2017-07-31 notify = lukas.bulwahn@gmail.com abstract = This entry formalizes two geometric theorems: Stewart's Theorem and Apollonius' Theorem. Stewart's Theorem relates the length of a triangle's cevian to the lengths of the triangle's two sides. Apollonius' Theorem is a specialisation of Stewart's theorem, restricting the cevian to be the median. The proof applies the law of cosines, some basic geometric facts about triangles and then simply transforms the terms algebraically to yield the conjectured relation. The formalization in Isabelle can closely follow the informal proofs described in the Wikipedia articles of those two theorems. [LambdaMu] title = The LambdaMu-calculus author = Cristina Matache , Victor B. F. Gomes , Dominic P. Mulligan topic = Computer science/Programming languages/Lambda calculi, Logic/General logic/Lambda calculus date = 2017-08-16 notify = victorborgesfg@gmail.com, dominic.p.mulligan@googlemail.com abstract = The propositions-as-types correspondence is ordinarily presented as linking the metatheory of typed λ-calculi and the proof theory of intuitionistic logic. Griffin observed that this correspondence could be extended to classical logic through the use of control operators. This observation set off a flurry of further research, leading to the development of Parigot's λμ-calculus. In this work, we formalise the λμ-calculus in Isabelle/HOL and prove several metatheoretical properties such as type preservation and progress. [Orbit_Stabiliser] title = Orbit-Stabiliser Theorem with Application to Rotational Symmetries author = Jonas Rädle topic = Mathematics/Algebra date = 2017-08-20 notify = jonas.raedle@tum.de abstract = The Orbit-Stabiliser theorem is a basic result in the algebra of groups that factors the order of a group into the sizes of its orbits and stabilisers.
We formalize the notion of a group action and the related concepts of orbits and stabilisers. This allows us to prove the orbit-stabiliser theorem. In the second part of this work, we formalize the tetrahedral group and use the orbit-stabiliser theorem to prove that there are twelve (orientation-preserving) rotations of the tetrahedron. [PLM] title = Representation and Partial Automation of the Principia Logico-Metaphysica in Isabelle/HOL author = Daniel Kirchner topic = Logic/Philosophical aspects date = 2017-09-17 notify = daniel@ekpyron.org abstract =

We present an embedding of the second-order fragment of the Theory of Abstract Objects as described in Edward Zalta's upcoming work Principia Logico-Metaphysica (PLM) in the automated reasoning framework Isabelle/HOL. The Theory of Abstract Objects is a metaphysical theory that reifies property patterns, as they occur, for example, in the abstract reasoning of mathematics, as abstract objects and provides an axiomatic framework that allows one to reason about these objects. It thereby serves as a fundamental metaphysical theory that can be used to axiomatize and describe a wide range of philosophical objects, such as Platonic forms or Leibniz' concepts, and has the ambition to function as a foundational theory of mathematics. The target theory of our embedding as described in chapters 7-9 of PLM employs a modal relational type theory as logical foundation for which a representation in functional type theory is known to be challenging.

Nevertheless, we arrive at a functioning representation of the theory in the functional logic of Isabelle/HOL based on a semantical representation of an Aczel-model of the theory. Based on this representation we construct an implementation of the deductive system of PLM which allows one to automatically and interactively find and verify theorems of PLM.

Our work thereby supports the concept of shallow semantical embeddings of logical systems in HOL as a universal tool for logical reasoning as promoted by Christoph Benzmüller.

The most notable result of the presented work is the discovery of a previously unknown paradox in the formulation of the Theory of Abstract Objects. The embedding of the theory in Isabelle/HOL played a vital part in this discovery. Furthermore it was possible to immediately offer several options to modify the theory to guarantee its consistency. Thereby our work could provide a significant contribution to the development of a proper grounding for object theory.

[KD_Tree] title = Multidimensional Binary Search Trees author = Martin Rau<> topic = Computer science/Data structures date = 2019-05-30 notify = martin.rau@tum.de, mrtnrau@googlemail.com abstract = This entry provides a formalization of multidimensional binary search trees, also known as k-d trees. It includes a balanced build algorithm as well as the nearest neighbor algorithm and the range search algorithm. It is based on the papers Multidimensional binary search trees used for associative searching and An Algorithm for Finding Best Matches in Logarithmic Expected Time. extra-history = Change history: [2020-04-15]: Change representation of k-dimensional points from 'list' to HOL-Analysis.Finite_Cartesian_Product 'vec'. Update proofs to incorporate HOL-Analysis 'dist' and 'cbox' primitives. [Closest_Pair_Points] title = Closest Pair of Points Algorithms author = Martin Rau , Tobias Nipkow topic = Computer science/Algorithms/Geometry date = 2020-01-13 notify = martin.rau@tum.de, nipkow@in.tum.de abstract = This entry provides two related verified divide-and-conquer algorithms solving the fundamental Closest Pair of Points problem in Computational Geometry. Functional correctness and the optimal running time of O(n log n) are proved. Executable code is generated which is empirically competitive with handwritten reference implementations. extra-history = Change history: [2020-04-14]: Incorporate Time_Monad of the AFP entry Root_Balanced_Tree. [Approximation_Algorithms] title = Verified Approximation Algorithms author = Robin Eßmann , Tobias Nipkow , Simon Robillard , Ujkan Sulejmani<> topic = Computer science/Algorithms/Approximation date = 2020-01-16 notify = nipkow@in.tum.de abstract = We present the first formal verification of approximation algorithms for NP-complete optimization problems: vertex cover, set cover, independent set, center selection, load balancing, and bin packing.
The proofs correct incompletenesses in existing proofs and improve the approximation ratio in one case. A detailed description of our work (excluding center selection) has been published in the proceedings of IJCAR 2020. [Diophantine_Eqns_Lin_Hom] title = Homogeneous Linear Diophantine Equations author = Florian Messner , Julian Parsert , Jonas Schöpf , Christian Sternagel topic = Computer science/Algorithms/Mathematical, Mathematics/Number theory, Tools license = LGPL date = 2017-10-14 notify = c.sternagel@gmail.com, julian.parsert@gmail.com abstract = We formalize the theory of homogeneous linear diophantine equations, focusing on two main results: (1) an abstract characterization of minimal complete sets of solutions, and (2) an algorithm computing them. Both, the characterization and the algorithm are based on previous work by Huet. Our starting point is a simple but inefficient variant of Huet's lexicographic algorithm incorporating improved bounds due to Clausen and Fortenbacher. We proceed by proving its soundness and completeness. Finally, we employ code equations to obtain a reasonably efficient implementation. Thus, we provide a formally verified solver for homogeneous linear diophantine equations. [Winding_Number_Eval] title = Evaluate Winding Numbers through Cauchy Indices author = Wenda Li topic = Mathematics/Analysis date = 2017-10-17 notify = wl302@cam.ac.uk, liwenda1990@hotmail.com abstract = In complex analysis, the winding number measures the number of times a path (counterclockwise) winds around a point, while the Cauchy index can approximate how the path winds. This entry provides a formalisation of the Cauchy index, which is then shown to be related to the winding number. In addition, this entry also offers a tactic that enables users to evaluate the winding number by calculating Cauchy indices. 
[Count_Complex_Roots] title = Count the Number of Complex Roots author = Wenda Li topic = Mathematics/Analysis date = 2017-10-17 notify = wl302@cam.ac.uk, liwenda1990@hotmail.com abstract = Based on evaluating Cauchy indices through remainder sequences, this entry provides an effective procedure to count the number of complex roots (with multiplicity) of a polynomial within various shapes (e.g., rectangle, circle and half-plane). Potential applications of this entry include certified complex root isolation (of a polynomial) and testing the Routh-Hurwitz stability criterion (i.e., to check whether all the roots of some characteristic polynomial have negative real parts). extra-history = Change history: [2021-10-26]: resolved the roots-on-the-border problem in the rectangular case (revision 82a159e398cf). [Buchi_Complementation] title = Büchi Complementation author = Julian Brunner topic = Computer science/Automata and formal languages date = 2017-10-19 notify = brunnerj@in.tum.de abstract = This entry provides a verified implementation of rank-based Büchi Complementation. The verification is done in three steps:
  1. Definition of odd rankings and proof that an automaton rejects a word iff there exists an odd ranking for it.
  2. Definition of the complement automaton and proof that it accepts exactly those words for which there is an odd ranking.
  3. Verified implementation of the complement automaton using the Isabelle Collections Framework.
[Transition_Systems_and_Automata] title = Transition Systems and Automata author = Julian Brunner topic = Computer science/Automata and formal languages date = 2017-10-19 notify = brunnerj@in.tum.de abstract = This entry provides a very abstract theory of transition systems that can be instantiated to express various types of automata. A transition system is typically instantiated by providing a set of initial states, a predicate for enabled transitions, and a transition execution function. From this, it defines the concepts of finite and infinite paths as well as the set of reachable states, among other things. Many useful theorems, from basic path manipulation rules to coinduction and run construction rules, are proven in this abstract transition system context. The library comes with instantiations for DFAs, NFAs, and Büchi automata. [Kuratowski_Closure_Complement] title = The Kuratowski Closure-Complement Theorem author = Peter Gammie , Gianpaolo Gioiosa<> topic = Mathematics/Topology date = 2017-10-26 notify = peteg42@gmail.com abstract = We discuss a topological curiosity discovered by Kuratowski (1922): the fact that the number of distinct operators on a topological space generated by compositions of closure and complement never exceeds 14, and is exactly 14 in the case of R. In addition, we prove a theorem due to Chagrov (1982) that classifies topological spaces according to the number of such operators they support. [Hybrid_Multi_Lane_Spatial_Logic] title = Hybrid Multi-Lane Spatial Logic author = Sven Linker topic = Logic/General logic/Modal logic date = 2017-11-06 notify = s.linker@liverpool.ac.uk abstract = We present a semantic embedding of a spatio-temporal multi-modal logic, specifically defined to reason about motorway traffic, into Isabelle/HOL. The semantic model is an abstraction of a motorway, emphasising local spatial properties, and parameterised by the types of sensors deployed in the vehicles. 
We use the logic to define controller constraints to ensure safety, i.e., the absence of collisions on the motorway. After proving safety with a restrictive definition of sensors, we relax these assumptions and show how to amend the controller constraints to still guarantee safety. [Dirichlet_L] title = Dirichlet L-Functions and Dirichlet's Theorem author = Manuel Eberl topic = Mathematics/Number theory, Mathematics/Algebra date = 2017-12-21 notify = manuel@pruvisto.org abstract =

This article provides a formalisation of Dirichlet characters and Dirichlet L-functions including proofs of their basic properties – most notably their analyticity, their areas of convergence, and their non-vanishing for ℜ(s) ≥ 1. All of this is built in a very high-level style using Dirichlet series. The proof of the non-vanishing follows a very short and elegant proof by Newman, which we attempt to reproduce faithfully at a similar level of abstraction in Isabelle.

This also leads to a relatively short proof of Dirichlet’s Theorem, which states that, if h and n are coprime, there are infinitely many primes p with p ≡ h (mod n).

[Symmetric_Polynomials] title = Symmetric Polynomials author = Manuel Eberl topic = Mathematics/Algebra date = 2018-09-25 notify = manuel@pruvisto.org abstract =

A symmetric polynomial is a polynomial in variables X₁,…,Xₙ that does not discriminate between its variables, i.e. it is invariant under any permutation of them. These polynomials are important in the study of the relationship between the coefficients of a univariate polynomial and its roots in its algebraic closure.

This article provides a definition of symmetric polynomials and the elementary symmetric polynomials e₁,…,eₙ and proofs of their basic properties, including three notable ones:

  • First, Vieta's formula, which gives an explicit expression for the k-th coefficient of a univariate monic polynomial in terms of its roots x₁,…,xₙ, namely cₖ = (−1)ⁿ⁻ᵏ eₙ₋ₖ(x₁,…,xₙ).
  • Second, the Fundamental Theorem of Symmetric Polynomials, which states that any symmetric polynomial is itself a uniquely determined polynomial combination of the elementary symmetric polynomials.
  • Third, as a corollary of the previous two, that given a polynomial over some ring R, any symmetric polynomial combination of its roots is also in R even when the roots are not.

Both the symmetry property itself and the witness for the Fundamental Theorem are executable.
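To make Vieta's formula concrete, the relationship between the coefficients of a monic polynomial and the elementary symmetric polynomials of its roots can be checked numerically. The following Python sketch (function names are illustrative and unrelated to the formalisation) expands ∏(x − rᵢ) and compares each coefficient against the formula cₖ = (−1)ⁿ⁻ᵏ eₙ₋ₖ(x₁,…,xₙ):

```python
from itertools import combinations
from math import prod

def elem_sym(k, xs):
    """Elementary symmetric polynomial e_k: sum of all products of k distinct entries."""
    return sum(prod(c) for c in combinations(xs, k))

def monic_coeffs(roots):
    """Coefficients c_0, ..., c_n (lowest degree first) of prod (x - r)."""
    coeffs = [1]
    for r in roots:
        new = [0] * (len(coeffs) + 1)
        for i, c in enumerate(coeffs):
            new[i + 1] += c      # contribution of  x * c*x^i
            new[i] -= r * c      # contribution of -r * c*x^i
        coeffs = new
    return coeffs

roots = [1, 2, 3]                # (x-1)(x-2)(x-3) = x^3 - 6x^2 + 11x - 6
n = len(roots)
coeffs = monic_coeffs(roots)
for k in range(n):               # Vieta: c_k = (-1)^(n-k) * e_(n-k)(roots)
    assert coeffs[k] == (-1) ** (n - k) * elem_sym(n - k, roots)
```

Running the loop for the sample roots confirms all three coefficients, matching the first bullet point above.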

[Taylor_Models] title = Taylor Models author = Christoph Traut<>, Fabian Immler topic = Computer science/Algorithms/Mathematical, Computer science/Data structures, Mathematics/Analysis, Mathematics/Algebra date = 2018-01-08 notify = immler@in.tum.de abstract = We present a formally verified implementation of multivariate Taylor models. Taylor models are a form of rigorous polynomial approximation, consisting of an approximation polynomial based on Taylor expansions, combined with a rigorous bound on the approximation error. Taylor models were introduced as a tool to mitigate the dependency problem of interval arithmetic. Our implementation automatically computes Taylor models for the class of elementary functions, expressed by composition of arithmetic operations and basic functions like exp, sin, or square root. [Green] title = An Isabelle/HOL formalisation of Green's Theorem author = Mohammad Abdulaziz , Lawrence C. Paulson topic = Mathematics/Analysis date = 2018-01-11 notify = mohammad.abdulaziz8@gmail.com, lp15@cam.ac.uk abstract = We formalise a statement of Green’s theorem—the first formalisation to our knowledge—in Isabelle/HOL. The theorem statement that we formalise is enough for most applications, especially in physics and engineering. Our formalisation is made possible by a novel proof that avoids the ubiquitous line integral cancellation argument. This eliminates the need to formalise orientations and region boundaries explicitly with respect to the outwards-pointing normal vector. Instead we appeal to a homological argument about equivalences between paths. 
[AI_Planning_Languages_Semantics] title = AI Planning Languages Semantics author = Mohammad Abdulaziz , Peter Lammich topic = Computer science/Artificial intelligence date = 2020-10-29 notify = mohammad.abdulaziz8@gmail.com abstract = This is an Isabelle/HOL formalisation of the semantics of the multi-valued planning tasks language that is used by the planning system Fast-Downward, the STRIPS fragment of the Planning Domain Definition Language (PDDL), and the STRIPS soundness meta-theory developed by Vladimir Lifschitz. It also contains formally verified checkers for the well-formedness of problems specified in either language as well as the correctness of potential solutions. The formalisation in this entry was described in an earlier publication. [Verified_SAT_Based_AI_Planning] title = Verified SAT-Based AI Planning author = Mohammad Abdulaziz , Friedrich Kurz <> topic = Computer science/Artificial intelligence date = 2020-10-29 notify = mohammad.abdulaziz8@gmail.com abstract = We present an executable formally verified SAT encoding of classical AI planning that is based on the encodings by Kautz and Selman and the one by Rintanen et al. The encoding was experimentally tested and shown to be usable for reasonably sized standard AI planning benchmarks. We also use it as a reference to test a state-of-the-art SAT-based planner, showing that it sometimes falsely claims that problems have no solutions of certain lengths. The formalisation in this submission was described in an independent publication. [Gromov_Hyperbolicity] title = Gromov Hyperbolicity author = Sebastien Gouezel<> topic = Mathematics/Geometry date = 2018-01-16 notify = sebastien.gouezel@univ-rennes1.fr abstract = A geodesic metric space is Gromov hyperbolic if all its geodesic triangles are thin, i.e., every side is contained in a fixed thickening of the two other sides.
While this definition looks innocuous, it has proved extremely important and versatile in modern geometry since its introduction by Gromov. We formalize the basic classical properties of Gromov hyperbolic spaces, notably the Morse lemma asserting that quasigeodesics are close to geodesics, and the invariance of hyperbolicity under quasi-isometries. We define and study the Gromov boundary and its associated distance, and prove that a quasi-isometry between Gromov hyperbolic spaces extends to a homeomorphism of the boundaries. We also prove a less classical theorem, by Bonk and Schramm, asserting that a Gromov hyperbolic space embeds isometrically in a geodesic Gromov-hyperbolic space. As the original proof uses a transfinite sequence of Cauchy completions, this is an interesting formalization exercise. Along the way, we introduce basic material on isometries, quasi-isometries, Lipschitz maps, geodesic spaces, the Hausdorff distance, the Cauchy completion of a metric space, and the exponential on extended real numbers. [Ordered_Resolution_Prover] title = Formalization of Bachmair and Ganzinger's Ordered Resolution Prover author = Anders Schlichtkrull , Jasmin Christian Blanchette , Dmitriy Traytel , Uwe Waldmann topic = Logic/General logic/Mechanization of proofs date = 2018-01-18 notify = andschl@dtu.dk, j.c.blanchette@vu.nl abstract = This Isabelle/HOL formalization covers Sections 2 to 4 of Bachmair and Ganzinger's "Resolution Theorem Proving" chapter in the Handbook of Automated Reasoning. This includes soundness and completeness of unordered and ordered variants of ground resolution with and without literal selection, the standard redundancy criterion, a general framework for refutational theorem proving, and soundness and completeness of an abstract first-order prover.
[Chandy_Lamport] title = A Formal Proof of The Chandy--Lamport Distributed Snapshot Algorithm author = Ben Fiedler , Dmitriy Traytel topic = Computer science/Algorithms/Distributed date = 2020-07-21 notify = ben.fiedler@inf.ethz.ch, traytel@inf.ethz.ch abstract = We provide a suitable distributed system model and implementation of the Chandy--Lamport distributed snapshot algorithm [ACM Transactions on Computer Systems, 3, 63-75, 1985]. Our main result is a formal termination and correctness proof of the Chandy--Lamport algorithm and its use in stable property detection. [BNF_Operations] title = Operations on Bounded Natural Functors author = Jasmin Christian Blanchette , Andrei Popescu , Dmitriy Traytel topic = Tools date = 2017-12-19 notify = jasmin.blanchette@gmail.com,uuomul@yahoo.com,traytel@inf.ethz.ch abstract = This entry formalizes the closure property of bounded natural functors (BNFs) under seven operations. These operations and the corresponding proofs constitute the core of Isabelle's (co)datatype package. To be close to the implemented tactics, the proofs are deliberately formulated as detailed apply scripts. The (co)datatypes together with (co)induction principles and (co)recursors are byproducts of the fixpoint operations LFP and GFP. Composition of BNFs is subdivided into four simpler operations: Compose, Kill, Lift, and Permute. The N2M operation provides mutual (co)induction principles and (co)recursors for nested (co)datatypes. 
[LLL_Basis_Reduction] title = A verified LLL algorithm author = Ralph Bottesch <>, Jose Divasón , Maximilian Haslbeck , Sebastiaan Joosten , René Thiemann , Akihisa Yamada<> topic = Computer science/Algorithms/Mathematical, Mathematics/Algebra date = 2018-02-02 notify = ralph.bottesch@uibk.ac.at, jose.divason@unirioja.es, maximilian.haslbeck@uibk.ac.at, s.j.c.joosten@utwente.nl, rene.thiemann@uibk.ac.at, ayamada@trs.cm.is.nagoya-u.ac.jp abstract = The Lenstra-Lenstra-Lovász basis reduction algorithm, also known as the LLL algorithm, is an algorithm to find a basis with short, nearly orthogonal vectors of an integer lattice. It can thereby also be seen as an approximation algorithm for the shortest vector problem (SVP), which is NP-hard; the approximation quality depends solely on the dimension of the lattice, not on the lattice itself. The algorithm also has many applications in diverse fields of computer science, from cryptanalysis to number theory, but it is especially well known because it was used to implement the first polynomial-time algorithm to factor polynomials. In this work we present the first mechanized soundness proof of the LLL algorithm to compute short vectors in lattices. The formalization follows a textbook by von zur Gathen and Gerhard. extra-history = Change history: [2018-04-16]: Integrated formal complexity bounds (Haslbeck, Thiemann) [2018-05-25]: Integrated much faster LLL implementation based on integer arithmetic (Bottesch, Haslbeck, Thiemann) [LLL_Factorization] title = A verified factorization algorithm for integer polynomials with polynomial complexity author = Jose Divasón , Sebastiaan Joosten , René Thiemann , Akihisa Yamada topic = Mathematics/Algebra date = 2018-02-06 notify = jose.divason@unirioja.es, s.j.c.joosten@utwente.nl, rene.thiemann@uibk.ac.at, ayamada@trs.cm.is.nagoya-u.ac.jp abstract = Short vectors in lattices and factors of integer polynomials are related.
Each factor of an integer polynomial belongs to a certain lattice. When factoring polynomials, the condition that we are looking for an irreducible polynomial means that we must look for a small element in a lattice, which can be done by a basis reduction algorithm. In this development we formalize this connection and thereby one main application of the LLL basis reduction algorithm: an algorithm to factor square-free integer polynomials which runs in polynomial time. The work is based on our previous Berlekamp–Zassenhaus development, where the exponential reconstruction phase has been replaced by the polynomial-time basis reduction algorithm. Thanks to this formalization we found a serious flaw in a textbook. [Treaps] title = Treaps author = Maximilian Haslbeck , Manuel Eberl , Tobias Nipkow topic = Computer science/Data structures date = 2018-02-06 notify = manuel@pruvisto.org abstract =

A Treap is a binary tree whose nodes contain pairs consisting of some payload and an associated priority. It must have the search-tree property w.r.t. the payloads and the heap property w.r.t. the priorities. Treaps are an interesting data structure that is related to binary search trees (BSTs) in the following way: if one forgets all the priorities of a treap, the resulting BST is exactly the same as if one had inserted the elements into an empty BST in order of ascending priority. This means that a treap behaves like a BST where we can pretend the elements were inserted in a different order from the one in which they were actually inserted.

In particular, by choosing these priorities at random upon insertion of an element, we can pretend that we inserted the elements in random order, so that the shape of the resulting tree is that of a random BST no matter in what order we insert the elements. This is the main result of this formalisation.
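The treap/BST correspondence described above is easy to test on small examples. The Python sketch below (hypothetical helper names, min-heap convention on priorities, not code from the entry) inserts keys into a treap in arbitrary order and checks that the resulting shape equals that of a plain BST built by inserting the same keys in ascending order of priority:

```python
class Node:
    def __init__(self, key, prio):
        self.key, self.prio = key, prio
        self.left = self.right = None

def rot_right(y):
    x = y.left; y.left = x.right; x.right = y; return x

def rot_left(y):
    x = y.right; y.right = x.left; x.left = y; return x

def treap_insert(t, key, prio):
    """BST insert by key, then rotate up while the heap property is violated."""
    if t is None:
        return Node(key, prio)
    if key < t.key:
        t.left = treap_insert(t.left, key, prio)
        if t.left.prio < t.prio:
            t = rot_right(t)
    else:
        t.right = treap_insert(t.right, key, prio)
        if t.right.prio < t.prio:
            t = rot_left(t)
    return t

def bst_insert(t, key):
    """Plain BST insert, ignoring priorities."""
    if t is None:
        return Node(key, None)
    if key < t.key:
        t.left = bst_insert(t.left, key)
    else:
        t.right = bst_insert(t.right, key)
    return t

def shape(t):
    return None if t is None else (t.key, shape(t.left), shape(t.right))

pairs = [(5, 0.3), (2, 0.1), (8, 0.5), (1, 0.9)]    # (key, priority)
treap = None
for k, p in pairs:                                   # arbitrary insertion order
    treap = treap_insert(treap, k, p)
bst = None
for k, _ in sorted(pairs, key=lambda kp: kp[1]):     # ascending priority
    bst = bst_insert(bst, k)
assert shape(treap) == shape(bst)
```

The final assertion is exactly the "forget the priorities" property stated in the abstract, here for one fixed priority assignment.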

[Skip_Lists] title = Skip Lists author = Max W. Haslbeck , Manuel Eberl topic = Computer science/Data structures date = 2020-01-09 notify = max.haslbeck@gmx.de abstract =

Skip lists are sorted linked lists enhanced with shortcuts and are an alternative to binary search trees. A skip list consists of multiple levels of sorted linked lists where the list on level n is a subsequence of the list on level n − 1. In the ideal case, elements are skipped in such a way that a lookup in a skip list takes O(log n) time. In a randomised skip list the skipped elements are chosen randomly.

This entry contains formalized proofs of the textbook results about the expected height and the expected length of a search path in a randomised skip list.
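A minimal randomised skip list is short enough to sketch directly. The following Python code (an illustrative implementation, not derived from the formalisation) draws each node's level from a geometric distribution with p = 1/2 and descends level by level on lookup:

```python
import random

MAX_LEVEL = 8

class SkipNode:
    def __init__(self, key, level):
        self.key = key
        self.forward = [None] * level     # forward[i]: next node on level i

class SkipList:
    def __init__(self, p=0.5):
        self.p = p
        self.head = SkipNode(None, MAX_LEVEL)   # sentinel
        self.level = 1

    def _random_level(self):
        lvl = 1
        while random.random() < self.p and lvl < MAX_LEVEL:
            lvl += 1                      # geometric distribution of heights
        return lvl

    def insert(self, key):
        update = [self.head] * MAX_LEVEL  # rightmost node left of key, per level
        node = self.head
        for i in range(self.level - 1, -1, -1):
            while node.forward[i] and node.forward[i].key < key:
                node = node.forward[i]
            update[i] = node
        lvl = self._random_level()
        self.level = max(self.level, lvl)
        new = SkipNode(key, lvl)
        for i in range(lvl):              # splice into each level up to lvl
            new.forward[i] = update[i].forward[i]
            update[i].forward[i] = new

    def __contains__(self, key):
        node = self.head
        for i in range(self.level - 1, -1, -1):
            while node.forward[i] and node.forward[i].key < key:
                node = node.forward[i]
        node = node.forward[0]
        return node is not None and node.key == key
```

Whatever levels the random choices produce, level 0 always holds the full sorted list, so lookups remain correct; only the expected search-path length (the quantity analysed in the entry) depends on the randomness.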

[Mersenne_Primes] title = Mersenne primes and the Lucas–Lehmer test author = Manuel Eberl topic = Mathematics/Number theory date = 2020-01-17 notify = manuel@pruvisto.org abstract =

This article provides formal proofs of basic properties of Mersenne numbers, i.e. numbers of the form 2ⁿ − 1, and especially of Mersenne primes.

In particular, an efficient, verified, and executable version of the Lucas–Lehmer test is developed. This test decides primality for Mersenne numbers in time polynomial in n.
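The Lucas–Lehmer recurrence itself is very short. The straightforward Python transcription below (naive modular squaring, so without the efficiency of the verified version described above) decides primality of Mₚ = 2ᵖ − 1 for an odd prime exponent p:

```python
def lucas_lehmer(p):
    """Lucas-Lehmer test: is M_p = 2**p - 1 prime, for an odd prime p?"""
    m = 2 ** p - 1
    s = 4                        # s_0 = 4
    for _ in range(p - 2):       # s_{i+1} = s_i^2 - 2  (mod M_p)
        s = (s * s - 2) % m
    return s == 0                # M_p is prime iff s_{p-2} == 0

# Filter a few odd prime exponents down to the Mersenne-prime ones.
mersenne_exponents = [p for p in [3, 5, 7, 11, 13, 17, 19, 23, 31]
                      if lucas_lehmer(p)]
```

For example, the filter keeps 13 (2¹³ − 1 = 8191 is prime) but drops 11 (2¹¹ − 1 = 2047 = 23 · 89).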

[Hoare_Time] title = Hoare Logics for Time Bounds author = Maximilian P. L. Haslbeck , Tobias Nipkow topic = Computer science/Programming languages/Logics date = 2018-02-26 notify = haslbema@in.tum.de abstract = We study three different Hoare logics for reasoning about time bounds of imperative programs and formalize them in Isabelle/HOL: a classical Hoare-like logic due to Nielson, a logic with potentials due to Carbonneaux et al. and a separation logic following work by Atkey, Charguéraud and Pottier. These logics are formally shown to be sound and complete. Verification condition generators are developed and are shown sound and complete too. We also consider variants of the systems where we abstract from multiplicative constants in the running time bounds, thus supporting a big-O style of reasoning. Finally we compare the expressive power of the three systems. [Architectural_Design_Patterns] title = A Theory of Architectural Design Patterns author = Diego Marmsoler topic = Computer science/System description languages date = 2018-03-01 notify = diego.marmsoler@tum.de abstract = The following document formalizes and verifies several architectural design patterns. Each pattern specification is formalized in terms of a locale where the locale assumptions correspond to the assumptions which a pattern poses on an architecture. Thus, pattern specifications may build on top of each other by interpreting the corresponding locale. A pattern is verified using the framework provided by the AFP entry Dynamic Architectures. Currently, the document consists of formalizations of 4 different patterns: the singleton, the publisher subscriber, the blackboard pattern, and the blockchain pattern. Thereby, the publisher component of the publisher subscriber pattern is modeled as an instance of the singleton pattern and the blackboard pattern is modeled as an instance of the publisher subscriber pattern.
In general, this entry provides the first steps towards an overall theory of architectural design patterns. extra-history = Change history: [2018-05-25]: changing the major assumption for blockchain architectures from alternative minings to relative mining frequencies (revision 5043c5c71685)
[2019-04-08]: adapting the terminology: honest instead of trusted, dishonest instead of untrusted (revision 7af3431a22ae) [Weight_Balanced_Trees] title = Weight-Balanced Trees author = Tobias Nipkow , Stefan Dirix<> topic = Computer science/Data structures date = 2018-03-13 notify = nipkow@in.tum.de abstract = This theory provides a verified implementation of weight-balanced trees following the work of Hirai and Yamamoto who proved that all parameters in a certain range are valid, i.e. guarantee that insertion and deletion preserve weight-balance. Instead of a general theorem we provide parameterized proofs of preservation of the invariant that work for many (all?) valid parameters. [Fishburn_Impossibility] title = The Incompatibility of Fishburn-Strategyproofness and Pareto-Efficiency author = Felix Brandt , Manuel Eberl , Christian Saile , Christian Stricker topic = Mathematics/Games and economics date = 2018-03-22 notify = manuel@pruvisto.org abstract =

This formalisation contains the proof that there is no anonymous Social Choice Function for at least three agents and alternatives that fulfils both Pareto-Efficiency and Fishburn-Strategyproofness. It was derived from a proof of Brandt et al., which relies on an unverified translation of a fixed finite instance of the original problem to SAT. This Isabelle proof contains a machine-checked version of both the statement for exactly three agents and alternatives and the lifting to the general case.

[BNF_CC] title = Bounded Natural Functors with Covariance and Contravariance author = Andreas Lochbihler , Joshua Schneider topic = Computer science/Functional programming, Tools date = 2018-04-24 notify = mail@andreas-lochbihler.de, joshua.schneider@inf.ethz.ch abstract = Bounded natural functors (BNFs) provide a modular framework for the construction of (co)datatypes in higher-order logic. Their functorial operations, the mapper and relator, are restricted to a subset of the parameters, namely those where recursion can take place. For certain applications, such as free theorems, data refinement, quotients, and generalised rewriting, it is desirable that these operations do not ignore the other parameters. In this article, we formalise the generalisation BNFCC that extends the mapper and relator to covariant and contravariant parameters. We show that
  1. BNFCCs are closed under functor composition and least and greatest fixpoints,
  2. subtypes inherit the BNFCC structure under conditions that generalise those for the BNF case, and
  3. BNFCCs preserve quotients under mild conditions.
These proofs are carried out for abstract BNFCCs similar to the AFP entry BNF Operations. In addition, we apply the BNFCC theory to several concrete functors. [Modular_Assembly_Kit_Security] title = An Isabelle/HOL Formalization of the Modular Assembly Kit for Security Properties author = Oliver Bračevac , Richard Gay , Sylvia Grewe , Heiko Mantel , Henning Sudbrock , Markus Tasch topic = Computer science/Security date = 2018-05-07 notify = tasch@mais.informatik.tu-darmstadt.de abstract = The "Modular Assembly Kit for Security Properties" (MAKS) is a framework for both the definition and verification of possibilistic information-flow security properties at the specification level. MAKS supports the uniform representation of a wide range of possibilistic information-flow properties and provides support for the verification of such properties via unwinding results and compositionality results. We provide a formalization of this framework in Isabelle/HOL. [AxiomaticCategoryTheory] title = Axiom Systems for Category Theory in Free Logic author = Christoph Benzmüller , Dana Scott topic = Mathematics/Category theory date = 2018-05-23 notify = c.benzmueller@gmail.com abstract = This document provides a concise overview on the core results of our previous work on the exploration of axiom systems for category theory. Extending the previous studies (http://arxiv.org/abs/1609.01493) we include one further axiomatic theory in our experiments. This additional theory was suggested by Mac Lane in 1948. We show that the axioms proposed by Mac Lane are equivalent to the ones we studied before, which include an axiom set suggested by Scott in the 1970s and another axiom set proposed by Freyd and Scedrov in 1990, which we slightly modified to remedy a minor technical issue. [OpSets] title = OpSets: Sequential Specifications for Replicated Datatypes author = Martin Kleppmann , Victor B. F. Gomes , Dominic P. Mulligan , Alastair R.
Beresford topic = Computer science/Algorithms/Distributed, Computer science/Data structures date = 2018-05-10 notify = vb358@cam.ac.uk abstract = We introduce OpSets, an executable framework for specifying and reasoning about the semantics of replicated datatypes that provide eventual consistency in a distributed system, and for mechanically verifying algorithms that implement these datatypes. Our approach is simple but expressive, allowing us to succinctly specify a variety of abstract datatypes, including maps, sets, lists, text, graphs, trees, and registers. Our datatypes are also composable, enabling the construction of complex data structures. To demonstrate the utility of OpSets for analysing replication algorithms, we highlight an important correctness property for collaborative text editing that has traditionally been overlooked; algorithms that do not satisfy this property can exhibit awkward interleaving of text. We use OpSets to specify this correctness property and prove that although one existing replication algorithm satisfies this property, several other published algorithms do not. [Irrationality_J_Hancl] title = Irrational Rapidly Convergent Series author = Angeliki Koutsoukou-Argyraki , Wenda Li topic = Mathematics/Number theory, Mathematics/Analysis date = 2018-05-23 notify = ak2110@cam.ac.uk, wl302@cam.ac.uk abstract = We formalize with Isabelle/HOL a proof of a theorem by J. Hancl asserting the irrationality of the sum of a series consisting of rational numbers, built up by sequences that fulfill certain properties. Even though the criterion is a number theoretic result, the proof makes use only of analytical arguments. We also formalize a corollary of the theorem for a specific series fulfilling the assumptions of the theorem. 
[Optimal_BST] title = Optimal Binary Search Trees author = Tobias Nipkow , Dániel Somogyi <> topic = Computer science/Algorithms, Computer science/Data structures date = 2018-05-27 notify = nipkow@in.tum.de abstract = This article formalizes recursive algorithms for the construction of optimal binary search trees given fixed access frequencies. We follow Knuth (1971), Yao (1980) and Mehlhorn (1984). The algorithms are memoized with the help of the AFP article Monadification, Memoization and Dynamic Programming, thus yielding dynamic programming algorithms. [Projective_Geometry] title = Projective Geometry author = Anthony Bordg topic = Mathematics/Geometry date = 2018-06-14 notify = apdb3@cam.ac.uk abstract = We formalize the basics of projective geometry. In particular, we give a proof of the so-called Hessenberg's theorem in projective plane geometry. We also provide a proof of the so-called Desargues's theorem based on an axiomatization of (higher) projective space geometry using the notion of rank of a matroid. This last approach allows us to handle incidence relations in a homogeneous way, dealing only with points and without the need to talk explicitly about lines, planes or any higher entity. [Localization_Ring] title = The Localization of a Commutative Ring author = Anthony Bordg topic = Mathematics/Algebra date = 2018-06-14 notify = apdb3@cam.ac.uk abstract = We formalize the localization of a commutative ring R with respect to a multiplicative subset (i.e. a submonoid of R seen as a multiplicative monoid). This localization is itself a commutative ring and we build the natural homomorphism of rings from R to its localization. [Minsky_Machines] title = Minsky Machines author = Bertram Felgenhauer<> topic = Logic/Computability date = 2018-08-14 notify = int-e@gmx.de abstract =

We formalize undecidability results for Minsky machines. To this end, we also formalize recursive inseparability.

We start by proving that Minsky machines can compute arbitrary primitive recursive and recursive functions. We then show that there is a deterministic Minsky machine with one argument and two final states such that the set of inputs that are accepted in one state is recursively inseparable from the set of inputs that are accepted in the other state.

As a corollary, the set of Minsky configurations that reach the first state but not the second is recursively inseparable from the set of Minsky configurations that reach the second state but not the first. In particular, both these sets are undecidable.

We do not prove that recursive functions can simulate Minsky machines.

[Neumann_Morgenstern_Utility] title = Von-Neumann-Morgenstern Utility Theorem author = Julian Parsert, Cezary Kaliszyk topic = Mathematics/Games and economics license = LGPL date = 2018-07-04 notify = julian.parsert@uibk.ac.at, cezary.kaliszyk@uibk.ac.at abstract = Utility functions form an essential part of game theory and economics. To guarantee the existence of utility functions, sufficient properties are usually assumed in an axiomatic manner. One famous and very common set of such assumptions is that of expected utility theory. Here, the rationality, continuity, and independence of preferences are assumed. The von Neumann–Morgenstern utility theorem shows that these assumptions are necessary and sufficient for an expected utility function to exist. This theorem was proven by von Neumann and Morgenstern in ``Theory of Games and Economic Behavior'' which is regarded as one of the most influential works in game theory. The formalization includes formal definitions of the underlying concepts including continuity and independence of preferences. [Simplex] title = An Incremental Simplex Algorithm with Unsatisfiable Core Generation author = Filip Marić , Mirko Spasić , René Thiemann topic = Computer science/Algorithms/Optimization date = 2018-08-24 notify = rene.thiemann@uibk.ac.at abstract = We present an Isabelle/HOL formalization and total correctness proof for the incremental version of the Simplex algorithm which is used in most state-of-the-art SMT solvers. It supports extraction of satisfying assignments, extraction of minimal unsatisfiable cores, incremental assertion of constraints and backtracking. The formalization relies on stepwise program refinement, starting from a simple specification, going through a number of refinement steps, and ending up in a fully executable functional implementation. Symmetries present in the algorithm are handled with special care.
[Budan_Fourier] title = The Budan-Fourier Theorem and Counting Real Roots with Multiplicity author = Wenda Li topic = Mathematics/Analysis date = 2018-09-02 notify = wl302@cam.ac.uk, liwenda1990@hotmail.com abstract = This entry is mainly about counting and approximating real roots (of a polynomial) with multiplicity. We have first formalised the Budan-Fourier theorem: given a polynomial with real coefficients, we can calculate sign variations on Fourier sequences to over-approximate the number of real roots (counting multiplicity) within an interval. When all roots are known to be real, the over-approximation becomes tight: we can utilise this theorem to count real roots exactly. It is also worth noting that Descartes' rule of signs is a direct consequence of the Budan-Fourier theorem, and has been included in this entry. In addition, we have extended the previously formalised Sturm's theorem to count real roots with multiplicity, while the original Sturm's theorem only counts distinct real roots. Compared to the Budan-Fourier theorem, our extended Sturm's theorem always counts roots exactly but may suffer from greater computational cost. [Quaternions] title = Quaternions author = Lawrence C. Paulson topic = Mathematics/Algebra, Mathematics/Geometry date = 2018-09-05 notify = lp15@cam.ac.uk abstract = This theory is inspired by the HOL Light development of quaternions, but follows its own route. Quaternions are developed coinductively, as in the existing formalisation of the complex numbers. Quaternions are quickly shown to belong to the type classes of real normed division algebras and real inner product spaces. They therefore inherit a great body of facts involving algebraic laws, limits, continuity, etc., which must be proved explicitly in the HOL Light version. The development concludes with the geometric interpretation of the product of imaginary quaternions.
[Octonions] title = Octonions author = Angeliki Koutsoukou-Argyraki topic = Mathematics/Algebra, Mathematics/Geometry date = 2018-09-14 notify = ak2110@cam.ac.uk abstract = We develop the basic theory of Octonions, including various identities and properties of the octonions and of the octonionic product, a description of 7D isometries and representations of orthogonal transformations. To this end we first develop the theory of the vector cross product in 7 dimensions. The development of the theory of Octonions is inspired by that of the theory of Quaternions by Lawrence Paulson. However, we do not work within the type class real_algebra_1 because the octonionic product is not associative. [Aggregation_Algebras] title = Aggregation Algebras author = Walter Guttmann topic = Mathematics/Algebra date = 2018-09-15 notify = walter.guttmann@canterbury.ac.nz abstract = We develop algebras for aggregation and minimisation for weight matrices and for edge weights in graphs. We verify the correctness of Prim's and Kruskal's minimum spanning tree algorithms based on these algebras. We also show numerous instances of these algebras based on linearly ordered commutative semigroups. extra-history = Change history: [2020-12-09]: moved Hoare logic to HOL-Hoare, moved spanning trees to Relational_Minimum_Spanning_Trees (revision dbb9bfaf4283) [Prime_Number_Theorem] title = The Prime Number Theorem author = Manuel Eberl , Lawrence C. Paulson topic = Mathematics/Number theory date = 2018-09-19 notify = manuel@pruvisto.org abstract =

This article provides a short proof of the Prime Number Theorem in several equivalent forms, most notably π(x) ~ x/ln x, where π(x) is the number of primes no larger than x. It also defines other basic number-theoretic functions related to primes, like Chebyshev's functions ϑ and ψ and the “n-th prime number” function p_n. We also show various bounds and relationships between these functions. Lastly, we derive Mertens' First and Second Theorem, i.e. ∑_{p≤x} ln p/p = ln x + O(1) and ∑_{p≤x} 1/p = ln ln x + M + O(1/ln x). We also give explicit bounds for the remainder terms.

The proof of the Prime Number Theorem builds on a library of Dirichlet series and analytic combinatorics. We essentially follow the presentation by Newman. The core part of the proof is a Tauberian theorem for Dirichlet series, which is proven using complex analysis and then used to strengthen Mertens' First Theorem to ∑_{p≤x} ln p/p = ln x + c + o(1).

A variant of this proof has been formalised before by Harrison in HOL Light, and formalisations of Selberg's elementary proof exist both by Avigad et al. in Isabelle and by Carneiro in Metamath. The advantage of the analytic proof is that, while it requires more powerful mathematical tools, it is considerably shorter and clearer. This article attempts to provide a short and clear formalisation of all components of that proof using the full range of mathematical machinery available in Isabelle, staying as close as possible to Newman's simple paper proof.
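The asymptotic π(x) ~ x/ln x can be illustrated numerically with a small sieve. This is an informal Python sketch, entirely separate from the Isabelle development; `prime_pi` is a hypothetical helper name:

```python
import math

def prime_pi(x: int) -> int:
    """pi(x): the number of primes no larger than x, via a sieve of Eratosthenes."""
    if x < 2:
        return 0
    sieve = bytearray([1]) * (x + 1)
    sieve[0] = sieve[1] = 0
    for p in range(2, math.isqrt(x) + 1):
        if sieve[p]:
            # Cross out all multiples of p starting at p*p.
            sieve[p * p :: p] = bytearray(len(range(p * p, x + 1, p)))
    return sum(sieve)

# The ratio pi(x) / (x / ln x) tends to 1, but quite slowly:
for x in (10**3, 10**4, 10**5):
    print(x, prime_pi(x), prime_pi(x) / (x / math.log(x)))
```

Even at x = 10^5 the ratio is still around 1.10, which is why the sharper forms with remainder terms (as in the Mertens results above) matter in practice.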

[Signature_Groebner] title = Signature-Based Gröbner Basis Algorithms author = Alexander Maletzky topic = Mathematics/Algebra, Computer science/Algorithms/Mathematical date = 2018-09-20 notify = alexander.maletzky@risc.jku.at abstract =

This article formalizes signature-based algorithms for computing Gröbner bases. Such algorithms are, in general, superior to other algorithms in terms of efficiency, and have not been formalized in any proof assistant so far. The present development is both generic, in the sense that most known variants of signature-based algorithms are covered by it, and effectively executable on concrete input thanks to Isabelle's code generator. Sample computations of benchmark problems show that the verified implementation of signature-based algorithms indeed outperforms the existing implementation of Buchberger's algorithm in Isabelle/HOL.

Besides total correctness of the algorithms, the article also proves that under certain conditions they a priori detect and avoid all useless zero-reductions, and always return 'minimal' (in some sense) Gröbner bases if an input parameter is chosen in the right way.

The formalization follows the recent survey article by Eder and Faugère.

[Factored_Transition_System_Bounding] title = Upper Bounding Diameters of State Spaces of Factored Transition Systems author = Friedrich Kurz <>, Mohammad Abdulaziz topic = Computer science/Automata and formal languages, Mathematics/Graph theory date = 2018-10-12 notify = friedrich.kurz@tum.de, mohammad.abdulaziz@in.tum.de abstract = A completeness threshold is required to guarantee the completeness of planning as satisfiability, and bounded model checking of safety properties. One valid completeness threshold is the diameter of the underlying transition system. The diameter is the maximum element in the set of lengths of all shortest paths between pairs of states. The diameter is not calculated exactly in our setting, where the transition system is succinctly described using a (propositionally) factored representation. Rather, an upper bound on the diameter is calculated compositionally, by bounding the diameters of small abstract subsystems and then composing those bounds. We port a HOL4 formalisation of a compositional algorithm for computing a relatively tight upper bound on the system diameter. This compositional algorithm exploits acyclicity in the state space to achieve compositionality, and it was introduced by Abdulaziz et al. The formalisation that we port is described as a part of another paper by Abdulaziz et al. As a part of this porting we developed a library about transition systems, which should be of use in future related mechanisation efforts. [Smooth_Manifolds] title = Smooth Manifolds author = Fabian Immler , Bohua Zhan topic = Mathematics/Analysis, Mathematics/Topology date = 2018-10-22 notify = immler@in.tum.de, bzhan@ios.ac.cn abstract = We formalize the definition and basic properties of smooth manifolds in Isabelle/HOL. Concepts covered include partition of unity, tangent and cotangent spaces, and the fundamental theorem of path integrals. We also examine some concrete manifolds such as spheres and projective spaces.
The formalization makes extensive use of the analysis and linear algebra libraries in Isabelle/HOL, in particular its “types-to-sets” mechanism. [Matroids] title = Matroids author = Jonas Keinholz<> topic = Mathematics/Combinatorics date = 2018-11-16 notify = manuel@pruvisto.org abstract =

This article defines the combinatorial structures known as Independence Systems and Matroids and provides basic concepts and theorems related to them. These structures play an important role in combinatorial optimisation, e.g. in greedy algorithms such as Kruskal's algorithm. The development is based on Oxley's “What is a Matroid?”.

[Graph_Saturation] title = Graph Saturation author = Sebastiaan J. C. Joosten<> topic = Logic/Rewriting, Mathematics/Graph theory date = 2018-11-23 notify = sjcjoosten@gmail.com abstract = This is an Isabelle/HOL formalisation of graph saturation, closely following a paper by the author on graph saturation. Nine out of ten lemmas of the original paper are proven in this formalisation. The formalisation additionally includes two theorems that show the main premise of the paper: that consistency and entailment are decided through graph saturation. This formalisation does not give executable code, and it does not implement any of the optimisations suggested in the paper. [Functional_Ordered_Resolution_Prover] title = A Verified Functional Implementation of Bachmair and Ganzinger's Ordered Resolution Prover author = Anders Schlichtkrull , Jasmin Christian Blanchette , Dmitriy Traytel topic = Logic/General logic/Mechanization of proofs date = 2018-11-23 notify = andschl@dtu.dk,j.c.blanchette@vu.nl,traytel@inf.ethz.ch abstract = This Isabelle/HOL formalization refines the abstract ordered resolution prover presented in Section 4.3 of Bachmair and Ganzinger's "Resolution Theorem Proving" chapter in the Handbook of Automated Reasoning. The result is a functional implementation of a first-order prover. [Auto2_HOL] title = Auto2 Prover author = Bohua Zhan topic = Tools date = 2018-11-20 notify = bzhan@ios.ac.cn abstract = Auto2 is a saturation-based heuristic prover for higher-order logic, implemented as a tactic in Isabelle. This entry contains the instantiation of auto2 for Isabelle/HOL, along with two basic examples: solutions to some of Pelletier's problems, and elementary number theory of primes.
[Order_Lattice_Props] title = Properties of Orderings and Lattices author = Georg Struth topic = Mathematics/Order date = 2018-12-11 notify = g.struth@sheffield.ac.uk abstract = These components add further fundamental order and lattice-theoretic concepts and properties to Isabelle's libraries. They follow by and large the introductory sections of the Compendium of Continuous Lattices, covering directed and filtered sets, down-closed and up-closed sets, ideals and filters, Galois connections, closure and co-closure operators. Some emphasis is on duality and morphisms between structures, as in the Compendium. To this end, three ad-hoc approaches to duality are compared. [Quantales] title = Quantales author = Georg Struth topic = Mathematics/Algebra date = 2018-12-11 notify = g.struth@sheffield.ac.uk abstract = These mathematical components formalise basic properties of quantales, together with some important models, constructions, and concepts, including quantic nuclei and conuclei. [Transformer_Semantics] title = Transformer Semantics author = Georg Struth topic = Mathematics/Algebra, Computer science/Semantics date = 2018-12-11 notify = g.struth@sheffield.ac.uk abstract = These mathematical components formalise predicate transformer semantics for programs, yet currently only for partial correctness and in the absence of faults. A first part for isotone (or monotone), Sup-preserving and Inf-preserving transformers follows Back and von Wright's approach, with additional emphasis on the quantalic structure of algebras of transformers. The second part develops Sup-preserving and Inf-preserving predicate transformers from the powerset monad, via its Kleisli category and Eilenberg-Moore algebras, with emphasis on adjunctions and dualities, as well as isomorphisms between relations, state transformers and predicate transformers. 
[Concurrent_Revisions] title = Formalization of Concurrent Revisions author = Roy Overbeek topic = Computer science/Concurrency date = 2018-12-25 notify = Roy.Overbeek@cwi.nl abstract = Concurrent revisions is a concurrency control model developed by Microsoft Research. It has many interesting properties that distinguish it from other well-known models such as transactional memory. One of these properties is determinacy: programs written within the model always produce the same outcome, independent of scheduling activity. The concurrent revisions model has an operational semantics, with an informal proof of determinacy. This document contains an Isabelle/HOL formalization of this semantics and the proof of determinacy. [Core_DOM] title = A Formal Model of the Document Object Model author = Achim D. Brucker , Michael Herzberg topic = Computer science/Data structures date = 2018-12-26 notify = adbrucker@0x5f.org abstract = In this AFP entry, we formalize the core of the Document Object Model (DOM). At its core, the DOM defines a tree-like data structure for representing documents in general and HTML documents in particular. It is the heart of any modern web browser. Formalizing the key concepts of the DOM is a prerequisite for the formal reasoning over client-side JavaScript programs and for the analysis of security concepts in modern web browsers. We present a formalization of the core DOM, with focus on the node-tree and the operations defined on node-trees, in Isabelle/HOL. We use the formalization to verify the functional correctness of the most important functions defined in the DOM standard. Moreover, our formalization is 1) extensible, i.e., can be extended without the need of re-proving already proven properties and 2) executable, i.e., we can generate executable code from our specification. [Core_SC_DOM] title = The Safely Composable DOM author = Achim D. 
Brucker , Michael Herzberg topic = Computer science/Data structures date = 2020-09-28 notify = adbrucker@0x5f.org, mail@michael-herzberg.de abstract = In this AFP entry, we formalize the core of the Safely Composable Document Object Model (SC DOM). The SC DOM improves the standard DOM (as formalized in the AFP entry "Core DOM") by strengthening the tree boundaries set by shadow roots: in the SC DOM, the shadow root is a sub-class of the document class (instead of a base class). This modification also results in changes to some API methods (e.g., getOwnerDocument) to return the nearest shadow root rather than the document root. As a result, many API methods that, when called on a node inside a shadow tree, would previously ``break out'' and return or modify nodes that are possibly outside the shadow tree, now stay within its boundaries. This change in behavior makes programs that operate on shadow trees more predictable for the developer and allows them to make more assumptions about other code accessing the DOM. [Shadow_SC_DOM] title = A Formal Model of the Safely Composable Document Object Model with Shadow Roots author = Achim D. Brucker , Michael Herzberg topic = Computer science/Data structures date = 2020-09-28 notify = adbrucker@0x5f.org, mail@michael-herzberg.de abstract = In this AFP entry, we extend our formalization of the safely composable DOM with Shadow Roots. This is a proposal for Shadow Roots with stricter safety guarantees than the standard-compliant formalization (see "Shadow DOM"). Shadow Roots are a recent proposal of the web community to support a component-based development approach for client-side web applications. Shadow roots are a significant extension to the DOM standard and, as web standards are condemned to be backward compatible, such extensions often result in complex specifications that may contain unwanted subtleties that can be detected by a formalization.
Our Isabelle/HOL formalization is, in the sense of object-orientation, an extension of our formalization of the core DOM and enjoys the same basic properties: it is extensible, i.e., it can be extended without the need to re-prove already proven properties, and executable, i.e., we can generate executable code from our specification. We exploit the executability to show that our formalization complies with the official standard of the W3C, respectively, the WHATWG. [SC_DOM_Components] title = A Formalization of Safely Composable Web Components author = Achim D. Brucker , Michael Herzberg topic = Computer science/Data structures date = 2020-09-28 notify = adbrucker@0x5f.org, mail@michael-herzberg.de abstract = While the (safely composable) DOM with shadow trees provides the technical basis for defining web components, it neither defines the concept of web components nor specifies the safety properties that web components should guarantee. Consequently, the standard also does not discuss how, or even if, the methods for modifying the DOM respect component boundaries. In this AFP entry, we present a formally verified model of safely composable web components and define safety properties which ensure that different web components can only interact with each other using well-defined interfaces. Moreover, our verification of the application programming interface (API) of the DOM revealed numerous invariants that implementations of the DOM API need to preserve to ensure the integrity of components. In comparison to the strict standard-compliance formalization of Web Components in the AFP entry "DOM_Components", the notion of components in this entry (based on "SC_DOM" and "Shadow_SC_DOM") provides much stronger safety guarantees.
[Store_Buffer_Reduction] title = A Reduction Theorem for Store Buffers author = Ernie Cohen , Norbert Schirmer topic = Computer science/Concurrency date = 2019-01-07 notify = norbert.schirmer@web.de abstract = When verifying a concurrent program, it is usual to assume that memory is sequentially consistent. However, most modern multiprocessors depend on store buffering for efficiency, and provide native sequential consistency only at a substantial performance penalty. To regain sequential consistency, a programmer has to follow an appropriate programming discipline. However, naïve disciplines, such as protecting all shared accesses with locks, are not flexible enough for building high-performance multiprocessor software. We present a new discipline for concurrent programming under TSO (total store order, with store buffer forwarding). It does not depend on concurrency primitives, such as locks. Instead, threads use ghost operations to acquire and release ownership of memory addresses. A thread can write to an address only if no other thread owns it, and can read from an address only if it owns it or it is shared and the thread has flushed its store buffer since it last wrote to an address it did not own. This discipline covers both coarse-grained concurrency (where data is protected by locks) as well as fine-grained concurrency (where atomic operations race to memory). We formalize this discipline in Isabelle/HOL, and prove that if every execution of a program in a system without store buffers follows the discipline, then every execution of the program with store buffers is sequentially consistent. Thus, we can show sequential consistency under TSO by ordinary assertional reasoning about the program, without having to consider store buffers at all. 
[IMP2] title = IMP2 – Simple Program Verification in Isabelle/HOL author = Peter Lammich , Simon Wimmer topic = Computer science/Programming languages/Logics, Computer science/Algorithms date = 2019-01-15 notify = lammich@in.tum.de abstract = IMP2 is a simple imperative language together with Isabelle tooling to create a program verification environment in Isabelle/HOL. The tools include a C-like syntax, a verification condition generator, and Isabelle commands for the specification of programs. The framework is modular, i.e., it allows easy reuse of already proved programs within larger programs. This entry comes with a quickstart guide and a large collection of examples, ranging from basic algorithms with simple proofs to more advanced algorithms and proof techniques like data refinement. Some highlights from the examples are:
  • Bisection Square Root,
  • Extended Euclid,
  • Exponentiation by Squaring,
  • Binary Search,
  • Insertion Sort,
  • Quicksort,
  • Depth First Search.
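One of the highlighted examples, exponentiation by squaring, can be sketched in ordinary Python. This is an informal sketch of the underlying algorithm only, not the IMP2 program or its correctness proof:

```python
def pow_by_squaring(b: int, n: int) -> int:
    """Compute b**n using O(log n) multiplications.

    Loop invariant: result * base**exp == b**n.
    """
    result, base, exp = 1, b, n
    while exp > 0:
        if exp % 2 == 1:      # odd exponent: peel off one factor
            result *= base
        base *= base          # square the base
        exp //= 2             # halve the exponent
    return result
```

The loop invariant stated in the docstring is exactly the kind of annotation a verification condition generator such as IMP2's would require from the user.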
The abstract syntax and semantics are very simple and well-documented. They are suitable to be used in a course, as an extension of the IMP language which comes with the Isabelle distribution. While this entry is limited to a simple imperative language, the ideas could be extended to more sophisticated languages. [Farkas] title = Farkas' Lemma and Motzkin's Transposition Theorem author = Ralph Bottesch , Max W. Haslbeck , René Thiemann topic = Mathematics/Algebra date = 2019-01-17 notify = rene.thiemann@uibk.ac.at abstract = We formalize a proof of Motzkin's transposition theorem and Farkas' lemma in Isabelle/HOL. Our proof is based on the formalization of the simplex algorithm which, given a set of linear constraints, either returns a satisfying assignment to the problem or detects unsatisfiability. By reusing facts about the simplex algorithm we show that a set of linear constraints is unsatisfiable if and only if there is a linear combination of the constraints which evaluates to a trivially unsatisfiable inequality. [Auto2_Imperative_HOL] title = Verifying Imperative Programs using Auto2 author = Bohua Zhan topic = Computer science/Algorithms, Computer science/Data structures date = 2018-12-21 notify = bzhan@ios.ac.cn abstract = This entry contains the application of auto2 to verifying functional and imperative programs. Algorithms and data structures that are verified include linked lists, binary search trees, red-black trees, interval trees, priority queues, quicksort, union-find, Dijkstra's algorithm, and a sweep-line algorithm for detecting rectangle intersection. The imperative verification is based on Imperative HOL and its separation logic framework. A major goal of this work is to set up automation in order to reduce the length of the proofs that the user needs to provide, both for verifying functional programs and for working with separation logic.
[UTP] title = Isabelle/UTP: Mechanised Theory Engineering for Unifying Theories of Programming author = Simon Foster , Frank Zeyda<>, Yakoub Nemouchi , Pedro Ribeiro<>, Burkhart Wolff topic = Computer science/Programming languages/Logics date = 2019-02-01 notify = simon.foster@york.ac.uk abstract = Isabelle/UTP is a mechanised theory engineering toolkit based on Hoare and He's Unifying Theories of Programming (UTP). UTP enables the creation of denotational, algebraic, and operational semantics for different programming languages using an alphabetised relational calculus. We provide a semantic embedding of the alphabetised relational calculus in Isabelle/HOL, including new type definitions, relational constructors, automated proof tactics, and accompanying algebraic laws. Isabelle/UTP can be used both to capture laws of programming for different languages and to put these fundamental theorems to work in the creation of associated verification tools, using calculi like Hoare logics. This document describes the relational core of the UTP in Isabelle/HOL. [HOL-CSP] title = HOL-CSP Version 2.0 author = Safouan Taha , Lina Ye , Burkhart Wolff topic = Computer science/Concurrency/Process calculi, Computer science/Semantics date = 2019-04-26 notify = wolff@lri.fr abstract = This is a complete formalization of the work of Hoare and Roscoe on the denotational semantics of the Failure/Divergence Model of CSP. It follows essentially the presentation of CSP in Roscoe's book “Theory and Practice of Concurrency” [8] and the semantic details in a joint paper by Roscoe and Brooks, “An improved failures model for communicating processes”. The present work is based on a prior formalization attempt, called HOL-CSP 1.0, done in 1997 by H. Tej and B. Wolff with the Isabelle proof technology available at that time. This work revealed minor, but omnipresent foundational errors in key concepts like the process invariant.
The present version, HOL-CSP 2.0, profits from substantially improved libraries (notably HOLCF), improved automated proof techniques, and structured proof techniques in Isar; it is substantially shorter but more complete. [Probabilistic_Prime_Tests] title = Probabilistic Primality Testing author = Daniel Stüwe<>, Manuel Eberl topic = Mathematics/Number theory date = 2019-02-11 notify = manuel@pruvisto.org abstract =

The most efficient known primality tests are probabilistic in the sense that they use randomness and may, with some probability, mistakenly classify a composite number as prime – but never a prime number as composite. Examples of this are the Miller–Rabin test, the Solovay–Strassen test, and (in most cases) Fermat's test.

This entry defines these three tests and proves their correctness. It also develops some of the number-theoretic foundations, such as Carmichael numbers and the Jacobi symbol with an efficient executable algorithm to compute it.
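The Fermat test mentioned above can be sketched in a few lines of Python. This is an informal sketch, not the Isabelle formalisation, and it illustrates the one-sided error described in the abstract: primes always pass, but some composites (notably Carmichael numbers, for coprime bases) pass too. The function name is hypothetical:

```python
import random

def fermat_test(n: int, rounds: int = 20) -> bool:
    """Probabilistic primality test based on Fermat's little theorem.

    If n is prime, then a^(n-1) = 1 (mod n) for every a coprime to n,
    so any base a with a^(n-1) != 1 (mod n) proves n composite. A
    composite n may still pass all rounds (Carmichael numbers such as
    561 fool every coprime base), so True only means "probably prime".
    """
    if n < 4:
        return n in (2, 3)
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        if pow(a, n - 1, n) != 1:
            return False   # witness found: definitely composite
    return True            # no witness found: probably prime
```

The Miller–Rabin and Solovay–Strassen tests refine this idea so that, unlike Fermat's test, no composite number can fool all bases.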

[Kruskal] title = Kruskal's Algorithm for Minimum Spanning Forest author = Maximilian P.L. Haslbeck , Peter Lammich , Julian Biendarra<> topic = Computer science/Algorithms/Graph date = 2019-02-14 notify = haslbema@in.tum.de, lammich@in.tum.de abstract = This Isabelle/HOL formalization defines a greedy algorithm for finding a minimum weight basis on a weighted matroid and proves its correctness. This algorithm is an abstract version of Kruskal's algorithm. We interpret the abstract algorithm for the cycle matroid (i.e. forests in a graph) and refine it to imperative executable code using an efficient union-find data structure. Our formalization can be instantiated for different graph representations. We provide instantiations for undirected graphs and symmetric directed graphs. [List_Inversions] title = The Inversions of a List author = Manuel Eberl topic = Computer science/Algorithms date = 2019-02-01 notify = manuel@pruvisto.org abstract =

This entry defines the set of inversions of a list, i.e. the pairs of indices that violate sortedness. It also proves the correctness of the well-known O(n log n) divide-and-conquer algorithm to compute the number of inversions.
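The divide-and-conquer algorithm referred to above can be sketched in Python as a merge-sort that counts cross inversions during the merge. This is an informal sketch, not the formalised version:

```python
def count_inversions(xs):
    """Return (sorted copy of xs, number of inversions), in O(n log n).

    An inversion is a pair of indices (i, j) with i < j and xs[i] > xs[j].
    """
    n = len(xs)
    if n <= 1:
        return list(xs), 0
    mid = n // 2
    left, a = count_inversions(xs[:mid])
    right, b = count_inversions(xs[mid:])
    merged, cross = [], 0
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            # Every remaining element of left exceeds right[j],
            # so this step accounts for len(left) - i inversions.
            merged.append(right[j]); j += 1
            cross += len(left) - i
    merged += left[i:] + right[j:]
    return merged, a + b + cross
```

The key step is the comment in the else branch: counting all cross inversions for right[j] at once is what brings the cost down from the naive O(n²) to O(n log n).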

[Prime_Distribution_Elementary] title = Elementary Facts About the Distribution of Primes author = Manuel Eberl topic = Mathematics/Number theory date = 2019-02-21 notify = manuel@pruvisto.org abstract =

This entry is a formalisation of Chapter 4 (and parts of Chapter 3) of Apostol's Introduction to Analytic Number Theory. The main topics that are addressed are properties of the distribution of prime numbers that can be shown in an elementary way (i.e. without the Prime Number Theorem), the various equivalent forms of the PNT (which imply each other in elementary ways), and consequences that follow from the PNT in elementary ways. The latter include, most notably, asymptotic bounds for the number of distinct prime factors of n, the divisor function d(n), Euler's totient function φ(n), and lcm(1,…,n).

[Safe_OCL] title = Safe OCL author = Denis Nikiforov <> topic = Computer science/Programming languages/Language definitions license = LGPL date = 2019-03-09 notify = denis.nikif@gmail.com abstract =

The theory is a formalization of the OCL type system, its abstract syntax and expression typing rules. The theory does not define a concrete syntax or a semantics. In contrast to Featherweight OCL, it is based on a deep embedding approach. The type system is defined from scratch; it is not based on the Isabelle/HOL type system.

Safe OCL distinguishes nullable and non-nullable types. The theory also gives a formal definition of safe navigation operations. The Safe OCL typing rules are much stricter than the rules given in the OCL specification, which allows one to catch more errors during the type-checking phase.

The type theory presented is four-layered: classes, basic types, generic types, errorable types. We introduce the following new types: non-nullable types (T[1]), nullable types (T[?]), OclSuper. OclSuper is a supertype of all other types (basic types, collections, tuples). This type allows us to define a total supremum function, so types form an upper semilattice. It allows us to define rich expression typing rules in an elegant manner.

The Preliminaries chapter of the theory defines a number of helper lemmas for transitive closures and tuples. It also defines a generic object model independent of OCL, which allows one to use the theory as a reference for the formalization of analogous languages.

[QHLProver] title = Quantum Hoare Logic author = Junyi Liu<>, Bohua Zhan , Shuling Wang<>, Shenggang Ying<>, Tao Liu<>, Yangjia Li<>, Mingsheng Ying<>, Naijun Zhan<> topic = Computer science/Programming languages/Logics, Computer science/Semantics date = 2019-03-24 notify = bzhan@ios.ac.cn abstract = We formalize quantum Hoare logic as given in [1]. In particular, we specify the syntax and denotational semantics of a simple model of quantum programs. Then, we write down the rules of quantum Hoare logic for partial correctness, and show the soundness and completeness of the resulting proof system. As an application, we verify the correctness of Grover’s algorithm. [Transcendence_Series_Hancl_Rucki] title = The Transcendence of Certain Infinite Series author = Angeliki Koutsoukou-Argyraki , Wenda Li topic = Mathematics/Analysis, Mathematics/Number theory date = 2019-03-27 notify = wl302@cam.ac.uk, ak2110@cam.ac.uk abstract = We formalize the proofs of two transcendence criteria by J. Hančl and P. Rucki that assert the transcendence of the sums of certain infinite series built up by sequences that fulfil certain properties. Both proofs make use of Roth's celebrated theorem on diophantine approximations to algebraic numbers from 1955 which we implement as an assumption without having formalised its proof. [Binding_Syntax_Theory] title = A General Theory of Syntax with Bindings author = Lorenzo Gheri , Andrei Popescu topic = Computer science/Programming languages/Lambda calculi, Computer science/Functional programming, Logic/General logic/Mechanization of proofs date = 2019-04-06 notify = a.popescu@mdx.ac.uk, lor.gheri@gmail.com abstract = We formalize a theory of syntax with bindings that has been developed and refined over the last decade to support several large formalization efforts. Terms are defined for an arbitrary number of constructors of varying numbers of inputs, quotiented to alpha-equivalence and sorted according to a binding signature. 
The theory includes many properties of the standard operators on terms: substitution, swapping and freshness. It also includes bindings-aware induction and recursion principles and support for semantic interpretation. This work has been presented in the ITP 2017 paper “A Formalized General Theory of Syntax with Bindings”. [LTL_Master_Theorem] title = A Compositional and Unified Translation of LTL into ω-Automata author = Benedikt Seidl , Salomon Sickert topic = Computer science/Automata and formal languages date = 2019-04-16 notify = benedikt.seidl@tum.de, s.sickert@tum.de abstract = We present a formalisation of the unified translation approach of linear temporal logic (LTL) into ω-automata from [1]. This approach decomposes LTL formulas into ``simple'' languages and allows a clear separation of concerns: first, we formalise the purely logical result yielding this decomposition; second, we instantiate this generic theory to obtain a construction for deterministic (state-based) Rabin automata (DRA). We extract from this particular instantiation an executable tool translating LTL to DRAs. To the best of our knowledge this is the first verified translation from LTL to DRAs that is proven to be double exponential in the worst case which asymptotically matches the known lower bound.

[1] Javier Esparza, Jan Kretínský, Salomon Sickert. One Theorem to Rule Them All: A Unified Translation of LTL into ω-Automata. LICS 2018 [LambdaAuth] title = Formalization of Generic Authenticated Data Structures author = Matthias Brun<>, Dmitriy Traytel topic = Computer science/Security, Computer science/Programming languages/Lambda calculi date = 2019-05-14 notify = traytel@inf.ethz.ch abstract = Authenticated data structures are a technique for outsourcing data storage and maintenance to an untrusted server. The server is required to produce an efficiently checkable and cryptographically secure proof that it carried out precisely the requested computation. Miller et al. introduced λ• (pronounced lambda auth)—a functional programming language with a built-in primitive authentication construct, which supports a wide range of user-specified authenticated data structures while guaranteeing certain correctness and security properties for all well-typed programs. We formalize λ• and prove its correctness and security properties. With Isabelle's help, we uncover and repair several mistakes in the informal proofs and lemma statements. Our findings are summarized in an ITP'19 paper. [IMP2_Binary_Heap] title = Binary Heaps for IMP2 author = Simon Griebel<> topic = Computer science/Data structures, Computer science/Algorithms date = 2019-06-13 notify = s.griebel@tum.de abstract = In this submission array-based binary minimum heaps are formalized. The correctness of the following heap operations is proved: insert, get-min, delete-min and make-heap. These are then used to verify an in-place heapsort. The formalization is based on IMP2, an imperative program verification framework implemented in Isabelle/HOL. The verified heap functions are iterative versions of the partly recursive functions found in "Algorithms and Data Structures – The Basic Toolbox" by K. Mehlhorn and P. Sanders and "Introduction to Algorithms" by T. H. Cormen, C. E. Leiserson, R. L. Rivest and C. Stein. 
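The heap operations verified in the IMP2_Binary_Heap entry above can be sketched informally in Python as plain functions on a list. This is a hypothetical sketch, not the verified IMP2 version; the function names are invented for illustration:

```python
def heap_insert(h, x):
    """Insert x into the array-based min-heap h by sifting it up."""
    h.append(x)
    i = len(h) - 1
    while i > 0 and h[(i - 1) // 2] > h[i]:
        # Swap with the parent while the heap property is violated.
        h[i], h[(i - 1) // 2] = h[(i - 1) // 2], h[i]
        i = (i - 1) // 2

def heap_delete_min(h):
    """Remove and return the minimum (the root), then sift the new root down."""
    h[0], h[-1] = h[-1], h[0]
    m = h.pop()
    i, n = 0, len(h)
    while True:
        c = 2 * i + 1                      # left child
        if c >= n:
            break
        if c + 1 < n and h[c + 1] < h[c]:  # pick the smaller child
            c += 1
        if h[i] <= h[c]:
            break
        h[i], h[c] = h[c], h[i]
        i = c
    return m
```

Repeatedly deleting the minimum yields the elements in sorted order, which is exactly the idea behind the in-place heapsort verified in the entry.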
[Groebner_Macaulay] title = Gröbner Bases, Macaulay Matrices and Dubé's Degree Bounds author = Alexander Maletzky topic = Mathematics/Algebra date = 2019-06-15 notify = alexander.maletzky@risc.jku.at abstract = This entry formalizes the connection between Gröbner bases and Macaulay matrices (sometimes also referred to as `generalized Sylvester matrices'). In particular, it contains a method for computing Gröbner bases, which proceeds by first constructing some Macaulay matrix of the initial set of polynomials, then row-reducing this matrix, and finally converting the result back into a set of polynomials. The output is shown to be a Gröbner basis if the Macaulay matrix constructed in the first step is sufficiently large. In order to obtain concrete upper bounds on the size of the matrix (and hence turn the method into an effectively executable algorithm), Dubé's degree bounds on Gröbner bases are utilized; consequently, they are also part of the formalization. [Linear_Inequalities] title = Linear Inequalities author = Ralph Bottesch , Alban Reynaud <>, René Thiemann topic = Mathematics/Algebra date = 2019-06-21 notify = rene.thiemann@uibk.ac.at abstract = We formalize results about linear inequalities, mainly from Schrijver's book. The main results are the proof of the fundamental theorem on linear inequalities, Farkas' lemma, Carathéodory's theorem, the Farkas-Minkowski-Weyl theorem, the decomposition theorem of polyhedra, and Meyer's result that the integer hull of a polyhedron is a polyhedron itself. Several theorems include bounds on the appearing numbers, and in particular we provide an a-priori bound on mixed-integer solutions of linear inequalities.
[Linear_Programming] title = Linear Programming author = Julian Parsert , Cezary Kaliszyk topic = Mathematics/Algebra date = 2019-08-06 notify = julian.parsert@gmail.com, cezary.kaliszyk@uibk.ac.at abstract = We use the previous formalization of the general simplex algorithm to formulate an algorithm for solving linear programs. We encode the linear programs using only linear constraints. Solving these constraints also solves the original linear program. This algorithm is proven to be sound by applying the weak duality theorem, which is also part of this formalization. [Differential_Game_Logic] title = Differential Game Logic author = André Platzer topic = Computer science/Programming languages/Logics date = 2019-06-03 notify = aplatzer@cs.cmu.edu abstract = This formalization provides differential game logic (dGL), a logic for proving properties of hybrid games. In addition to the syntax and semantics, it formalizes a uniform substitution calculus for dGL. Church's uniform substitutions substitute a term or formula for a function or predicate symbol everywhere. The uniform substitutions for dGL also substitute hybrid games for a game symbol everywhere. We prove soundness of one-pass uniform substitutions and the axioms of differential game logic with respect to their denotational semantics. One-pass uniform substitutions are faster by postponing soundness-critical admissibility checks with a linear pass homomorphic application and regain soundness by a variable condition at the replacements. The formalization is based on prior non-mechanized soundness proofs for dGL.
[BenOr_Kozen_Reif] title = The BKR Decision Procedure for Univariate Real Arithmetic author = Katherine Cordwell , Yong Kiam Tan , André Platzer topic = Computer science/Algorithms/Mathematical date = 2021-04-24 notify = kcordwel@cs.cmu.edu, yongkiat@cs.cmu.edu, aplatzer@cs.cmu.edu abstract = We formalize the univariate case of Ben-Or, Kozen, and Reif's decision procedure for first-order real arithmetic (the BKR algorithm). We also formalize the univariate case of Renegar's variation of the BKR algorithm. The two formalizations differ mathematically in minor ways (that have significant impact on the multivariate case), but are quite similar in proof structure. Both rely on sign-determination (finding the set of consistent sign assignments for a set of polynomials). The method used for sign-determination is similar to Tarski's original quantifier elimination algorithm (it stores key information in a matrix equation), but with a reduction step to keep complexity low. [Complete_Non_Orders] title = Complete Non-Orders and Fixed Points author = Akihisa Yamada , Jérémy Dubut topic = Mathematics/Order date = 2019-06-27 notify = akihisayamada@nii.ac.jp, dubut@nii.ac.jp abstract = We develop an Isabelle/HOL library of order-theoretic concepts, such as various completeness conditions and fixed-point theorems. We keep our formalization as general as possible: we reprove several well-known results about complete orders, often without any properties of ordering, thus complete non-orders. In particular, we generalize the Knaster–Tarski theorem so that we ensure the existence of a quasi-fixed point of monotone maps over complete non-orders, and show that the set of quasi-fixed points is complete under a mild condition—attractivity—which is implied by either antisymmetry or transitivity. This result generalizes and strengthens a result by Stauti and Maaden. 
Finally, we recover Kleene’s fixed-point theorem for omega-complete non-orders, again using attractivity to prove that Kleene’s fixed points are least quasi-fixed points. [Priority_Search_Trees] title = Priority Search Trees author = Peter Lammich , Tobias Nipkow topic = Computer science/Data structures date = 2019-06-25 notify = lammich@in.tum.de abstract = We present a new, purely functional, simple and efficient data structure combining a search tree and a priority queue, which we call a priority search tree. The salient feature of priority search trees is that they offer a decrease-key operation, something that is missing from other simple, purely functional priority queue implementations. Priority search trees can be implemented on top of any search tree. This entry does the implementation for red-black trees. This entry formalizes the first part of our ITP-2019 proof pearl Purely Functional, Simple and Efficient Priority Search Trees and Applications to Prim and Dijkstra. [Prim_Dijkstra_Simple] title = Purely Functional, Simple, and Efficient Implementation of Prim and Dijkstra author = Peter Lammich , Tobias Nipkow topic = Computer science/Algorithms/Graph date = 2019-06-25 notify = lammich@in.tum.de abstract = We verify purely functional, simple and efficient implementations of Prim's and Dijkstra's algorithms. This constitutes the first verification of an executable and even efficient version of Prim's algorithm. This entry formalizes the second part of our ITP-2019 proof pearl Purely Functional, Simple and Efficient Priority Search Trees and Applications to Prim and Dijkstra. 
[MFOTL_Monitor] title = Formalization of a Monitoring Algorithm for Metric First-Order Temporal Logic author = Joshua Schneider , Dmitriy Traytel topic = Computer science/Algorithms, Logic/General logic/Temporal logic, Computer science/Automata and formal languages date = 2019-07-04 notify = joshua.schneider@inf.ethz.ch, traytel@inf.ethz.ch abstract = A monitor is a runtime verification tool that solves the following problem: Given a stream of time-stamped events and a policy formulated in a specification language, decide whether the policy is satisfied at every point in the stream. We verify the correctness of an executable monitor for specifications given as formulas in metric first-order temporal logic (MFOTL), an expressive extension of linear temporal logic with real-time constraints and first-order quantification. The verified monitor implements a simplified variant of the algorithm used in the efficient MonPoly monitoring tool. The formalization is presented in a RV 2019 paper, which also compares the output of the verified monitor to that of other monitoring tools on randomly generated inputs. This case study revealed several errors in the optimized but unverified tools. extra-history = Change history: [2020-08-13]: added the formalization of the abstract slicing framework and joint data slicer (revision b1639ed541b7)
[FOL_Seq_Calc1] title = A Sequent Calculus for First-Order Logic author = Asta Halkjær From contributors = Alexander Birch Jensen , Anders Schlichtkrull , Jørgen Villadsen topic = Logic/Proof theory date = 2019-07-18 notify = ahfrom@dtu.dk abstract = This work formalizes soundness and completeness of a one-sided sequent calculus for first-order logic. The completeness is shown via a translation from a complete semantic tableau calculus, the proof of which is based on the First-Order Logic According to Fitting theory. The calculi and proof techniques are taken from Ben-Ari's Mathematical Logic for Computer Science. [Szpilrajn] title = Order Extension and Szpilrajn's Extension Theorem author = Peter Zeller , Lukas Stevens topic = Mathematics/Order date = 2019-07-27 notify = p_zeller@cs.uni-kl.de abstract = This entry is concerned with the principle of order extension, i.e. the extension of an order relation to a total order relation. To this end, we prove a more general version of Szpilrajn's extension theorem employing terminology from the book "Consistency, Choice, and Rationality" by Bossert and Suzumura. We also formalize theorem 2.7 of their book. extra-history = Change history: [2021-03-22]: (by Lukas Stevens) generalise Szpilrajn's extension theorem and add material from the book "Consistency, Choice, and Rationality" [TESL_Language] title = A Formal Development of a Polychronous Polytimed Coordination Language author = Hai Nguyen Van , Frédéric Boulanger , Burkhart Wolff topic = Computer science/System description languages, Computer science/Semantics, Computer science/Concurrency date = 2019-07-30 notify = frederic.boulanger@centralesupelec.fr, burkhart.wolff@lri.fr abstract = The design of complex systems involves different formalisms for modeling their different parts or aspects. The global model of a system may therefore consist of a coordination of concurrent sub-models that use different paradigms. 
We develop here a theory for a language used to specify the timed coordination of such heterogeneous subsystems by addressing the following issues:

  • the behavior of the sub-systems is observed only at a series of discrete instants,
  • events may occur in different sub-systems at unrelated times, leading to polychronous systems, which do not necessarily have a common base clock,
  • coordination between subsystems involves causality, so the occurrence of an event may enforce the occurrence of other events, possibly after a certain duration has elapsed or an event has occurred a given number of times,
  • the domain of time (discrete, rational, continuous...) may be different in the subsystems, leading to polytimed systems,
  • the time frames of different sub-systems may be related (for instance, time in a GPS satellite and in a GPS receiver on Earth are related although they are not the same).
Firstly, a denotational semantics of the language is defined. Then, in order to be able to incrementally check the behavior of systems, an operational semantics is given, with proofs of progress, soundness and completeness with regard to the denotational semantics. These proofs are made according to a setup that can scale up when new operators are added to the language. In order for specifications to be composed in a clean way, the language should be invariant by stuttering (i.e., adding observation instants at which nothing happens). The proof of this invariance is also given. [Stellar_Quorums] title = Stellar Quorum Systems author = Giuliano Losa topic = Computer science/Algorithms/Distributed date = 2019-08-01 notify = giuliano@galois.com abstract = We formalize the static properties of personal Byzantine quorum systems (PBQSs) and Stellar quorum systems, as described in the paper ``Stellar Consensus by Reduction'' (to appear at DISC 2019). [IMO2019] title = Selected Problems from the International Mathematical Olympiad 2019 author = Manuel Eberl topic = Mathematics/Misc date = 2019-08-05 notify = manuel@pruvisto.org abstract =

This entry contains formalisations of the answers to three of the six problems of the International Mathematical Olympiad 2019, namely Q1, Q4, and Q5.

The reason why these problems were chosen is that they are particularly amenable to formalisation: they can be solved with minimal use of libraries. The remaining three concern geometry and graph theory, which, in the author's opinion, are more difficult to formalise or require a more complex library, respectively.

[Adaptive_State_Counting] title = Formalisation of an Adaptive State Counting Algorithm author = Robert Sachtleben topic = Computer science/Automata and formal languages, Computer science/Algorithms date = 2019-08-16 notify = rob_sac@uni-bremen.de abstract = This entry provides a formalisation of a refinement of an adaptive state counting algorithm, used to test for reduction between finite state machines. The algorithm was originally presented by Hierons in the paper Testing from a Non-Deterministic Finite State Machine Using Adaptive State Counting. Definitions for finite state machines and adaptive test cases are given and many useful theorems are derived from these. The algorithm is formalised using mutually recursive functions, for which it is proven that the generated test suite is sufficient to test for reduction against finite state machines of a certain fault domain. Additionally, the algorithm is specified in a simple WHILE-language and its correctness is shown using Hoare-logic. [Jacobson_Basic_Algebra] title = A Case Study in Basic Algebra author = Clemens Ballarin topic = Mathematics/Algebra date = 2019-08-30 notify = ballarin@in.tum.de abstract = The focus of this case study is re-use in abstract algebra. It contains locale-based formalisations of selected parts of set, group and ring theory from Jacobson's Basic Algebra leading to the respective fundamental homomorphism theorems. The study is not intended as a library base for abstract algebra. It rather explores an approach towards abstract algebra in Isabelle. [Hybrid_Systems_VCs] title = Verification Components for Hybrid Systems author = Jonathan Julian Huerta y Munive <> topic = Mathematics/Algebra, Mathematics/Analysis date = 2019-09-10 notify = jjhuertaymunive1@sheffield.ac.uk, jonjulian23@gmail.com abstract = These components formalise a semantic framework for the deductive verification of hybrid systems.
They support reasoning about continuous evolutions of hybrid programs in the style of differential dynamic logic. Vector fields or flows model these evolutions, and their verification is done with invariants for the former or orbits for the latter. Laws of modal Kleene algebra or categorical predicate transformers implement the verification condition generation. Examples show the approach at work. extra-history = Change history: [2020-12-13]: added components based on Kleene algebras with tests. These implement differential Hoare logic (dH) and a Morgan-style differential refinement calculus (dR) for verification of hybrid programs. [Generic_Join] title = Formalization of Multiway-Join Algorithms author = Thibault Dardinier<> topic = Computer science/Algorithms date = 2019-09-16 notify = tdardini@student.ethz.ch, traytel@inf.ethz.ch abstract = Worst-case optimal multiway-join algorithms are a recent seminal achievement of the database community. These algorithms compute the natural join of multiple relational databases and improve in the worst case over traditional query plan optimizations of nested binary joins. In 2014, Ngo, Ré, and Rudra gave a unified presentation of different multi-way join algorithms. We formalized and proved correct their "Generic Join" algorithm and extended it to support negative joins. [Aristotles_Assertoric_Syllogistic] title = Aristotle's Assertoric Syllogistic author = Angeliki Koutsoukou-Argyraki topic = Logic/Philosophical aspects date = 2019-10-08 notify = ak2110@cam.ac.uk abstract = We formalise with Isabelle/HOL some basic elements of Aristotle's assertoric syllogistic following the article from the Stanford Encyclopedia of Philosophy by Robin Smith. To this end, we use a set theoretic formulation (covering both individual and general predication).
In particular, we formalise the deductions in the Figures and after that we present Aristotle's metatheoretical observation that all deductions in the Figures can in fact be reduced to either Barbara or Celarent. As the formal proofs turn out to be straightforward, the interest of this entry lies in illustrating the functionality of Isabelle and the high efficiency of Sledgehammer for simple exercises in philosophy. [VerifyThis2019] title = VerifyThis 2019 -- Polished Isabelle Solutions author = Peter Lammich<>, Simon Wimmer topic = Computer science/Algorithms date = 2019-10-16 notify = lammich@in.tum.de, wimmers@in.tum.de abstract = VerifyThis 2019 (http://www.pm.inf.ethz.ch/research/verifythis.html) was a program verification competition associated with ETAPS 2019. It was the 8th event in the VerifyThis competition series. In this entry, we present polished and completed versions of our solutions that we created during the competition. [ZFC_in_HOL] title = Zermelo Fraenkel Set Theory in Higher-Order Logic author = Lawrence C. Paulson topic = Logic/Set theory date = 2019-10-24 notify = lp15@cam.ac.uk abstract =

This entry is a new formalisation of ZFC set theory in Isabelle/HOL. It is logically equivalent to Obua's HOLZF; the point is to have the closest possible integration with the rest of Isabelle/HOL, minimising the amount of new notation and exploiting type classes.

There is a type V of sets and a function elts :: V => V set mapping a set to its elements. Classes simply have type V set, and a predicate identifies the small classes: those that correspond to actual sets. Type classes connected with orders and lattices are used to minimise the amount of new notation for concepts such as the subset relation, union and intersection. Basic concepts — Cartesian products, disjoint sums, natural numbers, functions, etc. — are formalised.

More advanced set-theoretic concepts, such as transfinite induction, ordinals, cardinals and the transitive closure of a set, are also provided. The definition of addition and multiplication for general sets (not just ordinals) follows Kirby.

The theory provides two type classes with the aim of facilitating developments that combine V with other Isabelle/HOL types: embeddable, the class of types that can be injected into V (including V itself as well as V*V, etc.), and small, the class of types that correspond to some ZF set.

extra-history = Change history: [2020-01-28]: Generalisation of the "small" predicate and order types to arbitrary sets; ordinal exponentiation; introduction of the coercion ord_of_nat :: "nat => V"; numerous new lemmas. (revision 6081d5be8d08) [Interval_Arithmetic_Word32] title = Interval Arithmetic on 32-bit Words author = Brandon Bohrer topic = Computer science/Data structures date = 2019-11-27 notify = bjbohrer@gmail.com, bbohrer@cs.cmu.edu abstract = Interval_Arithmetic implements conservative interval arithmetic computations, then uses this interval arithmetic to implement a simple programming language where all terms have 32-bit signed word values, with explicit infinities for terms outside the representable bounds. Our target use case is interpreters for languages that must have a well-understood low-level behavior. We include a formalization of bounded-length strings which are used for the identifiers of our language. Bounded-length identifiers are useful in some applications, for example the Differential_Dynamic_Logic article, where a Euclidean space indexed by identifiers demands that there be only finitely many identifiers. [Generalized_Counting_Sort] title = An Efficient Generalization of Counting Sort for Large, possibly Infinite Key Ranges author = Pasquale Noce topic = Computer science/Algorithms, Computer science/Functional programming date = 2019-12-04 notify = pasquale.noce.lavoro@gmail.com abstract = Counting sort is a well-known algorithm that sorts objects of any kind mapped to integer keys, or else to keys in one-to-one correspondence with some subset of the integers (e.g. alphabet letters). However, it is suitable for direct use, viz. not just as a subroutine of another sorting algorithm (e.g. radix sort), only if the key range is not significantly larger than the number of the objects to be sorted.
This paper describes a tail-recursive generalization of counting sort making use of a bounded number of counters, suitable for direct use in the case of a large, or even infinite, key range of any kind, subject to the only constraint of being a subset of an arbitrary linear order. After performing a pen-and-paper analysis of how such an algorithm has to be designed to maximize its efficiency, this paper formalizes the resulting generalized counting sort (GCsort) algorithm and then formally proves its correctness properties, namely that (a) the counters' number is maximized never exceeding the fixed upper bound, (b) objects are conserved, (c) objects get sorted, and (d) the algorithm is stable. [Poincare_Bendixson] title = The Poincaré-Bendixson Theorem author = Fabian Immler , Yong Kiam Tan topic = Mathematics/Analysis date = 2019-12-18 notify = fimmler@cs.cmu.edu, yongkiat@cs.cmu.edu abstract = The Poincaré-Bendixson theorem is a classical result in the study of (continuous) dynamical systems. Colloquially, it restricts the possible behaviors of planar dynamical systems: such systems cannot be chaotic. In practice, it is a useful tool for proving the existence of (limiting) periodic behavior in planar systems. The theorem is an interesting and challenging benchmark for formalized mathematics because proofs in the literature rely on geometric sketches and only hint at symmetric cases. It also requires a substantial background of mathematical theories, e.g., the Jordan curve theorem, real analysis, ordinary differential equations, and limiting (long-term) behavior of dynamical systems. [Isabelle_C] title = Isabelle/C author = Frédéric Tuong , Burkhart Wolff topic = Computer science/Programming languages/Language definitions, Computer science/Semantics, Tools date = 2019-10-22 notify = tuong@users.gforge.inria.fr, wolff@lri.fr abstract = We present a framework for C code in C11 syntax deeply integrated into the Isabelle/PIDE development environment.
Our framework provides an abstract interface for verification back-ends to be plugged in independently. Thus, various techniques such as deductive program verification or white-box testing can be applied to the same source, which is part of an integrated PIDE document model. Semantic back-ends are free to choose the supported C fragment and its semantics. In particular, they can differ on the chosen memory model or the specification mechanism for framing conditions. Our framework supports semantic annotations of C sources in the form of comments. Annotations serve to locally control back-end settings, and can express the term focus to which an annotation refers. Both the logical and the syntactic context are available when semantic annotations are evaluated. As a consequence, a formula in an annotation can refer to both HOL and C variables. Our approach demonstrates the degree of maturity and expressive power the Isabelle/PIDE sub-system has achieved in recent years. Our integration technique employs Lex and Yacc style grammars to ensure efficient deterministic parsing. This is the core module of Isabelle/C; the AFP package for Clean and Clean_wrapper as well as AutoCorres and AutoCorres_wrapper (available via git) are applications of this front-end. [Zeta_3_Irrational] title = The Irrationality of ζ(3) author = Manuel Eberl topic = Mathematics/Number theory date = 2019-12-27 notify = manuel.eberl@tum.de abstract =

This article provides a formalisation of Beukers's straightforward analytic proof that ζ(3) is irrational. This was first proven by Apéry (which is why this result is also often called ‘Apéry's Theorem’) using a more algebraic approach. This formalisation follows Filaseta's presentation of Beukers's proof.

[Hybrid_Logic] title = Formalizing a Seligman-Style Tableau System for Hybrid Logic author = Asta Halkjær From topic = Logic/General logic/Modal logic date = 2019-12-20 notify = ahfrom@dtu.dk abstract = This work is a formalization of soundness and completeness proofs for a Seligman-style tableau system for hybrid logic. The completeness result is obtained via a synthetic approach using maximally consistent sets of tableau blocks. The formalization differs from previous work in a few ways. First, to avoid the need to backtrack in the construction of a tableau, the formalized system has no unnamed initial segment, and therefore no Name rule. Second, I show that the full Bridge rule is admissible in the system. Third, I start from rules restricted to only extend the branch with new formulas, including only witnessing diamonds that are not already witnessed, and show that the unrestricted rules are admissible. Similarly, I start from simpler versions of the @-rules and show that these are sufficient. The GoTo rule is restricted using a notion of potential such that each application consumes potential and potential is earned through applications of the remaining rules. I show that if a branch can be closed then it can be closed starting from a single unit. Finally, Nom is restricted by a fixed set of allowed nominals. The resulting system should be terminating. extra-history = Change history: [2020-06-03]: The fully restricted system has been shown complete by updating the synthetic completeness proof. [Bicategory] title = Bicategories author = Eugene W. Stark topic = Mathematics/Category theory date = 2020-01-06 notify = stark@cs.stonybrook.edu abstract =

Taking as a starting point the author's previous work on developing aspects of category theory in Isabelle/HOL, this article gives a compatible formalization of the notion of "bicategory" and develops a framework within which formal proofs of facts about bicategories can be given. The framework includes a number of basic results, including the Coherence Theorem, the Strictness Theorem, pseudofunctors and biequivalence, and facts about internal equivalences and adjunctions in a bicategory. As a driving application and demonstration of the utility of the framework, it is used to give a formal proof of a theorem, due to Carboni, Kasangian, and Street, that characterizes up to biequivalence the bicategories of spans in a category with pullbacks. The formalization effort necessitated the filling-in of many details that were not evident from the brief presentation in the original paper, as well as identifying a few minor corrections along the way.

Revisions made subsequent to the first version of this article added additional material on pseudofunctors, pseudonatural transformations, modifications, and equivalence of bicategories; the main thrust being to give a proof that a pseudofunctor is a biequivalence if and only if it can be extended to an equivalence of bicategories.

extra-history = Change history: [2020-02-15]: Move ConcreteCategory.thy from Bicategory to Category3 and use it systematically. Make other minor improvements throughout. (revision a51840d36867)
[2020-11-04]: Added new material on equivalence of bicategories, with associated changes. (revision 472cb2268826)
[2021-07-22]: Added new material: "concrete bicategories" and "bicategory of categories". (revision 49d3aa43c180)
[Subset_Boolean_Algebras] title = A Hierarchy of Algebras for Boolean Subsets author = Walter Guttmann , Bernhard Möller topic = Mathematics/Algebra date = 2020-01-31 notify = walter.guttmann@canterbury.ac.nz abstract = We present a collection of axiom systems for the construction of Boolean subalgebras of larger overall algebras. The subalgebras are defined as the range of a complement-like operation on a semilattice. This technique has been used, for example, with the antidomain operation, dynamic negation and Stone algebras. We present a common ground for these constructions based on a new equational axiomatisation of Boolean algebras. [Goodstein_Lambda] title = Implementing the Goodstein Function in λ-Calculus author = Bertram Felgenhauer topic = Logic/Rewriting date = 2020-02-21 notify = int-e@gmx.de abstract = In this formalization, we develop an implementation of the Goodstein function G in plain λ-calculus, linked to a concise, self-contained specification. The implementation works on a Church-encoded representation of countable ordinals. The initial conversion to hereditary base 2 is not covered, but the material is sufficient to compute the particular value G(16), and easily extends to other fixed arguments. [VeriComp] title = A Generic Framework for Verified Compilers author = Martin Desharnais topic = Computer science/Programming languages/Compiling date = 2020-02-10 notify = martin.desharnais@unibw.de abstract = This is a generic framework for formalizing compiler transformations. It leverages Isabelle/HOL’s locales to abstract over concrete languages and transformations. It states common definitions for language semantics, program behaviours, forward and backward simulations, and compilers. We provide generic operations, such as simulation and compiler composition, and prove general (partial) correctness theorems, resulting in reusable proof components. 
[Hello_World] title = Hello World author = Cornelius Diekmann , Lars Hupel topic = Computer science/Functional programming date = 2020-03-07 notify = diekmann@net.in.tum.de abstract = In this article, we present a formalization of the well-known "Hello, World!" code, including a formal framework for reasoning about IO. Our model is inspired by the handling of IO in Haskell. We start by formalizing the 🌍 and embrace the IO monad afterwards. Then we present a sample main :: IO (), followed by its proof of correctness. [WOOT_Strong_Eventual_Consistency] title = Strong Eventual Consistency of the Collaborative Editing Framework WOOT author = Emin Karayel , Edgar Gonzàlez topic = Computer science/Algorithms/Distributed date = 2020-03-25 notify = eminkarayel@google.com, edgargip@google.com, me@eminkarayel.de abstract = Commutative Replicated Data Types (CRDTs) are a promising new class of data structures for large-scale shared mutable content in applications that only require eventual consistency. The WithOut Operational Transforms (WOOT) framework is a CRDT for collaborative text editing introduced by Oster et al. (CSCW 2006) for which the eventual consistency property was verified only for a bounded model to date. We contribute a formal proof for WOOT's strong eventual consistency. [Furstenberg_Topology] title = Furstenberg's topology and his proof of the infinitude of primes author = Manuel Eberl topic = Mathematics/Number theory date = 2020-03-22 notify = manuel.eberl@tum.de abstract =

This article gives a formal version of Furstenberg's topological proof of the infinitude of primes. He defines a topology on the integers based on arithmetic progressions (or, equivalently, residue classes). Using some fairly obvious properties of this topology, the infinitude of primes is then easily obtained.

Apart from this, this topology is also fairly ‘nice’ in general: it is second countable, metrizable, and perfect. All of these (well-known) facts are formally proven, including an explicit metric for the topology given by Zulfeqarr.

[Saturation_Framework] title = A Comprehensive Framework for Saturation Theorem Proving author = Sophie Tourret topic = Logic/General logic/Mechanization of proofs date = 2020-04-09 notify = stourret@mpi-inf.mpg.de abstract = This Isabelle/HOL formalization is the companion of the technical report “A comprehensive framework for saturation theorem proving”, itself the companion of the eponymous IJCAR 2020 paper, written by Uwe Waldmann, Sophie Tourret, Simon Robillard and Jasmin Blanchette. It verifies a framework for formal refutational completeness proofs of abstract provers that implement saturation calculi, such as ordered resolution or superposition, and makes it possible to model entire prover architectures in such a way that the static refutational completeness of a calculus immediately implies the dynamic refutational completeness of a prover implementing the calculus using a variant of the given clause loop. The technical report “A comprehensive framework for saturation theorem proving” is available on the Matryoshka website. The names of the Isabelle lemmas and theorems corresponding to the results in the report are indicated in the margin of the report. [Saturation_Framework_Extensions] title = Extensions to the Comprehensive Framework for Saturation Theorem Proving author = Jasmin Blanchette , Sophie Tourret topic = Logic/General logic/Mechanization of proofs date = 2020-08-25 notify = jasmin.blanchette@gmail.com abstract = This Isabelle/HOL formalization extends the AFP entry Saturation_Framework with the following contributions:
  • an application of the framework to prove Bachmair and Ganzinger's resolution prover RP refutationally complete, which was formalized in a more ad hoc fashion by Schlichtkrull et al. in the AFP entry Ordered_Resolution_Prover;
  • generalizations of various basic concepts formalized by Schlichtkrull et al., which were needed to verify RP and could be useful to formalize other calculi, such as superposition;
  • alternative proofs of fairness (and hence saturation and ultimately refutational completeness) for the given clause procedures GC and LGC, based on invariance.
[MFODL_Monitor_Optimized] title = Formalization of an Optimized Monitoring Algorithm for Metric First-Order Dynamic Logic with Aggregations author = Thibault Dardinier<>, Lukas Heimes<>, Martin Raszyk , Joshua Schneider , Dmitriy Traytel topic = Computer science/Algorithms, Logic/General logic/Modal logic, Computer science/Automata and formal languages date = 2020-04-09 notify = martin.raszyk@inf.ethz.ch, joshua.schneider@inf.ethz.ch, traytel@inf.ethz.ch abstract = A monitor is a runtime verification tool that solves the following problem: Given a stream of time-stamped events and a policy formulated in a specification language, decide whether the policy is satisfied at every point in the stream. We verify the correctness of an executable monitor for specifications given as formulas in metric first-order dynamic logic (MFODL), which combines the features of metric first-order temporal logic (MFOTL) and metric dynamic logic. Thus, MFODL supports real-time constraints, first-order parameters, and regular expressions. Additionally, the monitor supports aggregation operations such as count and sum. This formalization, which is described in a forthcoming paper at IJCAR 2020, significantly extends previous work on a verified monitor for MFOTL. Apart from the addition of regular expressions and aggregations, we implemented multi-way joins and a specialized sliding window algorithm to further optimize the monitor. extra-history = Change history: [2021-10-19]: corrected a mistake in the calculation of median aggregations (reported by Nicolas Kaletsch, revision 02b14c9bf3da)
[Sliding_Window_Algorithm] title = Formalization of an Algorithm for Greedily Computing Associative Aggregations on Sliding Windows author = Lukas Heimes<>, Dmitriy Traytel , Joshua Schneider<> topic = Computer science/Algorithms date = 2020-04-10 notify = heimesl@student.ethz.ch, traytel@inf.ethz.ch, joshua.schneider@inf.ethz.ch abstract = Basin et al.'s sliding window algorithm (SWA) is an algorithm for combining the elements of subsequences of a sequence with an associative operator. It is greedy and minimizes the number of operator applications. We formalize the algorithm and verify its functional correctness. We extend the algorithm with additional operations and provide an alternative interface to the slide operation that does not require the entire input sequence. [Lucas_Theorem] title = Lucas's Theorem author = Chelsea Edmonds topic = Mathematics/Number theory date = 2020-04-07 notify = cle47@cam.ac.uk abstract = This work presents a formalisation of a generating function proof for Lucas's theorem. We first outline extensions to the existing Formal Power Series (FPS) library, including an equivalence relation for coefficients modulo n, an alternate binomial theorem statement, and a formalised proof of the Freshman's dream (mod p) lemma. The second part of the work presents the formal proof of Lucas's Theorem. Working backwards, the formalisation first proves a well known corollary of the theorem which is easier to formalise, and then applies induction to prove the original theorem statement. The proof of the corollary aims to provide a good example of a formalised generating function equivalence proof using the FPS library. The final theorem statement is intended to be integrated into the formalised proof of Hilbert's 10th Problem. 
[ADS_Functor] title = Authenticated Data Structures As Functors author = Andreas Lochbihler , Ognjen Marić topic = Computer science/Data structures date = 2020-04-16 notify = andreas.lochbihler@digitalasset.com, mail@andreas-lochbihler.de abstract = Authenticated data structures allow several systems to convince each other that they are referring to the same data structure, even if each of them knows only a part of the data structure. Using inclusion proofs, knowledgeable systems can selectively share their knowledge with other systems and the latter can verify the authenticity of what is being shared. In this article, we show how to modularly define authenticated data structures, their inclusion proofs, and operations thereon as datatypes in Isabelle/HOL, using a shallow embedding. Modularity allows us to construct complicated trees from reusable building blocks, which we call Merkle functors. Merkle functors include sums, products, and function spaces and are closed under composition and least fixpoints. As a practical application, we model the hierarchical transactions of Canton, a practical interoperability protocol for distributed ledgers, as authenticated data structures. This is a first step towards formalizing the Canton protocol and verifying its integrity and security guarantees. [Power_Sum_Polynomials] title = Power Sum Polynomials author = Manuel Eberl topic = Mathematics/Algebra date = 2020-04-24 notify = manuel@pruvisto.org abstract =

This article provides a formalisation of the symmetric multivariate polynomials known as power sum polynomials. These are of the form $p_n(X_1,\ldots,X_k) = X_1^n + \ldots + X_k^n$. A formal proof of the Girard–Newton Theorem is also given. This theorem relates the power sum polynomials to the elementary symmetric polynomials $s_k$ in the form of a recurrence relation \[(-1)^{k+1} k\, s_k = \sum_{i\in[0,k)} (-1)^i s_i p_{k-i}\,.\]

As an application, this is then used to solve a generalised form of a puzzle given as an exercise in Dummit and Foote's Abstract Algebra: For $k$ complex unknowns $x_1, \ldots, x_k$, define $p_j := x_1^j + \ldots + x_k^j$. Then for each vector $a \in \mathbb{C}^k$, show that there is exactly one solution to the system $p_1 = a_1, \ldots, p_k = a_k$ up to permutation of the $x_i$, and determine the value of $p_i$ for $i > k$.
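As an informal numerical sanity check (independent of the Isabelle development), the Girard–Newton recurrence can be evaluated for concrete roots; the sample values below are arbitrary:

```python
# Numerical check of the Girard-Newton recurrence
#   (-1)^(k+1) * k * s_k = sum_{i in [0,k)} (-1)^i * s_i * p_{k-i}
# for concrete (arbitrary) roots xs; not part of the formal development.
from itertools import combinations
from math import prod

xs = [2, -3, 5, 7]

def s(k):
    """Elementary symmetric polynomial s_k evaluated at xs (s_0 = 1)."""
    return sum(prod(c) for c in combinations(xs, k))

def p(n):
    """Power sum polynomial p_n evaluated at xs."""
    return sum(x ** n for x in xs)

for k in range(1, len(xs) + 1):
    lhs = (-1) ** (k + 1) * k * s(k)
    rhs = sum((-1) ** i * s(i) * p(k - i) for i in range(k))
    assert lhs == rhs, (k, lhs, rhs)
```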

[Formal_Puiseux_Series] title = Formal Puiseux Series author = Manuel Eberl topic = Mathematics/Algebra date = 2021-02-17 notify = manuel@pruvisto.org abstract =

Formal Puiseux series are generalisations of formal power series and formal Laurent series that also allow for fractional exponents. They have the following general form: \[\sum_{i=N}^\infty a_{i/d} X^{i/d}\] where $N$ is an integer and $d$ is a positive integer.

This entry defines these series including their basic algebraic properties. Furthermore, it proves the Newton–Puiseux Theorem, namely that the Puiseux series over an algebraically closed field of characteristic 0 are also algebraically closed.

[Gaussian_Integers] title = Gaussian Integers author = Manuel Eberl topic = Mathematics/Number theory date = 2020-04-24 notify = manuel@pruvisto.org abstract =

The Gaussian integers are the subring ℤ[i] of the complex numbers, i.e., the ring of all complex numbers with integer real and imaginary parts. This article provides a definition of this ring along with proofs of various basic properties (e.g. that the Gaussian integers form a Euclidean ring) and a full classification of their primes. An executable (albeit not very efficient) factorisation algorithm is also provided.

Lastly, this Gaussian integer formalisation is used in two short applications:

  1. The characterisation of all positive integers that can be written as sums of two squares
  2. Euclid's formula for primitive Pythagorean triples

While elementary proofs for both of these are already available in the AFP, the theory of Gaussian integers provides more concise proofs and a more high-level view.
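The first application can be illustrated informally (outside Isabelle) by brute force: a positive integer is a sum of two squares exactly when every prime congruent to 3 mod 4 occurs to an even power in its factorisation. The helper names in this sketch are made up for illustration:

```python
# Brute-force illustration of the classical sum-of-two-squares criterion:
# n = a^2 + b^2 has a solution iff every prime p with p % 4 == 3 divides n
# to an even power.  Hypothetical helper names; not the Isabelle proofs.
from math import isqrt

def sum_of_two_squares(n):
    """Search exhaustively for a with n - a^2 a perfect square."""
    return any(isqrt(n - a * a) ** 2 == n - a * a for a in range(isqrt(n) + 1))

def criterion(n):
    """Trial-division check: no prime p == 3 (mod 4) with odd exponent."""
    d, m = 2, n
    while d * d <= m:
        e = 0
        while m % d == 0:
            m //= d
            e += 1
        if d % 4 == 3 and e % 2 == 1:
            return False
        d += 1
    return m % 4 != 3  # leftover factor m is 1 or a prime with exponent 1

assert all(sum_of_two_squares(n) == criterion(n) for n in range(1, 500))
```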

[Forcing] title = Formalization of Forcing in Isabelle/ZF author = Emmanuel Gunther , Miguel Pagano , Pedro Sánchez Terraf topic = Logic/Set theory date = 2020-05-06 notify = gunther@famaf.unc.edu.ar, pagano@famaf.unc.edu.ar, sterraf@famaf.unc.edu.ar abstract = We formalize the theory of forcing in the set theory framework of Isabelle/ZF. Under the assumption of the existence of a countable transitive model of ZFC, we construct a proper generic extension and show that the latter also satisfies ZFC. [Delta_System_Lemma] title = Cofinality and the Delta System Lemma author = Pedro Sánchez Terraf topic = Mathematics/Combinatorics, Logic/Set theory date = 2020-12-27 notify = sterraf@famaf.unc.edu.ar abstract = We formalize the basic results on cofinality of linearly ordered sets and ordinals and Šanin’s Lemma for uncountable families of finite sets. This last result is used to prove the countable chain condition for Cohen posets. We work in the set theory framework of Isabelle/ZF, using the Axiom of Choice as needed. [Recursion-Addition] title = Recursion Theorem in ZF author = Georgy Dunaev topic = Logic/Set theory date = 2020-05-11 notify = georgedunaev@gmail.com abstract = This document contains a proof of the recursion theorem. This is a mechanization of the proof of the recursion theorem from the text Introduction to Set Theory, by Karel Hrbacek and Thomas Jech. This implementation may be used as the basis for a model of Peano arithmetic in ZF. While recursion and the natural numbers are already available in Isabelle/ZF, this clean development is much easier to follow. 
[LTL_Normal_Form] title = An Efficient Normalisation Procedure for Linear Temporal Logic: Isabelle/HOL Formalisation author = Salomon Sickert topic = Computer science/Automata and formal languages, Logic/General logic/Temporal logic date = 2020-05-08 notify = s.sickert@tum.de abstract = In the mid 80s, Lichtenstein, Pnueli, and Zuck proved a classical theorem stating that every formula of Past LTL (the extension of LTL with past operators) is equivalent to a formula of the form $\bigwedge_{i=1}^n \mathbf{G}\mathbf{F} \varphi_i \vee \mathbf{F}\mathbf{G} \psi_i$, where $\varphi_i$ and $\psi_i$ contain only past operators. Some years later, Chang, Manna, and Pnueli built on this result to derive a similar normal form for LTL. Both normalisation procedures have a non-elementary worst-case blow-up, and follow an involved path from formulas to counter-free automata to star-free regular expressions and back to formulas. We improve on both points. We present an executable formalisation of a direct and purely syntactic normalisation procedure for LTL yielding a normal form, comparable to the one by Chang, Manna, and Pnueli, that has only a single exponential blow-up. [Matrices_for_ODEs] title = Matrices for ODEs author = Jonathan Julian Huerta y Munive topic = Mathematics/Analysis, Mathematics/Algebra date = 2020-04-19 notify = jonjulian23@gmail.com abstract = Our theories formalise various matrix properties that serve to establish existence, uniqueness and characterisation of the solution to affine systems of ordinary differential equations (ODEs). In particular, we formalise the operator and maximum norm of matrices. Then we use them to prove that square matrices form a Banach space, and in this setting, we show an instance of Picard-Lindelöf’s theorem for affine systems of ODEs. Finally, we use this formalisation to verify three simple hybrid programs. 
[Irrational_Series_Erdos_Straus] title = Irrationality Criteria for Series by Erdős and Straus author = Angeliki Koutsoukou-Argyraki , Wenda Li topic = Mathematics/Number theory, Mathematics/Analysis date = 2020-05-12 notify = ak2110@cam.ac.uk, wl302@cam.ac.uk, liwenda1990@hotmail.com abstract = We formalise certain irrationality criteria for infinite series of the form: \[\sum_{n=1}^\infty \frac{b_n}{\prod_{i=1}^n a_i} \] where $\{b_n\}$ is a sequence of integers and $\{a_n\}$ a sequence of positive integers with $a_n > 1$ for all sufficiently large $n$. The results are due to P. Erdős and E. G. Straus [1]. In particular, we formalise Theorem 2.1, Corollary 2.10 and Theorem 3.1. The latter is an application of Theorem 2.1 involving the prime numbers. [Knuth_Bendix_Order] title = A Formalization of Knuth–Bendix Orders author = Christian Sternagel , René Thiemann topic = Logic/Rewriting date = 2020-05-13 notify = c.sternagel@gmail.com, rene.thiemann@uibk.ac.at abstract = We define a generalized version of Knuth–Bendix orders, including subterm coefficient functions. For these orders we formalize several properties such as strong normalization, the subterm property, closure properties under substitutions and contexts, as well as ground totality. [Stateful_Protocol_Composition_and_Typing] title = Stateful Protocol Composition and Typing author = Andreas V. Hess , Sebastian Mödersheim , Achim D. Brucker topic = Computer science/Security date = 2020-04-08 notify = avhe@dtu.dk, andreasvhess@gmail.com, samo@dtu.dk, brucker@spamfence.net, andschl@dtu.dk abstract = We provide in this AFP entry several relative soundness results for security protocols. In particular, we prove typing and compositionality results for stateful protocols (i.e., protocols with mutable state that may span several sessions), with a focus on reachability properties.
Such results are useful to simplify protocol verification by reducing it to a simpler problem: Typing results give conditions under which it is safe to verify a protocol in a typed model where only "well-typed" attacks can occur, whereas compositionality results allow us to verify a composed protocol by only verifying the component protocols in isolation. The conditions on the protocols under which the results hold are furthermore syntactic in nature, allowing for full automation. The foundation presented here is used in another entry to provide fully automated and formalized security proofs of stateful protocols. [Automated_Stateful_Protocol_Verification] title = Automated Stateful Protocol Verification author = Andreas V. Hess , Sebastian Mödersheim , Achim D. Brucker , Anders Schlichtkrull topic = Computer science/Security, Tools date = 2020-04-08 notify = avhe@dtu.dk, andreasvhess@gmail.com, samo@dtu.dk, brucker@spamfence.net, andschl@dtu.dk abstract = In protocol verification we observe a wide spectrum from fully automated methods to interactive theorem proving with proof assistants like Isabelle/HOL. In this AFP entry, we present a fully automated approach for verifying stateful security protocols, i.e., protocols with mutable state that may span several sessions. The approach supports reachability goals like secrecy and authentication. We also include a simple user-friendly transaction-based protocol specification language that is embedded into Isabelle.
[Smith_Normal_Form] title = A verified algorithm for computing the Smith normal form of a matrix author = Jose Divasón topic = Mathematics/Algebra, Computer science/Algorithms/Mathematical date = 2020-05-23 notify = jose.divason@unirioja.es abstract = This work presents a formal proof in Isabelle/HOL of an algorithm to transform a matrix into its Smith normal form, a canonical matrix form, in a general setting: the algorithm is parameterized by operations to prove its existence over elementary divisor rings, while execution is guaranteed over Euclidean domains. We also provide a formal proof of some results about the generality of this algorithm as well as the uniqueness of the Smith normal form. Since Isabelle/HOL does not feature dependent types, the development is carried out by switching conveniently between two different existing libraries: the Hermite normal form (based on HOL Analysis) and the Jordan normal form AFP entries. This permits the reuse of results from both developments, and is achieved by means of the lifting and transfer package together with the use of local type definitions. [Nash_Williams] title = The Nash-Williams Partition Theorem author = Lawrence C. Paulson topic = Mathematics/Combinatorics date = 2020-05-16 notify = lp15@cam.ac.uk abstract = In 1965, Nash-Williams discovered a generalisation of the infinite form of Ramsey's theorem. Where the latter concerns infinite sets of n-element sets for some fixed n, the Nash-Williams theorem concerns infinite sets of finite sets (or lists) subject to a “no initial segment” condition. The present formalisation follows a monograph on Ramsey Spaces by Todorčević.
[Safe_Distance] title = A Formally Verified Checker of the Safe Distance Traffic Rules for Autonomous Vehicles author = Albert Rizaldi , Fabian Immler topic = Computer science/Algorithms/Mathematical, Mathematics/Physics date = 2020-06-01 notify = albert.rizaldi@ntu.edu.sg, fimmler@andrew.cmu.edu, martin.rau@tum.de abstract = The Vienna Convention on Road Traffic defines the safe distance traffic rules informally. This could make an autonomous vehicle liable for safe-distance-related accidents because there is no clear definition of how large a safe distance is. We provide a formally proven prescriptive definition of a safe distance, and checkers which can decide whether an autonomous vehicle is obeying the safe distance rule. Not only does our work apply to the domain of law, but it also serves as a specification for autonomous vehicle manufacturers and for online verification of path planners. [Relational_Paths] title = Relational Characterisations of Paths author = Walter Guttmann , Peter Höfner topic = Mathematics/Graph theory date = 2020-07-13 notify = walter.guttmann@canterbury.ac.nz, peter@hoefner-online.de abstract = Binary relations are one of the standard ways to encode, characterise and reason about graphs. Relation algebras provide equational axioms for a large fragment of the calculus of binary relations. Although relations are standard tools in many areas of mathematics and computing, researchers usually fall back on point-wise reasoning when it comes to arguments about paths in a graph. We present a purely algebraic way to specify different kinds of paths in Kleene relation algebras, which are relation algebras equipped with an operation for reflexive transitive closure. We study the relationship between paths with a designated root vertex and paths without such a vertex. Since we stay in first-order logic this development helps with mechanising proofs.
To demonstrate the applicability of the algebraic framework we verify the correctness of three basic graph algorithms. [Amicable_Numbers] title = Amicable Numbers author = Angeliki Koutsoukou-Argyraki topic = Mathematics/Number theory date = 2020-08-04 notify = ak2110@cam.ac.uk abstract = This is a formalisation of amicable numbers, involving relevant material including Euler's sigma function, together with definitions, results and examples, as well as rules such as Thābit ibn Qurra's Rule, Euler's Rule, te Riele's Rule and Borho's Rule with breeders. [Ordinal_Partitions] title = Ordinal Partitions author = Lawrence C. Paulson topic = Mathematics/Combinatorics, Logic/Set theory date = 2020-08-03 notify = lp15@cam.ac.uk abstract = The theory of partition relations concerns generalisations of Ramsey's theorem. For any ordinal $\alpha$, write $\alpha \to (\alpha, m)^2$ if for each function $f$ from unordered pairs of elements of $\alpha$ into $\{0,1\}$, either there is a subset $X\subseteq \alpha$ order-isomorphic to $\alpha$ such that $f\{x,y\}=0$ for all $\{x,y\}\subseteq X$, or there is an $m$-element set $Y\subseteq \alpha$ such that $f\{x,y\}=1$ for all $\{x,y\}\subseteq Y$. (In both cases, with $\{x,y\}$ we require $x\not=y$.) In particular, the infinite Ramsey theorem can be written in this notation as $\omega \to (\omega, \omega)^2$, or if we restrict $m$ to the positive integers as above, then $\omega \to (\omega, m)^2$ for all $m$. This entry formalises Larson's proof of $\omega^\omega \to (\omega^\omega, m)^2$ along with a similar proof of a result due to Specker: $\omega^2 \to (\omega^2, m)^2$. Also proved is a necessary result by Erdős and Milner: $\omega^{1+\alpha\cdot n} \to (\omega^{1+\alpha}, 2^n)^2$.
[Relational_Disjoint_Set_Forests] title = Relational Disjoint-Set Forests author = Walter Guttmann topic = Computer science/Data structures date = 2020-08-26 notify = walter.guttmann@canterbury.ac.nz abstract = We give a simple relation-algebraic semantics of read and write operations on associative arrays. The array operations seamlessly integrate with assignments in the Hoare-logic library. Using relation algebras and Kleene algebras we verify the correctness of an array-based implementation of disjoint-set forests with a naive union operation and a find operation with path compression. extra-history = Change history: [2021-06-19]: added path halving, path splitting, relational Peano structures, union by rank (revision 98c7aa03457d) [PAC_Checker] title = Practical Algebraic Calculus Checker author = Mathias Fleury , Daniela Kaufmann topic = Computer science/Algorithms date = 2020-08-31 notify = mathias.fleury@jku.at abstract = Generating and checking proof certificates is important to increase the trust in automated reasoning tools. In recent years formal verification using computer algebra became more important and is heavily used in automated circuit verification. An existing proof format which covers algebraic reasoning and allows efficient proof checking is the practical algebraic calculus (PAC). In this development, we present the verified checker Pastèque that is obtained by synthesis via the Refinement Framework. This is the formalization going with our FMCAD'20 tool presentation. [BirdKMP] title = Putting the `K' into Bird's derivation of Knuth-Morris-Pratt string matching author = Peter Gammie topic = Computer science/Functional programming date = 2020-08-25 notify = peteg42@gmail.com abstract = Richard Bird and collaborators have proposed a derivation of an intricate cyclic program that implements the Morris-Pratt string matching algorithm. Here we provide a proof of total correctness for Bird's derivation and complete it by adding Knuth's optimisation. 
[Extended_Finite_State_Machines] title = A Formal Model of Extended Finite State Machines author = Michael Foster , Achim D. Brucker , Ramsay G. Taylor , John Derrick topic = Computer science/Automata and formal languages date = 2020-09-07 notify = jmafoster1@sheffield.ac.uk, adbrucker@0x5f.org abstract = In this AFP entry, we provide a formalisation of extended finite state machines (EFSMs) where models are represented as finite sets of transitions between states. EFSMs execute traces to produce observable outputs. We also define various simulation and equality metrics for EFSMs in terms of traces and prove their strengths in relation to each other. Another key contribution is a framework of function definitions such that LTL properties can be phrased over EFSMs. Finally, we provide a simple example case study in the form of a drinks machine. [Extended_Finite_State_Machine_Inference] title = Inference of Extended Finite State Machines author = Michael Foster , Achim D. Brucker , Ramsay G. Taylor , John Derrick topic = Computer science/Automata and formal languages date = 2020-09-07 notify = jmafoster1@sheffield.ac.uk, adbrucker@0x5f.org abstract = In this AFP entry, we provide a formal implementation of a state-merging technique to infer extended finite state machines (EFSMs), complete with output and update functions, from black-box traces. In particular, we define the subsumption in context relation as a means of determining whether one transition is able to account for the behaviour of another. Building on this, we define the direct subsumption relation, which lifts the subsumption in context relation to EFSM level such that we can use it to determine whether it is safe to merge a given pair of transitions. Key proofs include the conditions necessary for subsumption to occur and that subsumption and direct subsumption are preorder relations. 
We also provide a number of different heuristics which can be used to abstract away concrete values into registers so that more states and transitions can be merged and provide proofs of the various conditions which must hold for these abstractions to subsume their ungeneralised counterparts. A Code Generator setup to create executable Scala code is also defined. [Physical_Quantities] title = A Sound Type System for Physical Quantities, Units, and Measurements author = Simon Foster , Burkhart Wolff topic = Mathematics/Physics, Computer science/Programming languages/Type systems date = 2020-10-20 notify = simon.foster@york.ac.uk, wolff@lri.fr abstract = The present Isabelle theory builds a formal model for both the International System of Quantities (ISQ) and the International System of Units (SI), which are both fundamental for physics and engineering. Both the ISQ and the SI are deeply integrated into Isabelle's type system. Quantities are parameterised by dimension types, which correspond to base vectors, and thus only quantities of the same dimension can be equated. Since the underlying "algebra of quantities" induces congruences on quantity and SI types, specific tactic support is developed to capture these. Our construction is validated by a test-set of known equivalences between both quantities and SI units. Moreover, the presented theory can be used for type-safe conversions between the SI system and others, like the British Imperial System (BIS). [Shadow_DOM] title = A Formal Model of the Document Object Model with Shadow Roots author = Achim D. Brucker , Michael Herzberg topic = Computer science/Data structures date = 2020-09-28 notify = adbrucker@0x5f.org, mail@michael-herzberg.de abstract = In this AFP entry, we extend our formalization of the core DOM with Shadow Roots. Shadow roots are a recent proposal of the web community to support a component-based development approach for client-side web applications. 
Shadow roots are a significant extension of the DOM standard and, as web standards are condemned to be backward compatible, such extensions often result in complex specifications that may contain unwanted subtleties that can be detected by a formalization. Our Isabelle/HOL formalization is, in the sense of object-orientation, an extension of our formalization of the core DOM and enjoys the same basic properties: it is extensible, i.e., it can be extended without the need of re-proving already proven properties, and executable, i.e., we can generate executable code from our specification. We exploit the executability to show that our formalization complies with the official standard of the W3C, respectively, the WHATWG. [DOM_Components] title = A Formalization of Web Components author = Achim D. Brucker , Michael Herzberg topic = Computer science/Data structures date = 2020-09-28 notify = adbrucker@0x5f.org, mail@michael-herzberg.de abstract = While the DOM with shadow trees provides the technical basis for defining web components, the DOM standard neither defines the concept of web components nor specifies the safety properties that web components should guarantee. Consequently, the standard also does not discuss how or even if the methods for modifying the DOM respect component boundaries. In this AFP entry, we present a formally verified model of web components and define safety properties which ensure that different web components can only interact with each other using well-defined interfaces. Moreover, our verification of the application programming interface (API) of the DOM revealed numerous invariants that implementations of the DOM API need to preserve to ensure the integrity of components.
[Interpreter_Optimizations] title = Inline Caching and Unboxing Optimization for Interpreters author = Martin Desharnais topic = Computer science/Programming languages/Misc date = 2020-12-07 notify = martin.desharnais@unibw.de abstract = This Isabelle/HOL formalization builds on the VeriComp entry of the Archive of Formal Proofs to provide the following contributions:
  • an operational semantics for a realistic virtual machine (Std) for dynamically typed programming languages;
  • the formalization of an inline caching optimization (Inca), a proof of bisimulation with (Std), and a compilation function;
  • the formalization of an unboxing optimization (Ubx), a proof of bisimulation with (Inca), and a simple compilation function.
This formalization was described in the CPP 2021 paper Towards Efficient and Verified Virtual Machines for Dynamic Languages extra-history = Change history: [2021-06-14]: refactored function definitions to contain explicit basic blocks
[2021-06-25]: proved conditional completeness of compilation
[Isabelle_Marries_Dirac] title = Isabelle Marries Dirac: a Library for Quantum Computation and Quantum Information author = Anthony Bordg , Hanna Lachnitt, Yijun He topic = Computer science/Algorithms/Quantum computing, Mathematics/Physics/Quantum information date = 2020-11-22 notify = apdb3@cam.ac.uk, lachnitt@stanford.edu abstract = This work is an effort to formalise some quantum algorithms and results in quantum information theory. Formal methods being critical for the safety and security of algorithms and protocols, we foresee their widespread use for quantum computing in the future. We have developed a large library for quantum computing in Isabelle based on a matrix representation for quantum circuits, successfully formalising the no-cloning theorem, quantum teleportation, Deutsch's algorithm, the Deutsch-Jozsa algorithm and the quantum Prisoner's Dilemma. [Projective_Measurements] title = Quantum projective measurements and the CHSH inequality author = Mnacho Echenim topic = Computer science/Algorithms/Quantum computing, Mathematics/Physics/Quantum information date = 2021-03-03 notify = mnacho.echenim@univ-grenoble-alpes.fr abstract = This work contains a formalization of quantum projective measurements, also known as von Neumann measurements, which are based on elements of spectral theory. We also formalized the CHSH inequality, an inequality involving expectations in a probability space that is violated by quantum measurements, thus proving that quantum mechanics cannot be modeled with an underlying local hidden-variable theory. [Finite-Map-Extras] title = Finite Map Extras author = Javier Díaz topic = Computer science/Data structures date = 2020-10-12 notify = javier.diaz.manzi@gmail.com abstract = This entry includes useful syntactic sugar, new operators and functions, and their associated lemmas for finite maps which currently are not present in the standard Finite_Map theory. 
[Relational_Minimum_Spanning_Trees] title = Relational Minimum Spanning Tree Algorithms author = Walter Guttmann , Nicolas Robinson-O'Brien<> topic = Computer science/Algorithms/Graph date = 2020-12-08 notify = walter.guttmann@canterbury.ac.nz abstract = We verify the correctness of Prim's, Kruskal's and Borůvka's minimum spanning tree algorithms based on algebras for aggregation and minimisation. [Topological_Semantics] title = Topological semantics for paraconsistent and paracomplete logics author = David Fuenmayor topic = Logic/General logic date = 2020-12-17 notify = davfuenmayor@gmail.com abstract = We introduce a generalized topological semantics for paraconsistent and paracomplete logics by drawing upon early works on topological Boolean algebras (cf. works by Kuratowski, Zarycki, McKinsey & Tarski, etc.). In particular, this work exemplarily illustrates the shallow semantical embeddings approach (SSE) employing the proof assistant Isabelle/HOL. By means of the SSE technique we can effectively harness theorem provers, model finders and 'hammers' for reasoning with quantified non-classical logics. [CSP_RefTK] title = The HOL-CSP Refinement Toolkit author = Safouan Taha , Burkhart Wolff , Lina Ye topic = Computer science/Concurrency/Process calculi, Computer science/Semantics date = 2020-11-19 notify = wolff@lri.fr abstract = We use a formal development for CSP, called HOL-CSP2.0, to analyse a family of refinement notions, comprising classic and new ones. This analysis enables the derivation of a number of properties that deepen the understanding of these notions, in particular with respect to specification decomposition principles for the case of infinite sets of events. The established relations between the refinement relations help to clarify some obscure points in the CSP literature, but also provide a weapon for shorter refinement proofs.
Furthermore, we provide a framework for state normalisation that allows formal reasoning on parameterised process architectures. As a result, we have a modern environment for formal proofs of concurrent systems that allows the combination of general infinite processes with locally finite ones in a logically safe way. We demonstrate these verification techniques on classical, generalised examples: the CopyBuffer for arbitrary data and Dijkstra's Dining Philosophers Problem of arbitrary size. [Hood_Melville_Queue] title = Hood-Melville Queue author = Alejandro Gómez-Londoño topic = Computer science/Data structures date = 2021-01-18 notify = nipkow@in.tum.de abstract = This is a verified implementation of a constant-time queue. The original design is due to Hood and Melville. This formalization follows the presentation in Purely Functional Data Structures by Okasaki. [JinjaDCI] title = JinjaDCI: a Java semantics with dynamic class initialization author = Susannah Mansky topic = Computer science/Programming languages/Language definitions date = 2021-01-11 notify = sjohnsn2@illinois.edu, susannahej@gmail.com abstract = We extend Jinja to include static fields, methods, and instructions, and dynamic class initialization, based on the Java SE 8 specification. This includes extension of definitions and proofs. This work is partially described in Mansky and Gunter's paper at CPP 2019 and Mansky's doctoral thesis (UIUC, 2020). [Blue_Eyes] title = Solution to the xkcd Blue Eyes puzzle author = Jakub Kądziołka topic = Logic/General logic/Logics of knowledge and belief date = 2021-01-30 notify = kuba@kadziolka.net abstract = In a puzzle published by Randall Munroe, perfect logicians forbidden from communicating are stranded on an island, and may only leave once they have figured out their own eye color. We present a method of modeling the behavior of perfect logicians and formalize a solution of the puzzle.
[Laws_of_Large_Numbers] title = The Laws of Large Numbers author = Manuel Eberl topic = Mathematics/Probability theory date = 2021-02-10 notify = manuel@pruvisto.org abstract =

The Law of Large Numbers states that, informally, if one performs a random experiment $X$ many times and takes the average of the results, that average will be very close to the expected value $E[X]$.

More formally, let $(X_i)_{i\in\mathbb{N}}$ be a sequence of independently identically distributed random variables whose expected value $E[X_1]$ exists. Denote the running average of $X_1, \ldots, X_n$ as $\overline{X}_n$. Then:

  • The Weak Law of Large Numbers states that $\overline{X}_{n} \longrightarrow E[X_1]$ in probability for $n\to\infty$, i.e. $\mathcal{P}(|\overline{X}_{n} - E[X_1]| > \varepsilon) \longrightarrow 0$ as $n\to\infty$ for any $\varepsilon > 0$.
  • The Strong Law of Large Numbers states that $\overline{X}_{n} \longrightarrow E[X_1]$ almost surely for $n\to\infty$, i.e. $\mathcal{P}(\overline{X}_{n} \longrightarrow E[X_1]) = 1$.

In this entry, I formally prove the strong law and from it the weak law. The approach used for the proof of the strong law is a particularly quick and slick one based on ergodic theory, which was formalised by Gouëzel in another AFP entry.

[BTree] title = A Verified Imperative Implementation of B-Trees author = Niels Mündler topic = Computer science/Data structures date = 2021-02-24 notify = n.muendler@tum.de abstract = In this work, we use the interactive theorem prover Isabelle/HOL to verify an imperative implementation of the classical B-tree data structure invented by Bayer and McCreight [ACM 1970]. The implementation supports set membership, insertion and deletion queries with efficient binary search for intra-node navigation. This is accomplished by first specifying the structure abstractly in the functional modeling language HOL and proving functional correctness. Using manual refinement, we derive an imperative implementation in Imperative/HOL. We show the validity of this refinement using the separation logic utilities from the Isabelle Refinement Framework. The code can be exported to the programming languages SML, OCaml and Scala. We examine the runtime of all operations indirectly by reproducing the logarithmic relationship between the height of a tree and its number of nodes. The results are discussed in greater detail in the corresponding Bachelor's Thesis. extra-history = Change history: [2021-05-02]: Add implementation and proof of correctness of imperative deletion operations. Further add the option to export code to OCaml.
[Sunflowers] title = The Sunflower Lemma of Erdős and Rado author = René Thiemann topic = Mathematics/Combinatorics date = 2021-02-25 notify = rene.thiemann@uibk.ac.at abstract = We formally define sunflowers and provide a formalization of the sunflower lemma of Erdős and Rado: whenever a set of size-k-sets has a larger cardinality than (r - 1)^k · k!, then it contains a sunflower of cardinality r. [Mereology] title = Mereology author = Ben Blumson topic = Logic/Philosophical aspects date = 2021-03-01 notify = benblumson@gmail.com abstract = We use Isabelle/HOL to verify elementary theorems and alternative axiomatizations of classical extensional mereology. [Modular_arithmetic_LLL_and_HNF_algorithms] title = Two algorithms based on modular arithmetic: lattice basis reduction and Hermite normal form computation author = Ralph Bottesch <>, Jose Divasón , René Thiemann topic = Computer science/Algorithms/Mathematical date = 2021-03-12 notify = rene.thiemann@uibk.ac.at abstract = We verify two algorithms for which modular arithmetic plays an essential role: Storjohann's variant of the LLL lattice basis reduction algorithm and Kopparty's algorithm for computing the Hermite normal form of a matrix. To do this, we also formalize some facts about the modulo operation with symmetric range. Our implementations are based on the original papers, but are otherwise efficient. For basis reduction we formalize two versions: one that includes all of the optimizations/heuristics from Storjohann's paper, and one excluding a heuristic that we observed to often decrease efficiency. We also provide a fast, self-contained certifier for basis reduction, based on the efficient Hermite normal form algorithm. [Constructive_Cryptography_CM] title = Constructive Cryptography in HOL: the Communication Modeling Aspect author = Andreas Lochbihler , S.
Reza Sefidgar <> topic = Computer science/Security/Cryptography, Mathematics/Probability theory date = 2021-03-17 notify = mail@andreas-lochbihler.de, reza.sefidgar@inf.ethz.ch abstract = Constructive Cryptography (CC) [ICS 2011, TOSCA 2011, TCC 2016] introduces an abstract approach to composable security statements that allows one to focus on a particular aspect of security proofs at a time. Instead of proving the properties of concrete systems, CC studies system classes, i.e., the shared behavior of similar systems, and their transformations. Modeling of systems communication plays a crucial role in composability and reusability of security statements; yet, this aspect has not been studied in any of the existing CC results. We extend our previous CC formalization [Constructive_Cryptography, CSF 2019] with a new semantic domain called Fused Resource Templates (FRT) that abstracts over the systems communication patterns in CC proofs. This widens the scope of cryptography proof formalizations in the CryptHOL library [CryptHOL, ESOP 2016, J Cryptol 2020]. This formalization is described in Abstract Modeling of Systems Communication in Constructive Cryptography using CryptHOL. [IFC_Tracking] title = Information Flow Control via Dependency Tracking author = Benedikt Nordhoff topic = Computer science/Security date = 2021-04-01 notify = b.n@wwu.de abstract = We provide a characterisation of how information is propagated by program executions based on the tracking data and control dependencies within executions themselves. The characterisation might be used for deriving approximative safety properties to be targeted by static analyses or checked at runtime. We utilise a simple yet versatile control flow graph model as a program representation. As our model is not assumed to be finite it can be instantiated for a broad class of programs. 
The targeted security property is indistinguishable security, where executions produce sequences of observations and only non-terminating executions are allowed to drop a tail of those. A very crude approximation of our characterisation is slicing based on program dependence graphs, which we use as a minimal example and derive a corresponding soundness result. For further details and applications refer to the author's upcoming dissertation. [Grothendieck_Schemes] title = Grothendieck's Schemes in Algebraic Geometry author = Anthony Bordg , Lawrence Paulson , Wenda Li topic = Mathematics/Algebra, Mathematics/Geometry date = 2021-03-29 notify = apdb3@cam.ac.uk, lp15@cam.ac.uk abstract = We formalize mainstream structures in algebraic geometry culminating in Grothendieck's schemes: presheaves of rings, sheaves of rings, ringed spaces, locally ringed spaces, affine schemes and schemes. We prove that the spectrum of a ring is a locally ringed space, hence an affine scheme. Finally, we prove that any affine scheme is a scheme. [Progress_Tracking] title = Formalization of Timely Dataflow's Progress Tracking Protocol author = Matthias Brun<>, Sára Decova<>, Andrea Lattuada, Dmitriy Traytel topic = Computer science/Algorithms/Distributed date = 2021-04-13 notify = matthias.brun@inf.ethz.ch, traytel@di.ku.dk abstract = Large-scale stream processing systems often follow the dataflow paradigm, which enforces a program structure that exposes a high degree of parallelism. The Timely Dataflow distributed system supports expressive cyclic dataflows for which it offers low-latency data- and pipeline-parallel stream processing. To achieve high expressiveness and performance, Timely Dataflow uses an intricate distributed protocol for tracking the computation’s progress. We formalize this progress tracking protocol and verify its safety. Our formalization is described in detail in our forthcoming ITP'21 paper.
[GaleStewart_Games] title = Gale-Stewart Games author = Sebastiaan Joosten topic = Mathematics/Games and economics date = 2021-04-23 notify = sjcjoosten@gmail.com abstract = This is a formalisation of the main result of Gale and Stewart from 1953, showing that closed finite games are determined. This property is now known as the Gale-Stewart Theorem. While the original paper shows some additional theorems as well, we only formalize this main result, but do so in a somewhat general way. We formalize games of a fixed arbitrary length, including infinite length, using co-inductive lists, and show that defensive strategies exist unless the other player is winning. For closed games, defensive strategies are winning for the closed player, proving that such games are determined. For finite games, which are a special case in our formalisation, all games are closed. [Metalogic_ProofChecker] title = Isabelle's Metalogic: Formalization and Proof Checker author = Tobias Nipkow , Simon Roßkopf topic = Logic/General logic date = 2021-04-27 notify = rosskops@in.tum.de abstract = In this entry we formalize Isabelle's metalogic in Isabelle/HOL. Furthermore, we define a language of proof terms and an executable proof checker and prove its soundness w.r.t. the metalogic. The formalization is intentionally kept close to the Isabelle implementation (for example, using de Bruijn indices) to enable easy integration of generated code with the Isabelle system without a complicated translation layer. The formalization is described in our CADE 28 paper. [Regression_Test_Selection] title = Regression Test Selection author = Susannah Mansky topic = Computer science/Algorithms date = 2021-04-30 notify = sjohnsn2@illinois.edu, susannahej@gmail.com abstract = This development provides a general definition for safe Regression Test Selection (RTS) algorithms. RTS algorithms select which tests to rerun on revised code, reducing the time required to check for newly introduced errors.
An RTS algorithm is considered safe if and only if all deselected tests would have unchanged results. This definition is instantiated with two class-collection-based RTS algorithms run over the JVM as modeled by JinjaDCI. This is achieved with a general definition for Collection Semantics, small-step semantics instrumented to collect information during execution. As the RTS definition mandates safety, these instantiations include proofs of safety. This work is described in Mansky and Gunter's LSFA 2020 paper and Mansky's doctoral thesis (UIUC, 2020). [Padic_Ints] title = Hensel's Lemma for the p-adic Integers author = Aaron Crighton topic = Mathematics/Number theory date = 2021-03-23 notify = crightoa@mcmaster.ca abstract = We formalize the ring of p-adic integers within the framework of the HOL-Algebra library. The carrier of the ring is formalized as the inverse limit of quotients of the integers by powers of a fixed prime p. We define an integer-valued valuation, as well as an extended-integer valued valuation which sends 0 to the infinite element. Basic topological facts about the p-adic integers are formalized, including completeness and sequential compactness. Taylor expansions of polynomials over a commutative ring are defined, culminating in the formalization of Hensel's Lemma based on a proof due to Keith Conrad. [Combinatorics_Words] title = Combinatorics on Words Basics author = Štěpán Holub , Martin Raška<>, Štěpán Starosta topic = Computer science/Automata and formal languages date = 2021-05-24 notify = holub@karlin.mff.cuni.cz, stepan.starosta@fit.cvut.cz abstract = We formalize basics of Combinatorics on Words. This is an extension of existing theories on lists. We provide additional properties related to prefix, suffix, factor, length and rotation. The topics include prefix and suffix comparability, mismatch, word power, total and reversed morphisms, border, periods, primitivity and roots. 
We also formalize basic, mostly folklore results related to word equations: equidivisibility, commutation and conjugation. Slightly advanced properties include the Periodicity lemma (often cited as the Fine and Wilf theorem) and the variant of the Lyndon-Schützenberger theorem for words. We support the algebraic point of view, which sees words as generators of submonoids of a free monoid. This leads to the concepts of the (free) hull and the (free) basis (or code). [Combinatorics_Words_Lyndon] title = Lyndon words author = Štěpán Holub , Štěpán Starosta topic = Computer science/Automata and formal languages date = 2021-05-24 notify = holub@karlin.mff.cuni.cz, stepan.starosta@fit.cvut.cz abstract = Lyndon words are words lexicographically minimal in their conjugacy class. We formalize their basic properties and characterizations, in particular the concepts of the longest Lyndon suffix and the Lyndon factorization. Most of the work assumes a fixed lexicographical order. Nevertheless we also define the smallest relation guaranteeing lexicographical minimality of a given word (in its conjugacy class). [Combinatorics_Words_Graph_Lemma] title = Graph Lemma author = Štěpán Holub , Štěpán Starosta topic = Computer science/Automata and formal languages date = 2021-05-24 notify = holub@karlin.mff.cuni.cz, stepan.starosta@fit.cvut.cz abstract = The graph lemma quantifies the defect effect of a system of word equations. That is, it provides an upper bound on the rank of the system. We formalize the proof based on the decomposition of a solution into its free basis. A direct application is an alternative proof of the fact that two noncommuting words form a code.
[Lifting_the_Exponent] title = Lifting the Exponent author = Jakub Kądziołka topic = Mathematics/Number theory date = 2021-04-27 notify = kuba@kadziolka.net abstract = We formalize the Lifting the Exponent Lemma, which shows how to find the largest power of $p$ dividing $a^n \pm b^n$, for a prime $p$ and positive integers $a$ and $b$. The proof follows Amir Hossein Parvardi's. [IMP_Compiler] title = A Shorter Compiler Correctness Proof for Language IMP author = Pasquale Noce topic = Computer science/Programming languages/Compiling date = 2021-06-04 notify = pasquale.noce.lavoro@gmail.com abstract = This paper presents a compiler correctness proof for the didactic imperative programming language IMP, introduced in Nipkow and Klein's book on formal programming language semantics (version of March 2021), whose size is just two-thirds of the book's proof in the number of formal text lines. As such, it promises to constitute a further enhanced reference for the formal verification of compilers meant for larger, real-world programming languages. The presented proof does not depend on language determinism, so that the proposed approach can be applied to non-deterministic languages as well. As a confirmation, this paper extends IMP with an additional non-deterministic choice command, and proves compiler correctness, viz. the simulation of compiled code execution by source code, for such an extended language. [Public_Announcement_Logic] title = Public Announcement Logic author = Asta Halkjær From topic = Logic/General logic/Logics of knowledge and belief date = 2021-06-17 notify = ahfrom@dtu.dk abstract = This work is a formalization of public announcement logic with countably many agents. It includes proofs of soundness and completeness for a variant of the axiom system PA + DIST! + NEC!. The completeness proof builds on the Epistemic Logic theory.
[MiniSail] title = MiniSail - A kernel language for the ISA specification language SAIL author = Mark Wassell topic = Computer science/Programming languages/Type systems date = 2021-06-18 notify = mpwassell@gmail.com abstract = MiniSail is a kernel language for Sail, an instruction set architecture (ISA) specification language. Sail is an imperative language with a light-weight dependent type system similar to refinement type systems. From an ISA specification, the Sail compiler can generate theorem prover code and C (or OCaml) to give an executable emulator for an architecture. The idea behind MiniSail is to capture the key and novel features of Sail in terms of their syntax, typing rules and operational semantics, and to confirm that they work together by proving progress and preservation lemmas. We use the Nominal2 library to handle binding. [SpecCheck] title = SpecCheck - Specification-Based Testing for Isabelle/ML author = Kevin Kappelmann , Lukas Bulwahn , Sebastian Willenbrink topic = Tools date = 2021-07-01 notify = kevin.kappelmann@tum.de abstract = SpecCheck is a QuickCheck-like testing framework for Isabelle/ML. You can use it to write specifications for ML functions. SpecCheck then checks whether your specification holds by testing your function against a given number of generated inputs. It helps you to identify bugs by printing counterexamples on failure and provides you with timing information. SpecCheck is customisable and allows you to specify your own input generators, test output formats, as well as pretty printers and shrinking functions for counterexamples, among other things. [Relational_Forests] title = Relational Forests author = Walter Guttmann topic = Mathematics/Graph theory date = 2021-08-03 notify = walter.guttmann@canterbury.ac.nz abstract = We study second-order formalisations of graph properties expressed as first-order formulas in relation algebras extended with a Kleene star.
The formulas quantify over relations while still avoiding quantification over elements of the base set. We formalise the property of undirected graphs being acyclic this way. This involves a study of various kinds of orientation of graphs. We also verify basic algorithms to constructively prove several second-order properties. [Fresh_Identifiers] title = Fresh identifiers author = Andrei Popescu , Thomas Bauereiss topic = Computer science/Data structures date = 2021-08-16 notify = thomas@bauereiss.name, a.popescu@sheffield.ac.uk abstract = This entry defines a type class with an operator returning a fresh identifier, given a set of already used identifiers and a preferred identifier. The entry provides a default instantiation for any infinite type, as well as executable instantiations for natural numbers and strings. [CoCon] title = CoCon: A Confidentiality-Verified Conference Management System author = Andrei Popescu , Peter Lammich , Thomas Bauereiss topic = Computer science/Security date = 2021-08-16 notify = thomas@bauereiss.name, a.popescu@sheffield.ac.uk abstract = This entry contains the confidentiality verification of the (functional kernel of) the CoCon conference management system [1, 2]. The confidentiality properties refer to the documents managed by the system, namely papers, reviews, discussion logs and acceptance/rejection decisions, and also to the assignment of reviewers to papers. They have all been formulated as instances of BD Security [3, 4] and verified using the BD Security unwinding technique. [BD_Security_Compositional] title = Compositional BD Security author = Thomas Bauereiss , Andrei Popescu topic = Computer science/Security date = 2021-08-16 notify = thomas@bauereiss.name, a.popescu@sheffield.ac.uk abstract = Building on a previous AFP entry that formalizes the Bounded-Deducibility Security (BD Security) framework [1], we formalize compositionality and transport theorems for information flow security.
These results allow lifting BD Security properties from individual components specified as transition systems, to a composition of systems specified as communicating products of transition systems. The underlying ideas of these results are presented in the papers [1] and [2]. The latter paper also describes a major case study where these results have been used: on verifying the CoSMeDis distributed social media platform (itself formalized as an AFP entry that builds on this entry). [CoSMed] title = CoSMed: A confidentiality-verified social media platform author = Thomas Bauereiss , Andrei Popescu topic = Computer science/Security date = 2021-08-16 notify = thomas@bauereiss.name, a.popescu@sheffield.ac.uk abstract = This entry contains the confidentiality verification of the (functional kernel of) the CoSMed social media platform. The confidentiality properties are formalized as instances of BD Security [1, 2]. An innovation in the deployment of BD Security compared to previous work is the use of dynamic declassification triggers, incorporated as part of inductive bounds, for providing stronger guarantees that account for the repeated opening and closing of access windows. To further strengthen the confidentiality guarantees, we also prove "traceback" properties about the accessibility decisions affecting the information managed by the system. [CoSMeDis] title = CoSMeDis: A confidentiality-verified distributed social media platform author = Thomas Bauereiss , Andrei Popescu topic = Computer science/Security date = 2021-08-16 notify = thomas@bauereiss.name, a.popescu@sheffield.ac.uk abstract = This entry contains the confidentiality verification of the (functional kernel of) the CoSMeDis distributed social media platform presented in [1]. CoSMeDis is a multi-node extension of the CoSMed prototype social media platform [2, 3, 4]. The confidentiality properties are formalized as instances of BD Security [5, 6].
The lifting of confidentiality properties from single nodes to the entire CoSMeDis network is performed using compositionality and transport theorems for BD Security, which are described in [1] and formalized in a separate AFP entry. [Three_Circles] title = The Theorem of Three Circles author = Fox Thomson , Wenda Li topic = Mathematics/Analysis date = 2021-08-21 notify = foxthomson0@gmail.com, wl302@cam.ac.uk abstract = The Descartes test based on Bernstein coefficients and Descartes’ rule of signs effectively (over-)approximates the number of real roots of a univariate polynomial over an interval. In this entry we formalise the theorem of three circles, which gives sufficient conditions for when the Descartes test returns 0 or 1. This is the first step for efficient root isolation. [Design_Theory] title = Combinatorial Design Theory author = Chelsea Edmonds , Lawrence Paulson topic = Mathematics/Combinatorics date = 2021-08-13 notify = cle47@cam.ac.uk abstract = Combinatorial design theory studies incidence set systems with certain balance and symmetry properties. It is closely related to hypergraph theory. This formalisation presents a general library for formal reasoning on incidence set systems, designs and their applications, including formal definitions and proofs for many key properties, operations, and theorems on the construction and existence of designs. Notably, this includes formalising t-designs, balanced incomplete block designs (BIBD), group divisible designs (GDD), pairwise balanced designs (PBD), design isomorphisms, and the relationship between graphs and designs. A locale-centric approach has been used to manage the relationships between the many different types of designs. Theorems of particular interest include the necessary conditions for existence of a BIBD, Wilson's construction on GDDs, and Bose's inequality on resolvable designs. 
Parts of this formalisation are explored in the paper "A Modular First Formalisation of Combinatorial Design Theory", presented at CICM 2021. [Logging_Independent_Anonymity] title = Logging-independent Message Anonymity in the Relational Method author = Pasquale Noce topic = Computer science/Security date = 2021-08-26 notify = pasquale.noce.lavoro@gmail.com abstract = In the context of formal cryptographic protocol verification, logging-independent message anonymity is the property for a given message to remain anonymous despite the attacker's capability of mapping messages of that sort to agents based on some intrinsic feature of such messages, rather than by logging the messages exchanged by legitimate agents as with logging-dependent message anonymity. This paper illustrates how logging-independent message anonymity can be formalized according to the relational method for formal protocol verification by considering a real-world protocol, namely the Restricted Identification one by the BSI. This sample model is used to verify that the pseudonymous identifiers output by user identification tokens remain anonymous under the expected conditions. [Dominance_CHK] title = A data flow analysis algorithm for computing dominators author = Nan Jiang<> topic = Computer science/Programming languages/Static analysis date = 2021-09-05 notify = nanjiang@whu.edu.cn abstract = This entry formalises the fast iterative algorithm for computing dominators due to Cooper, Harvey and Kennedy. It gives a specification of computing dominators on a control flow graph where each node refers to its reverse post order number. A semilattice of reversed-ordered list which represents dominators is built and a Kildall-style algorithm on the semilattice is defined for computing dominators. Finally the soundness and completeness of the algorithm are proved w.r.t. the specification. 
[Conditional_Simplification] title = Conditional Simplification author = Mihails Milehins topic = Tools date = 2021-09-06 notify = mihailsmilehins@gmail.com abstract = The article provides a collection of experimental general-purpose proof methods for the object logic Isabelle/HOL of the formal proof assistant Isabelle. The methods in the collection offer functionality that is similar to certain aspects of the functionality provided by the standard proof methods of Isabelle that combine classical reasoning and rewriting, such as the method auto, but use a different approach for rewriting. More specifically, these methods allow for the side conditions of the rewrite rules to be solved via intro-resolution. [Intro_Dest_Elim] title = IDE: Introduction, Destruction, Elimination author = Mihails Milehins topic = Tools date = 2021-09-06 notify = mihailsmilehins@gmail.com abstract = The article provides the command mk_ide for the object logic Isabelle/HOL of the formal proof assistant Isabelle. The command mk_ide enables the automated synthesis of the introduction, destruction and elimination rules from arbitrary definitions of constant predicates stated in Isabelle/HOL. [CZH_Foundations] title = Category Theory for ZFC in HOL I: Foundations: Design Patterns, Set Theory, Digraphs, Semicategories author = Mihails Milehins topic = Mathematics/Category theory, Logic/Set theory date = 2021-09-06 notify = mihailsmilehins@gmail.com abstract = This article provides a foundational framework for the formalization of category theory in the object logic ZFC in HOL of the formal proof assistant Isabelle. 
More specifically, this article provides a formalization of canonical set-theoretic constructions internalized in the type V associated with the ZFC in HOL, establishes a design pattern for the formalization of mathematical structures using sequences and locales, and showcases the developed infrastructure by providing formalizations of the elementary theories of digraphs and semicategories. The methodology chosen for the formalization of the theories of digraphs and semicategories (and categories in future articles) rests on the ideas that were originally expressed in the article Set-Theoretical Foundations of Category Theory written by Solomon Feferman and Georg Kreisel. Thus, in the context of this work, each of the aforementioned mathematical structures is represented as a term of the type V embedded into a stage of the von Neumann hierarchy. [CZH_Elementary_Categories] title = Category Theory for ZFC in HOL II: Elementary Theory of 1-Categories author = Mihails Milehins topic = Mathematics/Category theory date = 2021-09-06 notify = mihailsmilehins@gmail.com abstract = This article provides a formalization of the foundations of the theory of 1-categories in the object logic ZFC in HOL of the formal proof assistant Isabelle. The article builds upon the foundations that were established in the AFP entry Category Theory for ZFC in HOL I: Foundations: Design Patterns, Set Theory, Digraphs, Semicategories. [CZH_Universal_Constructions] title = Category Theory for ZFC in HOL III: Universal Constructions author = Mihails Milehins topic = Mathematics/Category theory date = 2021-09-06 notify = mihailsmilehins@gmail.com abstract = The article provides a formalization of elements of the theory of universal constructions for 1-categories (such as limits, adjoints and Kan extensions) in the object logic ZFC in HOL of the formal proof assistant Isabelle. 
The article builds upon the foundations established in the AFP entry Category Theory for ZFC in HOL II: Elementary Theory of 1-Categories. [Conditional_Transfer_Rule] title = Conditional Transfer Rule author = Mihails Milehins topic = Tools date = 2021-09-06 notify = mihailsmilehins@gmail.com abstract = This article provides a collection of experimental utilities for unoverloading of definitions and synthesis of conditional transfer rules for the object logic Isabelle/HOL of the formal proof assistant Isabelle written in Isabelle/ML. [Types_To_Sets_Extension] title = Extension of Types-To-Sets author = Mihails Milehins topic = Tools date = 2021-09-06 notify = mihailsmilehins@gmail.com abstract = In their article titled From Types to Sets by Local Type Definitions in Higher-Order Logic and published in the proceedings of the conference Interactive Theorem Proving in 2016, Ondřej Kunčar and Andrei Popescu propose an extension of the logic Isabelle/HOL and an associated algorithm for the relativization of the type-based theorems to more flexible set-based theorems, collectively referred to as Types-To-Sets. One of the aims of their work was to open an opportunity for the development of a software tool for applied relativization in the implementation of the logic Isabelle/HOL of the proof assistant Isabelle. In this article, we provide a prototype of a software framework for the interactive automated relativization of theorems in Isabelle/HOL, developed as an extension of the proof language Isabelle/Isar. 
The software framework incorporates the implementation of the proposed extension of the logic, and builds upon some of the ideas for further work expressed in the original article on Types-To-Sets by Ondřej Kunčar and Andrei Popescu and the subsequent article Smooth Manifolds and Types to Sets for Linear Algebra in Isabelle/HOL that was written by Fabian Immler and Bohua Zhan and published in the proceedings of the International Conference on Certified Programs and Proofs in 2019. - + [Complex_Bounded_Operators] title = Complex Bounded Operators author = Jose Manuel Rodriguez Caballero , Dominique Unruh topic = Mathematics/Analysis date = 2021-09-18 notify = unruh@ut.ee -abstract = +abstract = We present a formalization of bounded operators on complex vector spaces. Our formalization contains material on complex vector spaces (normed spaces, Banach spaces, Hilbert spaces) that complements and goes beyond the developments of real vector spaces in the Isabelle/HOL standard library. We define the type of bounded operators between complex vector spaces (cblinfun) and develop the theory of unitaries, projectors, extension of bounded linear functions (BLT theorem), adjoints, Loewner order, closed subspaces and more. For the finite-dimensional case, we provide code generation support by identifying finite-dimensional operators with matrices as formalized in the Jordan_Normal_Form AFP entry. [Weighted_Path_Order] title = A Formalization of Weighted Path Orders and Recursive Path Orders author = Christian Sternagel , René Thiemann , Akihisa Yamada topic = Logic/Rewriting date = 2021-09-16 notify = rene.thiemann@uibk.ac.at -abstract = +abstract = We define the weighted path order (WPO) and formalize several properties such as strong normalization, the subterm property, and closure properties under substitutions and contexts. Our definition of WPO extends the original definition by also permitting multiset comparisons of arguments instead of just lexicographic extensions.
Therefore, our WPO not only subsumes lexicographic path orders (LPO), but also recursive path orders (RPO). We formally prove these subsumptions and therefore all of the mentioned properties of WPO are automatically transferable to LPO and RPO as well. Such a transformation is not required for Knuth–Bendix orders (KBO), since they have already been formalized. Nevertheless, we still provide a proof that WPO subsumes KBO and thereby underline the generality of WPO. [FOL_Axiomatic] title = Soundness and Completeness of an Axiomatic System for First-Order Logic author = Asta Halkjær From topic = Logic/General logic/Classical first-order logic, Logic/Proof theory date = 2021-09-24 notify = ahfrom@dtu.dk abstract = This work is a formalization of the soundness and completeness of an axiomatic system for first-order logic. The proof system is based on System Q1 by Smullyan and the completeness proof follows his textbook "First-Order Logic" (Springer-Verlag 1968). The completeness proof is in the Henkin style where a consistent set is extended to a maximal consistent set using Lindenbaum's construction and Henkin witnesses are added during the construction to ensure saturation as well. The resulting set is a Hintikka set which, by the model existence theorem, is satisfiable in the Herbrand universe. - - + + [Virtual_Substitution] title = Verified Quadratic Virtual Substitution for Real Arithmetic author = Matias Scharager , Katherine Cordwell , Stefan Mitsch , André Platzer topic = Computer science/Algorithms/Mathematical date = 2021-10-02 notify = mscharag@cs.cmu.edu, kcordwel@cs.cmu.edu, smitsch@cs.cmu.edu, aplatzer@cs.cmu.edu -abstract = +abstract = This paper presents a formally verified quantifier elimination (QE) algorithm for first-order real arithmetic by linear and quadratic virtual substitution (VS) in Isabelle/HOL. The Tarski-Seidenberg theorem established that the first-order logic of real arithmetic is decidable by QE. 
However, in practice, QE algorithms are highly complicated and often combine multiple methods for performance. VS is a practically successful method for QE that targets formulas with low-degree polynomials. To our knowledge, this is the first work to formalize VS for quadratic real arithmetic including inequalities. The proofs necessitate various contributions to the existing multivariate polynomial libraries in Isabelle/HOL. Our framework is modularized and easily expandable (to facilitate integrating future optimizations), and could serve as a basis for developing practical general-purpose QE algorithms. Further, as our formalization is designed with practicality in mind, we export our development to SML and test the resulting code on 378 benchmarks from the literature, comparing to Redlog, Z3, Wolfram Engine, and SMT-RAT. This identified inconsistencies in some tools, underscoring the significance of a verified approach for the intricacies of real arithmetic. +[Correctness_Algebras] +title = Algebras for Iteration, Infinite Executions and Correctness of Sequential Computations +author = Walter Guttmann +topic = Computer science/Programming languages/Logics +date = 2021-10-12 +notify = walter.guttmann@canterbury.ac.nz +abstract = + We study models of state-based non-deterministic sequential + computations and describe them using algebras. We propose algebras + that describe iteration for strict and non-strict computations. They + unify computation models which differ in the fixpoints used to + represent iteration. We propose algebras that describe the infinite + executions of a computation. They lead to a unified approximation + order and results that connect fixpoints in the approximation and + refinement orders. This unifies the semantics of recursion for a range + of computation models. We propose algebras that describe preconditions + and the effect of while-programs under postconditions. 
They unify + correctness statements in two dimensions: one statement applies in + various computation models to various correctness claims. + +[Belief_Revision] +title = Belief Revision Theory +author = Valentin Fouillard , Safouan Taha , Frédéric Boulanger , Nicolas Sabouret <> +topic = Logic/General logic/Logics of knowledge and belief +date = 2021-10-19 +notify = safouan.taha@lri.fr, valentin.fouillard@limsi.fr +abstract = + The 1985 paper by Carlos Alchourrón, Peter Gärdenfors, and David + Makinson (AGM), “On the Logic of Theory Change: Partial Meet + Contraction and Revision Functions” launches a large and rapidly + growing literature that employs formal models and logics to handle + changing beliefs of a rational agent and to take into account new + pieces of information observed by this agent. In 2011, a review book + titled "AGM 25 Years: Twenty-Five Years of Research in Belief + Change" was edited to summarize the first twenty-five years of + work based on AGM. This HOL-based AFP entry is a faithful + formalization of the AGM operators (e.g. contraction, revision, + remainder ...) axiomatized in the original paper. It also contains the + proofs of all the theorems stated in the paper that show how these + operators combine. Proofs of both the Harper and Levi identities are + established. + + diff --git a/thys/Belief_Revision/AGM_Contraction.thy b/thys/Belief_Revision/AGM_Contraction.thy new file mode 100755 --- /dev/null +++ b/thys/Belief_Revision/AGM_Contraction.thy @@ -0,0 +1,674 @@ +(*<*) +\\ ******************************************************************** + * Project : AGM Theory + * Version : 1.0 + * + * Authors : Valentin Fouillard, Safouan Taha, Frederic Boulanger + and Nicolas Sabouret + * + * This file : AGM contraction + * + * Copyright (c) 2021 Université Paris Saclay, France + * + * All rights reserved.
+ * + ******************************************************************************\ + +theory AGM_Contraction + +imports AGM_Logic AGM_Remainder + +begin +(*>*) + +section \Contractions\ +text\The first operator of belief change of the AGM framework is contraction. This operator consists in removing +a sentence @{term \\\} from a belief set @{term \K\} in such a way that @{term \K\} no longer implies @{term \\\}. + +In the following we will first axiomatize such operators at different levels of logic (Tarskian, supraclassical and compact) +and then we will give constructions satisfying these axioms. The following graph summarizes all equivalences we established: + +\includegraphics[width=\textwidth]{"graph_locales.pdf"} + +We will use the extension feature of locales in Isabelle/HOL to incrementally define the contraction +operator, as shown by the blue arrows in the previous figure. Then, using the interpretation feature of locales, we will prove the equivalence between +the descriptive and constructive approaches at each level depending on the adopted logics (black arrows). +\ + +subsection\AGM contraction postulates\ +text\ +The operator of contraction is denoted by the symbol @{text \\
\} and respects the following six conditions: +\<^item> @{text \contract_closure\} : a belief set @{term \K\} contracted by @{term \\\} should be logically closed +\<^item> @{text \contract_inclusion\} : a contracted set @{term \K\} should be a subset of the original one +\<^item> @{text \contract_vacuity\} : if @{term \\\} is not included in a set @{term \K\} then the contraction of @{term \K\} by @{term \\\} involves no change at all +\<^item> @{text \contract_success\} : if a set @{term \K\} is contracted by @{term \\\} then the contracted set does not imply @{term \\\} +\<^item> @{text \contract_recovery\}: all propositions removed from a set @{term \K\} by the contraction of @{term \\\} will be recovered by the expansion of @{term \\\} +\<^item> @{text \contract_extensionality\} : Extensionality guarantees that the logic of contraction is extensional in the sense of allowing logically +equivalent sentences to be freely substituted for each other\ +locale AGM_Contraction = Tarskian_logic + +fixes contraction::\'a set \ 'a \ 'a set\ (infix \\
\ 55) +assumes contract_closure: \K = Cn(A) \ K \
\ = Cn(K \
\)\ + and contract_inclusion: \K = Cn(A) \ K \
\ \ K\ + and contract_vacuity: \K = Cn(A) \ \ \ K \ K \
\ = K\ + and contract_success: \K = Cn(A) \ \ \ Cn({}) \ \ \ K \
\\ + and contract_recovery: \K = Cn(A) \ K \ ((K \
\) \ \)\ + and contract_extensionality: \K = Cn(A) \ Cn({\}) = Cn({\}) \ K \
\ = K \
\\ + + +text\ +A full contraction is defined by two more postulates governing conjunction. We build on a supraclassical logic. +\<^item> @{text \contract_conj_overlap\} : An element in both @{text \K \
\\} and @{text \K \
\\} is also an element of @{text \K \
(\ \ \)\} +\<^item> @{text \contract_conj_inclusion\} : If @{term \\\} is not in @{text \K \
(\ \ \)\} then all elements removed by this contraction are also removed from @{text \K \
\\}\ +locale AGM_FullContraction = AGM_Contraction + Supraclassical_logic + + assumes contract_conj_overlap: \K = Cn(A) \ (K \
\) \ (K \
\) \ (K \
(\ .\. \))\ + and contract_conj_inclusion: \K = Cn(A) \ \ \ (K \
(\ .\. \)) \ ((K \
(\ .\. \) \ (K \
\)))\ + +begin +\ \two important lemmas/corollaries that can replace the two assumptions @{text \contract_conj_overlap\} and @{text \contract_conj_inclusion\}\ +text\@{text \contract_conj_overlap_variant\} does not need \\\ to occur on the left-hand side! \ +corollary contract_conj_overlap_variant: \K = Cn(A) \ (K \
\) \ Cn({\}) \ (K \
(\ .\. \))\ +proof - + assume a:\K = Cn(A)\ + { assume b:\K \ \\ and c:\K \ \\ + hence d:\K \
(\ .\. \) = K \
(\ .\. ((.\ \) .\. \))\ + apply(rule_tac contract_extensionality[OF a]) + using conj_overlap[of _ \ \] by (simp add: Cn_same) + have e:\K \ Cn {\} \ K \
(.\ \ .\. \)\ + proof(safe) + fix \ + assume f:\\ \ K\ and g:\\ \ Cn {\}\ + have \K \
(.\ \ .\. \) \ (.\ \ .\. \) .\. \\ + by (metis a contract_recovery expansion_def f impI_PL infer_def subset_eq) + hence \K \
(.\ \ .\. \) \ .\ \ .\. \\ + by (meson disjI1_PL imp_trans inclusion_L infer_def insert_subset validD_L valid_imp_PL) + with g show \\ \ K \
(.\ \ .\. \)\ + by (metis a contract_closure disjE_PL ex_mid_PL infer_def validD_L valid_imp_PL) + qed + have ?thesis + unfolding d using e contract_conj_overlap[OF a, of \ \(.\ \ .\. \)\] a contract_inclusion by force + } + then show ?thesis + apply (cases \\ K \ \ \ \ K \ \\) + by (metis IntE a assumption_L conjE1_PL conjE2_PL contract_inclusion contract_vacuity subsetD subsetI) blast +qed + +text\@{text \contract_conj_inclusion_variant\}: Everything retained in @{text \K \
(\ \ \)\} is retained in @{text \K \
\\}\ +corollary contract_conj_inclusion_variant : \K = Cn(A) \ (K \
(\ .\. \) \ (K \
\)) \ (K \
(\ .\. \) \ (K \
\))\ +proof - + assume a:\K = Cn(A)\ + { assume d:\\ \ (K \
(\ .\. \)) \ \ \ (K \
(\ .\. \))\ + hence \\ .\. \ \ (K \
(\ .\. \))\ + using Supraclassical_logic.conjI_PL Supraclassical_logic_axioms a contract_closure by fastforce + with d have ?thesis + by (metis (no_types, lifting) Supraclassical_logic.valid_conj_PL Supraclassical_logic_axioms + Tarskian_logic.valid_expansion Tarskian_logic_axioms a contract_closure contract_inclusion + contract_recovery contract_success dual_order.trans expansion_def) + } + then show ?thesis + by (metis a conj_com_Cn contract_conj_inclusion contract_extensionality) +qed + +end + +subsection \Partial meet contraction definition\ + +text\A partial meet contraction of @{term \K\} by @{term \\\} is the intersection of some sets that do not imply @{term \\\}. +We define these sets as the "remainders" @{text \(K .\. \\}. +The selection function @{term \\\} selects the best sets among the remainders that do not imply @{term \\\}. +This function respects the following postulates: +\<^item> @{text \is_selection\} : if there exists some set that does not imply @{term \\\} then the selection @{term \\\} is a subset of these sets +\<^item> @{text \tautology_selection\} : if there is no set that does not imply @{term \\\} then the result of the selection function is @{term \K\} +\<^item> @{text nonempty_selection} : the selection function never returns an empty set +\<^item> @{text extensional_selection} : two propositions with the same closure have the same selection\ +locale PartialMeetContraction = Tarskian_logic + + +fixes selection::\'a set \ 'a \ 'a set set\ (\\\) +assumes is_selection: \K = Cn(A) \ (K .\. \) \ {} \ \ K \ \ \ (K .\. \)\ +assumes tautology_selection: \K = Cn(A) \ (K .\. \) = {} \ \ K \ = {K}\ +assumes nonempty_selection: \K = Cn(A) \ \ K \ \ {}\ +assumes extensional_selection: \K = Cn(A) \ Cn({\}) = Cn({\}) \ \ K \ = \ K \\ + +\ \extensionality seems very hard to implement for a constructive approach, +one basic implementation would be to ignore @{term \A\} and @{term \\\} +and only base on @{text \A .\. 
\\} that +has already been proved extensional (lemma @{text \remainder_extensionality\})\ + +begin + +text \A partial meet is the intersection of the set of selected elements.\ +definition (in Tarskian_logic) meet_contraction::\'a set \ ('a set \ 'a \ 'a set set) \ 'a \ 'a set\ (\_ \
\<^bsub>_\<^esub> _\ [60,50,60]55) + where mc: \(A \
\<^bsub>\\<^esub> \) \ \(\ A \)\ + +text \Following this definition, four of the AGM postulates can be proved for a partial meet contraction: +\<^item> @{text \contract_inclusion\} +\<^item> @{text \contract_vacuity\} +\<^item> @{text \contract_closure\} +\<^item> @{text \contract_extensionality\}\ + +text \@{text \pmc_inclusion\} : a partial meet contraction is a subset of the contracted set\ +lemma pmc_inclusion: \K = Cn(A) \ K \
\<^bsub>\\<^esub> \ \ K\ + apply (cases \(K .\. \) = {}\, simp_all add: mc tautology_selection) + by (meson Inf_less_eq in_mono is_selection nonempty_selection rem_inclusion) + +text\@{text \pmc_vacuity\} : if @{term \\\} is not included in a set @{term \K\} then the partial meet contraction of @{term \K\} by @{term \\\} involves no change at all\ +lemma pmc_vacuity: \K = Cn(A) \ \ K \ \ \ K \
\<^bsub>\\<^esub> \ = K\ + unfolding mc nonconsequence_remainder + by (metis Inf_superset_mono Un_absorb1 cInf_singleton insert_not_empty is_selection mc nonconsequence_remainder pmc_inclusion sup_commute) + +text\@{text \pmc_closure\} : a partial meet contraction is logically closed\ +lemma pmc_closure: \K = Cn(A) \ (K \
\<^bsub>\\<^esub> \) = Cn(K \
\<^bsub>\\<^esub> \)\ +proof (rule subset_antisym, simp_all add:inclusion_L mc transitivity_L, goal_cases) + case 1 + have \\(\ (Cn A) \) = \{Cn(B)|B. B \ \ (Cn A) \}\ + by auto (metis idempotency_L insert_absorb insert_iff insert_subset is_selection rem_closure tautology_selection)+ + from Cn_Inter[OF this] show ?case by blast +qed + +text \@{text \pmc_extensionality\} : Extensionality guarantees that the logic of contraction is extensional in the sense of allowing logically equivalent sentences to be freely substituted for each other\ +lemma pmc_extensionality: \K = Cn(A) \ Cn({\}) = Cn({\}) \ K \
\<^bsub>\\<^esub> \ = K \
\<^bsub>\\<^esub> \\ + by (metis extensional_selection mc) + +text \@{text \pmc_tautology\} : if @{term \\\} is a tautology then the partial meet contraction of @{term \K\} by @{term \\\} is @{term \K\}\ +lemma pmc_tautology: \K = Cn(A) \ \ \ \ K \
\<^bsub>\\<^esub> \ = K\ + by (simp add: mc taut2emptyrem tautology_selection) + +text\@{text \completion\} is an operator that can build an equivalent selection from an existing one\ +definition (in Tarskian_logic) completion::\('a set \ 'a \ 'a set set) \ 'a set \ 'a \ 'a set set\ (\*\) + where \* \ A \ \ if (A .\. \) = {} then {A} else {B. B \ A .\. \ \ \ (\ A \) \ B}\ + + +lemma selection_completion: "K = Cn(A) \ \ K \ \ * \ K \" + using completion_def is_selection tautology_selection by fastforce + +lemma (in Tarskian_logic) completion_completion: "K = Cn(A) \ * (* \) K \ = * \ K \" + by (auto simp add:completion_def) + +lemma pmc_completion: \K = Cn(A) \ K \
\<^bsub>*\\<^esub> \ = K \
\<^bsub>\\<^esub> \\ + apply(auto simp add: mc completion_def tautology_selection) + by (metis Inter_lower equals0D in_mono is_selection) + +end + +text\A transitively relational meet contraction is a partial meet contraction using a binary relation between the elements of the selection function\ +text\The relation is: +\<^item> transitive (@{text \trans_rel\}) +\<^item> nonempty (there is always an element preferred to the others (@{text \nonempty_rel\}))\ + +text\A selection function @{term \\\<^sub>T\<^sub>R\} is transitively relational @{text \rel_sel\} under the following conditions: +\<^item> If the set of remainders @{text \K .\. \\} is empty then the selection function returns @{term \K\} +\<^item> Otherwise the selection function returns the remainders preferred by a nonempty transitive relation on the remainders\ +locale TransitivelyRelationalMeetContraction = Tarskian_logic + + +fixes relation::\'a set \ 'a set \ 'a set \ bool\ (\_ \\<^bsub>_\<^esub> _\ [60,50,60]55) +assumes trans_rel: \K = Cn(A) \ B \\<^bsub>K\<^esub> C \ C \\<^bsub>K\<^esub> D \ B \\<^bsub>K\<^esub> D\ +assumes nonempty_rel: \K = Cn(A) \ (K .\. \) \ {} \ \B\(K .\. \). (\C\(K .\. \). C \\<^bsub>K\<^esub> B)\ \ \unclear in the literature\ + +fixes rel_sel::\'a set \ 'a \ 'a set set\ (\\\<^sub>T\<^sub>R\) +defines rel_sel: \\\<^sub>T\<^sub>R K \ \ if (K .\. \) = {} then {K} + else {B. B\(K .\. \) \ (\C\(K .\. \). 
C \\<^bsub>K\<^esub> B)}\ + +begin + +text\A transitively relational selection function respect the partial meet contraction postulates.\ +sublocale PartialMeetContraction where selection = \\<^sub>T\<^sub>R + apply(unfold_locales) + apply(simp_all add: rel_sel) + using nonempty_rel apply blast + using remainder_extensionality by blast + +end + +text\A full meet contraction is a limiting case of the partial meet contraction where if the remainders are not empty then +the selection function return all the remainders (as defined by @{text \full_sel\}\ +locale FullMeetContraction = Tarskian_logic + + +fixes full_sel::\'a set \ 'a \ 'a set set\ (\\\<^sub>F\<^sub>C\) +defines full_sel: \\\<^sub>F\<^sub>C K \ \ if K .\. \ = {} then {K} else K .\. \\ + +begin + + +text\A full selection and a relation ? is a transitively relational meet contraction postulates.\ +sublocale TransitivelyRelationalMeetContraction where relation = \\ K A B. True\ and rel_sel=\\<^sub>F\<^sub>C + by (unfold_locales, auto simp add:full_sel, rule eq_reflection, simp) + +end + +subsection\Equivalence of partial meet contraction and AGM contraction\ + + +locale PMC_SC = PartialMeetContraction + Supraclassical_logic + Compact_logic + +begin + +text \In a context of a supraclassical and a compact logic the two remaining postulates of AGM contraction : +\<^item> @{text \contract_recovery\} +\<^item> @{text \contract_success\} +can be proved on a partial meet contraction.\ + +text\@{text \pmc_recovery\} : all proposition removed by a partial meet contraction of @{term \\\} will be recovered by the expansion of @{term \\\}\ + +\ \recovery requires supraclassicality\ +lemma pmc_recovery: \K = Cn(A) \ K \ ((K \
\<^bsub>\\<^esub> \) \ \)\ + apply(cases \(K .\. \) = {}\, simp_all (no_asm) add:mc expansion_def) + using inclusion_L tautology_selection apply fastforce + proof - + assume a:\K = Cn(A)\ and b:\K .\. \ \ {}\ + { fix \ + assume d:\K \ \\ + have \\ .\. \ \ \(\ K \)\ + using is_selection[OF a b] + by auto (metis a d infer_def rem_closure remainder_recovery subsetD) + } + with a b show \K \ Cn (insert \ (\ (\ K \)))\ + by (metis (no_types, lifting) Un_commute assumption_L imp_PL infer_def insert_is_Un subsetI) + qed + +text \@{text \pmc_success\} : a partial meet contraction of @{term \K\} by @{term \\\} does not imply @{term \\\}\ +\ \success requires compactness\ +lemma pmc_success: \K = Cn(A) \ \ \ Cn({}) \ \ \ K \
\<^bsub>\\<^esub> \\ +proof + assume a:\K = Cn(A)\ and b:\\ \ Cn({})\ and c:\\ \ K \
\<^bsub>\\<^esub> \\ + from c show False unfolding mc + proof(cases \K .\. \ = {K}\) + case True + then show ?thesis + by (meson assumption_L c nonconsequence_remainder pmc_inclusion[OF a] subsetD) + next + case False + hence \\B\K .\. \. \ \ B\ using assumption_L rem by auto + moreover have \K .\. \ \ {}\ using b emptyrem2taut validD_L by blast + ultimately show ?thesis + using b c mc nonempty_selection[OF a] validD_L emptyrem2taut is_selection[OF a] + by (metis Inter_iff bot.extremum_uniqueI subset_iff) + qed +qed + +text\As a partial meet contraction has been proven to respect all postulates of AGM contraction, +the equivalence between the two is straightforward\ +sublocale AGM_Contraction where contraction = \\A \. A \
\<^bsub>\\<^esub> \\ + using pmc_closure pmc_inclusion pmc_vacuity + pmc_success pmc_recovery pmc_extensionality + expansion_def idempotency_L infer_def + by (unfold_locales) metis+ + +end + + +locale AGMC_SC = AGM_Contraction + Supraclassical_logic + Compact_logic + +begin + +text \obs 2.5 page 514\ +definition AGM_selection::\'a set \ 'a \ 'a set set\ (\\\<^sub>A\<^sub>G\<^sub>M\) + where AGM_sel: \\\<^sub>A\<^sub>G\<^sub>M A \ \ if A .\. \ = {} then {A} else {B. B \ A .\. \ \ A \
\ \ B}\ + +text\The selection function @{term \\\<^sub>A\<^sub>G\<^sub>M\} respects the partial meet contraction postulates\ +sublocale PartialMeetContraction where selection = \\<^sub>A\<^sub>G\<^sub>M +proof(unfold_locales, unfold AGM_sel, simp_all, goal_cases) + case (1 K A \) \ \@{text \non_emptiness\} of selection requires a compact logic\ + then show ?case using upper_remainder[of \K \
\\ K \] contract_success[OF 1(1)] + by (metis contract_closure contract_inclusion infer_def taut2emptyrem valid_def) +next + case (2 K A \ \) + then show ?case + by (metis (mono_tags, lifting) contract_extensionality Collect_cong remainder_extensionality) +qed + +text \@{text \contraction_is_pmc\} : an AGM contraction is equivalent to a partial meet contraction using the selection function \\\<^sub>A\<^sub>G\<^sub>M\\ +lemma contraction_is_pmc: \K = Cn(A) \ K \
\ = K \
\<^bsub>\\<^sub>A\<^sub>G\<^sub>M\<^esub> \\ \ \requires a supraclassical logic\ +proof + assume a:\K = Cn(A)\ + show \K \
\ \ K \
\<^bsub>\\<^sub>A\<^sub>G\<^sub>M\<^esub> \\ + using contract_inclusion[OF a] by (auto simp add:mc AGM_sel) +next + assume a:\K = Cn(A)\ + show \K \
\<^bsub>\\<^sub>A\<^sub>G\<^sub>M\<^esub> \ \ K \
\\ + proof (cases \\ \\) + case True + hence \K .\. \ = {}\ + using nonconsequence_remainder taut2emptyrem by auto + then show ?thesis + apply(simp_all add:mc AGM_sel) + by (metis a emptyrem2taut contract_closure contract_recovery valid_expansion) + next + case validFalse:False + then show ?thesis + proof (cases \K \ \\) + case True + hence b:\K .\. \ \ {}\ + using emptyrem2taut validFalse by blast + have d:\\ \ K \ \ .\. \ \ K \
\\ for \ + using Supraclassical_logic.impI_PL Supraclassical_logic_axioms a contract_closure contract_recovery expansion_def by fastforce + { fix \ + assume e:\\ \ K \ and f:\\ \ K \
\\ + have \(\ .\. \) .\. \ \ K \
\\ + using imp_recovery2[of \K \
\\ \ \] a contract_closure d e f by auto + hence g:\\ (K \
\) \ {\ .\. \} \ \\ + using a contract_closure impI_PL by fastforce + then obtain B where h:\(K \
\) \ {\ .\. \} \ B\ and i:\B \ K .\. \\ + using upper_remainder[of \(K \
\) \ {\ .\. \}\ K \] a True contract_inclusion idempotency_L impI2 by auto + hence j:\\ \ Cn(B)\ + by (metis (no_types, lifting) CollectD mp_PL Un_insert_right a infer_def insert_subset rem rem_closure) + have \\ \ K \
\<^bsub>\\<^sub>A\<^sub>G\<^sub>M\<^esub> \\ + apply(simp add:mc AGM_sel b, rule_tac x=B in exI) + by (meson Tarskian_logic.assumption_L Tarskian_logic_axioms h i j le_sup_iff) + } + then show ?thesis + using a pmc_inclusion by fastforce + next + case False + hence \K .\. \ = {K}\ + using nonconsequence_remainder taut2emptyrem by auto + then show ?thesis + using False a contract_vacuity idempotency_L pmc_vacuity by auto + qed + qed +qed + +lemma contraction_with_completion: \K = Cn(A) \ K \
\ = K \
\<^bsub>* \\<^sub>A\<^sub>G\<^sub>M\<^esub> \\ + by (simp add: contraction_is_pmc pmc_completion) + +end + +(* in case of doubt uncomment one of these\ +sublocale AGMC_SC \ PMC_SC where selection = \\<^sub>A\<^sub>G\<^sub>M\<^sub>C + by (unfold_locales) + +sublocale PMC_SC \ AGMC_SC where contraction = \\A \. A \
\<^bsub>\\<^esub> \\ + by (unfold_locales) +*) + +locale TRMC_SC = TransitivelyRelationalMeetContraction + PMC_SC where selection = \\<^sub>T\<^sub>R + +begin +text \A transitively relational selection function respects conjunctive overlap.\ +lemma rel_sel_conj_overlap: \K = Cn(A) \ \\<^sub>T\<^sub>R K (\ .\. \) \ \\<^sub>T\<^sub>R K \ \ \\<^sub>T\<^sub>R K \\ +proof(intro subsetI) + fix B + assume a:\K = Cn(A)\ and b:\B \ \\<^sub>T\<^sub>R K (\ .\. \)\ + show \B \ \\<^sub>T\<^sub>R K \ \ \\<^sub>T\<^sub>R K \\ (is ?A) + proof(cases \\ \ \ \ \ \ \ K \ \ \ \ K \ \\, elim disjE) + assume \\ \\ + hence c:\Cn({\ .\. \}) = Cn({\})\ + using conj_equiv valid_Cn_equiv valid_def by blast + from b show ?A + by (metis Un_iff a c extensional_selection) + next + assume \\ \\ + hence c:\Cn({\ .\. \}) = Cn({\})\ + by (simp add: Cn_conj_bis Cn_same validD_L) + from b show ?A + by (metis Un_iff a c extensional_selection) + next + assume \\ K \ \\ + then show ?A + by (metis UnI1 a b conjE1_PL is_selection nonconsequence_remainder nonempty_selection tautology_selection subset_singletonD) + next + assume \\ K \ \\ + then show ?A + by (metis UnI2 a b conjE2_PL is_selection nonconsequence_remainder nonempty_selection tautology_selection subset_singletonD) + next + assume d:\\ (\ \ \ \ \ \ \ K \ \ \ \ K \ \)\ + hence h:\K .\. \ \ {}\ and i:\K .\. \ \ {}\ and j:\K .\. (\ .\. \) \ {}\ and k:"K \ \ .\. \" + using d emptyrem2taut valid_conj_PL apply auto + by (meson Supraclassical_logic.conjI_PL Supraclassical_logic_axioms d) + show ?A + using remainder_conj[OF a k] b h i j rel_sel by auto + qed +qed + +text\A transitively relational meet contraction respects conjunctive overlap.\ +lemma trmc_conj_overlap: \K = Cn(A) \ (K \
\<^bsub>\\<^sub>T\<^sub>R\<^esub> \) \ (K \
\<^bsub>\\<^sub>T\<^sub>R\<^esub> \) \ (K \
\<^bsub>\\<^sub>T\<^sub>R\<^esub> (\ .\. \))\ + unfolding mc using rel_sel_conj_overlap by blast + +text\A transitively relational selection function respects conjunctive inclusion\ +lemma rel_sel_conj_inclusion: \K = Cn(A) \ \\<^sub>T\<^sub>R K (\ .\. \) \ (K .\. \) \ {} \ \\<^sub>T\<^sub>R K \ \ \\<^sub>T\<^sub>R K (\ .\. \)\ +proof(intro subsetI) + fix B + assume a:\K = Cn(A)\ and b:\\\<^sub>T\<^sub>R K (\ .\. \) \ (K .\. \) \ {}\ and c:\B \ \\<^sub>T\<^sub>R K \\ + show \B \ \\<^sub>T\<^sub>R K (\ .\. \)\ (is ?A) + proof(cases \\ \ \ \ \ \ \ K \ \ \ \ K \ \\, auto) + assume \\ \\ + then show ?A + using b taut2emptyrem by auto + next + assume \\ \\ + hence \Cn({\ .\. \}) = Cn({\})\ + by (simp add: Cn_conj_bis Cn_same validD_L) + then show ?A + using a c extensional_selection by blast + next + assume d:\\ \ Cn K\ + with d show ?A + by (metis Int_emptyI Tarskian_logic.nonconsequence_remainder Tarskian_logic_axioms a b c idempotency_L + inf_bot_right is_selection nonempty_selection singletonD subset_singletonD) + next + assume d:\\ \ Cn K\ + hence e:\(\ .\. \) \ Cn K\ + by (meson Supraclassical_logic.conjE2_PL Supraclassical_logic_axioms) + hence f:\\\<^sub>T\<^sub>R K (\ .\. \) = {K}\ + by (metis Tarskian_logic.nonconsequence_remainder Tarskian_logic_axioms a insert_not_empty is_selection + nonempty_selection subset_singletonD) + with b have g:\(K .\. \) = {K}\ + unfolding nonconsequence_remainder[symmetric] using rem by auto + with d f show ?A + using a c is_selection by fastforce + next + assume d:\\ \ \\ and e:\\ \ \\ and f:\\ \ Cn K\ and g:\\ \ Cn K\ + hence h:\K .\. \ \ {}\ and i:\K .\. \ \ {}\ and j:\K .\. (\ .\. \) \ {}\ and k:"K \ \ .\. \" + using e d emptyrem2taut valid_conj_PL apply auto + by (meson Supraclassical_logic.conjI_PL Supraclassical_logic_axioms f g) + have o:\B \ K .\. \ \ B \ K .\. (\ .\. \)\ for B + using a k remainder_conj by auto + from b obtain B' where l:\B' \ K .\. (\ .\. \)\ and m:\\C\K .\. (\ .\. \). 
C \\<^bsub>K\<^esub> B'\ and n:\\ \ B'\ + apply (auto simp add:mc rel_sel j) + using assumption_L rem by force + have p:\B' \ K .\. \\ + apply(simp add: rem) + by (metis (no_types, lifting) Supraclassical_logic.conjE1_PL Supraclassical_logic_axioms + Tarskian_logic.rem Tarskian_logic_axioms a l mem_Collect_eq n rem_closure) + from c show ?A + apply (simp add:rel_sel o j h) + using m p trans_rel a by blast + qed +qed + +text\A transitively relational meet contraction respects conjunctive inclusion\ +lemma trmc_conj_inclusion: \K = Cn(A) \ \ \ (K \
\<^bsub>\\<^sub>T\<^sub>R\<^esub> (\ .\. \)) \ ((K \
\<^bsub>\\<^sub>T\<^sub>R\<^esub> (\ .\. \) \ (K \
\<^bsub>\\<^sub>T\<^sub>R\<^esub> \)))\ +proof - + assume a:\K = Cn(A)\ and b:\\ \ (K \
\<^bsub>\\<^sub>T\<^sub>R\<^esub> (\ .\. \))\ + then obtain B where c:\B \ \\<^sub>T\<^sub>R K (\ .\. \)\ and d:\\ B \ \\ apply(simp add:mc) + by (metis b emptyrem2taut is_selection pmc_tautology rem_closure subset_iff validD_L valid_conj_PL) + hence \B \ (K .\. \)\ + using remainder_recovery_bis[OF a _ d, of \\ .\. \\] + by (metis (no_types, hide_lams) a conj_PL emptyrem2taut insert_not_empty is_selection + nonconsequence_remainder subsetD taut2emptyrem) + with c have e:\\\<^sub>T\<^sub>R K (\ .\. \) \ (K .\. \) \ {}\ by blast + then show \((K \
\<^bsub>\\<^sub>T\<^sub>R\<^esub> (\ .\. \) \ (K \
\<^bsub>\\<^sub>T\<^sub>R\<^esub> \)))\ + unfolding mc using rel_sel_conj_inclusion[OF a e] by blast +qed + +text\As a transitively relational meet contraction has been proven to respect all postulates of AGM full contraction, +the equivalence between the two is straightforward\ +sublocale AGM_FullContraction where contraction = \\A \. A \
\<^bsub>\\<^sub>T\<^sub>R\<^esub> \\ + using trmc_conj_inclusion trmc_conj_overlap + by (unfold_locales, simp_all) + +end + + +locale AGMFC_SC = AGM_FullContraction + AGMC_SC + +begin + +text\An AGM relation is defined as follows:\ +definition AGM_relation::\'a set \ 'a set \ 'a set \ bool\ + where AGM_rel: \AGM_relation C K B \ (C = K \ B = K) \ ( (\\. K \ \ \ C \ K .\. \) + \ (\\. K \ \ \ B \ K .\. \ \ K \
\ \ B) + \ (\\. (K \ \ \ C \ K .\. \ \ B \ K .\. \ \ K \
\ \ C) \ K \
\ \ B))\ +text\An AGM relational selection is defined as a function that returns @{term \K\} if the set of remainders @{text \K .\. \\} is empty, and otherwise +the best elements of the remainders according to an AGM relation\ +definition AGM_relational_selection::\'a set \ 'a \ 'a set set\ (\\\<^sub>A\<^sub>G\<^sub>M\<^sub>T\<^sub>R\) + where AGM_rel_sel: \\\<^sub>A\<^sub>G\<^sub>M\<^sub>T\<^sub>R K \ \ if (K .\. \) = {} + then {K} + else {B. B\(K .\. \) \ (\C\(K .\. \). AGM_relation C K B)}\ + +lemma AGM_rel_sel_completion: \K = Cn(A) \ \\<^sub>A\<^sub>G\<^sub>M\<^sub>T\<^sub>R K \ = * \\<^sub>A\<^sub>G\<^sub>M K \\ + apply (unfold AGM_rel_sel, simp add:completion_def split: if_splits) +proof(auto simp add:AGM_sel) + fix S B C + assume a:\S \ Cn(A) .\. \\ and b:\B \ Cn(A) .\. \\ and c:\\ {B \ Cn(A) .\. \. Cn(A) \
\ \ B} \ B\ + and d:\C \ Cn(A) .\. \\ + hence e:\\ \ Cn(A) \
\\ + using Tarskian_logic.taut2emptyrem Tarskian_logic_axioms contract_success by fastforce + show \AGM_relation C (Cn(A)) B\ + proof(cases \\ \ Cn(A)\) + case True + { fix \ + assume \Cn A \
\ \ C\ + hence \Cn A \
(\ .\. \) \ Cn A \
\\ + using contract_conj_inclusion_variant[of \Cn(A)\ A \ \] + by (metis (mono_tags, lifting) assumption_L contract_conj_inclusion d mem_Collect_eq rem subset_iff) + } note f = this + { fix \ \' + assume g:\\ \ Cn A \
\'\ and h:\B \ Cn A .\. \'\ and j:\Cn A \
\' \ C\ and i:\\ \ B\ + hence \\' .\. \ \ Cn A \
\'\ + using Supraclassical_logic.disjI2_PL Supraclassical_logic_axioms contract_closure by fastforce + hence k:\\' .\. \ \ Cn A \
\\ + using contract_conj_overlap_variant[of \Cn(A)\ A \' \] f[OF j] + by (metis IntI Supraclassical_logic.disjI1_PL Supraclassical_logic_axioms conj_com_Cn + contract_extensionality inclusion_L singletonI subsetD) + hence l:\Cn A \
\ \ B\ using c by auto + from k l have m:\\' .\. \ \ B\ and n:\B =Cn(B)\ + using b rem_closure by blast+ + have \B \ {\} \ \'\ using g h i + by (simp add:rem) (metis contract_inclusion insertI1 insert_subsetI psubsetI subsetD subset_insertI) + with n m have \B \ \'\ + by (metis Cn_equiv assumption_L disjE_PL disj_com equiv_PL imp_PL) + with h have False + using assumption_L rem by auto + } note g = this + with True show ?thesis + apply(unfold AGM_rel, rule_tac disjI2) + using d b c by (auto simp add:AGM_rel idempotency_L del:subsetI) blast+ + next + case False + then show ?thesis + by (metis AGM_rel b d idempotency_L infer_def nonconsequence_remainder singletonD) + qed +next + fix S B \ + assume a:\S \ Cn(A) .\. \\ and b:\B \ Cn(A) .\. \\ and c:\\C\Cn A .\. \. AGM_relation C (Cn A) B\ + and d:\\C'. C' \ Cn A .\. \ \ Cn A \
\ \ C' \ \ \ C'\ + then show \\ \ B\ + unfolding AGM_rel + by (metis (no_types, lifting) AGM_sel empty_Collect_eq insert_Diff insert_not_empty + nonconsequence_remainder nonempty_selection singletonD) +qed + +text\An AGM relational selection based on an AGM relation forms a transitively relational meet contraction\ +sublocale TransitivelyRelationalMeetContraction where relation = AGM_relation and rel_sel = \\\<^sub>A\<^sub>G\<^sub>M\<^sub>T\<^sub>R\ +proof(unfold_locales, simp_all (no_asm) only:atomize_eq, goal_cases) + case a:(1 K A C B' B) \ \A very difficult proof that requires the literature and heavy Isabelle automation!\ + from a(2,3) show ?case + unfolding AGM_rel apply(elim disjE conjE, simp_all) + proof(intro disjI2 allI impI, elim exE conjE, goal_cases) + case (1 \ _ _ \) + have b:\B \ K .\. (\ .\. \)\ and c:\B' \ K .\. (\ .\. \)\ and d:\C \ K .\. (\ .\. \)\ + using remainder_conj[OF a(1)] 1 conjI_PL by auto + hence e:\K \
(\ .\. \) \ B\ + using contract_conj_inclusion_variant[OF a(1), of \ \] + by (meson "1"(1) "1"(12) "1"(16) "1"(2) "1"(3) "1"(8) Supraclassical_logic.conj_PL + Supraclassical_logic_axioms dual_order.trans) + { fix \ + assume f:\\ \ K \
\\ + have \\ .\. \ \ (K \
\) \ Cn {\}\ + by (metis Int_iff Supraclassical_logic.disjI1_PL Supraclassical_logic.disjI2_PL Supraclassical_logic_axioms + f a(1) contract_closure in_mono inclusion_L singletonI) + hence g:\\ .\. \ \ B\ + using contract_conj_overlap_variant[OF a(1), of \] + by (metis AGM_Contraction.contract_extensionality AGM_Contraction_axioms a(1) conj_com_Cn e in_mono) + have \\ .\. \ \ B\ + by (metis a(1) "1"(10) "1"(15) "1"(16) assumption_L f in_mono infer_def rem_closure rem_inclusion remainder_recovery) + with g have \\ \ B\ + by (metis 1(15) a(1) disjE_PL infer_def order_refl rem_closure validD_L valid_Cn_imp) + } + then show ?case by blast + qed +next + case (2 K A \) + hence \* \\<^sub>A\<^sub>G\<^sub>M K \ \ {}\ + using nonempty_selection[OF 2(1), of \] selection_completion[OF 2(1), of \] by blast + then show ?case + using AGM_rel_sel_completion[OF 2(1), of \] AGM_rel_sel 2(1,2) by force +next + case (3 K \) + then show ?case using AGM_rel_sel_completion AGM_rel_sel by simp +qed + +\ \This works all by itself! ==> I do not see where these lemmas are used\ +lemmas fullcontraction_is_pmc = contraction_is_pmc +lemmas fullcontraction_is_trmc = contraction_with_completion + +end + + +locale FMC_SC = FullMeetContraction + TRMC_SC + +begin + +lemma full_meet_weak1: \K = Cn(A) \ K \ \ \ (K \
\<^bsub>\\<^sub>F\<^sub>C\<^esub> \) = K \ Cn({.\ \})\ +proof(intro subset_antisym Int_greatest) + assume a:\K = Cn(A)\ and b:\K \ \\ + then show \ (K \
\<^bsub>\\<^sub>F\<^sub>C\<^esub> \) \ K\ + by (simp add: Inf_less_eq full_sel mc rem_inclusion) +next + assume a:\K = Cn(A)\ and b:\K \ \\ + show \(K \
\<^bsub>\\<^sub>F\<^sub>C\<^esub> \) \ Cn({.\ \})\ + proof + fix \ + assume c:\\ \ (K \
\<^bsub>\\<^sub>F\<^sub>C\<^esub> \)\ + { assume \\ {.\ \} \ \\ + hence \\ {.\ \} \ \\ + by (metis Un_insert_right insert_is_Un not_PL notnot_PL) + hence \\ {\ .\. .\ \} \ \\ + by (metis assumption_L disjI2_PL singleton_iff transitivity2_L) + then obtain B where d:\{\ .\. .\ \} \ B\ and e:\B \ K .\. \\ + by (metis a b disjI1_PL empty_subsetI idempotency_L infer_def insert_subset upper_remainder) + hence f:\\ \ \ B\ + by (metis (no_types, lifting) CollectD assumption_L insert_subset disj_notE_PL rem) + hence \\ \ \ (K \
\<^bsub>\\<^sub>F\<^sub>C\<^esub> \)\ + using e mc full_sel by auto + } + then show \\ \ Cn({.\ \})\ + using c infer_def by blast + qed +next + assume a:\K = Cn(A)\ and b:\K \ \\ + show \K \ Cn({.\ \}) \ (K \
\<^bsub>\\<^sub>F\<^sub>C\<^esub> \)\ + proof(safe) + fix \ + assume c:\\ \ K\ and d: \\ \ Cn {.\ \}\ + have e:\B \ .\ \ .\. \\ for B + by (simp add: d validD_L valid_imp_PL) + { fix B + assume f:\B \ K .\. \\ + hence \B \ \ .\. \\ + using a assumption_L c remainder_recovery by auto + then have f:\B \ \\ using d e + using disjE_PL ex_mid_PL by blast + } + then show \\ \ (K \
\<^bsub>\\<^sub>F\<^sub>C\<^esub> \)\ + apply(simp_all add:mc c full_sel) + using a rem_closure by blast + qed +qed + +lemma full_meet_weak2:\K = Cn(A) \ K \ \ \ Cn((K \
\<^bsub>\\<^sub>F\<^sub>C\<^esub> \) \ {.\ \}) = Cn({.\ \})\ + unfolding full_meet_weak1 + by (metis Cn_union idempotency_L inf.cobounded2 sup.absorb_iff2 sup_commute) + +end + +end diff --git a/thys/Belief_Revision/AGM_Logic.thy b/thys/Belief_Revision/AGM_Logic.thy new file mode 100755 --- /dev/null +++ b/thys/Belief_Revision/AGM_Logic.thy @@ -0,0 +1,480 @@ +(*<*) +\\ ******************************************************************** + * Project : AGM Theory + * Version : 1.0 + * + * Authors : Valentin Fouillard, Safouan Taha, Frederic Boulanger + and Nicolas Sabouret + * + * This file : AGM logics + * + * Copyright (c) 2021 Université Paris Saclay, France + * + ******************************************************************************\ + +theory AGM_Logic + +imports Main + +begin +(*>*) + +section \Introduction\ + +text\ +The 1985 paper by Carlos Alchourrón, Peter Gärdenfors, +and David Makinson (AGM), “On the Logic of Theory Change: Partial Meet +Contraction and Revision Functions” @{cite "alchourron1985logic"} launched a large and +rapidly growing literature that employs formal models and logics to handle the changing beliefs of a rational agent +and to take into account new pieces of information observed by this agent. +In 2011, a review book titled "AGM 25 Years: Twenty-Five Years of Research in Belief Change" +was published to summarize the first twenty-five years of work based on AGM +@{cite "Ferme2011"}. + +According to Google Scholar, the original AGM paper was cited 4000 times! +This HOL-based AFP entry is a faithful formalization of the logic operators (e.g. contraction, revision, remainder \dots ) +axiomatized in the AGM paper. It also contains the proofs of all the theorems stated in the paper that show how these operators combine. +Proofs of both the Harper and Levi identities are established. +\ + +text\A belief state can be considered as a consistent set of beliefs (logical propositions) closed under logical reasoning. 
+Belief changes represent the operations applied to a belief state to remove some of its beliefs and/or to add new beliefs (propositions). +In the latter case, other beliefs may be affected by these changes (to preserve consistency, for example). +AGM defines several postulates to guarantee that such operations preserve consistency, meaning that the agent remains rational. +Three kinds of operators are defined: +\<^item> The contraction @{text \\
\} : where a proposition is removed from a belief set +\<^item> The expansion @{text \\\} : where a proposition is added to a belief set +\<^item> The revision @{text \\<^bold>*\} : where a proposition is added to a belief set such that the belief set remains consistent +\ + +text\In this AFP entry, there are four theory files: +\<^enum> The AGM Logic file contains a classification of logics used in the AGM framework. +\<^enum> The AGM Remainder file defines an important operator used in the AGM framework. +\<^enum> The AGM Contraction file contains the postulates of the AGM contraction and its relation with the meet contraction. +\<^enum> The AGM Revision file contains the postulates of the AGM revision and its relation with the meet revision. +\ + +section \Logics\ + +text\The AGM framework depends on the underlying logic used to express beliefs. AGM requires at least a Tarskian propositional calculus. +If this logic is also supra-classical and/or compact, new properties are established and the main theorems of AGM are strengthened. +To model AGM it is therefore important to start by formalizing this underlying logic and its various extensions. +We opted for a deep embedding in HOL, which required the redefinition of all the logical operators and an axiomatization of their rules. +This is certainly not efficient in terms of proofs, but it gives us total control over our formalization and an assurance +that the logic used has no hidden properties depending on the Isabelle/HOL implementation. 
+We use the Isabelle \<^emph>\locales\ feature and take advantage of the inheritance/extension mechanisms between locales.\ + + +subsection \Tarskian Logic\ + +text \ +The first locale formalizes a Tarskian logic based on Tarski's famous consequence operator: @{term \Cn(A)\}, +which gives the set of all propositions (\<^bold>\closure\) that can be inferred from the set of propositions @{term \A\}. +Exactly as it is classically axiomatized in the literature, three assumptions of the locale define the consequence operator. +\ +locale Tarskian_logic = +fixes Cn::\'a set \ 'a set\ +assumes monotonicity_L: \A \ B \ Cn(A) \ Cn(B)\ + and inclusion_L: \A \ Cn(A)\ + and transitivity_L: \Cn(Cn(A)) \ Cn(A)\ + +\ \ + Short notation for ``@{term \\\} can be inferred from the propositions in @{term \A\}''. +\ +fixes infer::\'a set \ 'a \ bool\ (infix \\\ 50) +defines \A \ \ \ \ \ Cn(A)\ + +\ \ + @{term \\\} is valid (a tautology) if it can be inferred from nothing. +\ +fixes valid::\'a \ bool\ (\\\) +defines \\ \ \ {} \ \\ + +\ \ + @{term \A \ \\} is all that can be inferred from @{term \A\} and @{term \\\}. +\ +fixes expansion::\'a set \ 'a \ 'a set\ (infix \\\ 57) +defines \A \ \ \ Cn(A \ {\})\ + +begin + +lemma idempotency_L: \Cn(Cn(A)) = Cn(A)\ + by (simp add: inclusion_L transitivity_L subset_antisym) + +lemma assumption_L: \\ \ A \ A \ \\ + using inclusion_L infer_def by blast + +lemma validD_L: \\ \ \ \ \ Cn(A)\ + using monotonicity_L valid_def infer_def by fastforce + +lemma valid_expansion: \K = Cn(A) \ \ \ \ K \ \ = K\ + by (simp add: idempotency_L insert_absorb validD_L valid_def expansion_def) + +lemma transitivity2_L: + assumes \\\ \ B. 
A \ \\ + and \B \ \\ + shows \A \ \\ +proof - + from assms(1) have \B \ Cn(A)\ by (simp add: infer_def subsetI) + hence \Cn(B) \ Cn(A)\ using idempotency_L monotonicity_L by blast + moreover from assms(2) have \\ \ Cn(B)\ by (simp add: infer_def) + ultimately show ?thesis using infer_def by blast +qed + +lemma Cn_same: \(Cn(A) = Cn(B)) \ (\C. A \ Cn(C) \ B \ Cn(C))\ +proof + { assume h:\Cn(A) = Cn(B)\ + from h have \\\ \ B. A \ \\ + by (simp add: Tarskian_logic.assumption_L Tarskian_logic_axioms infer_def) + moreover from h[symmetric] have \\\ \ A. B \ \\ + by (simp add: Tarskian_logic.assumption_L Tarskian_logic_axioms infer_def) + ultimately have \\C. A \ Cn(C) \ B \ Cn(C)\ + using h idempotency_L inclusion_L monotonicity_L by blast + } thus \Cn(A) = Cn(B) \ \C. (A \ Cn(C)) = (B \ Cn(C))\ . +next + { assume h:\\C. (A \ Cn(C)) = (B \ Cn(C))\ + from h have \(A \ Cn(A)) = (B \ Cn(A))\ and \(A \ Cn(B)) = (B \ Cn(B))\ by simp+ + hence \B \ Cn(A)\ and \A \ Cn(B)\ by (simp add: inclusion_L)+ + hence \Cn(A) = Cn(B)\ + using idempotency_L monotonicity_L by blast + } thus \(\C. (A \ Cn(C)) = (B \ Cn(C))) \ Cn(A) = Cn(B)\ . +qed + +\ \ +The closure of the union of two consequence closures. +\ +lemma Cn_union: \Cn(Cn(A) \ Cn(B)) = Cn(A \ B)\ +proof + have \Cn(Cn(A) \ Cn(B)) \ Cn(Cn (A \ B))\ by (simp add: monotonicity_L) + thus \Cn(Cn(A) \ Cn(B)) \ Cn(A \ B)\ by (simp add: idempotency_L) +next + have \(A \ B) \ (Cn(A) \ Cn(B))\ using inclusion_L by blast + thus \Cn(A \ B) \ Cn(Cn(A) \ Cn(B))\ by (simp add: monotonicity_L) +qed + +\ \ +The closure of an infinite union of consequence closures. +\ +lemma Cn_Union: \Cn(\{Cn(B)|B. P B}) = Cn(\{B. 
P B})\ (is \?A = ?B\) +proof + have \?A \ Cn ?B\ + apply(rule monotonicity_L, rule Union_least, auto) + by (metis Sup_upper in_mono mem_Collect_eq monotonicity_L) + then show \?A \ ?B\ + by (simp add: idempotency_L) +next + show \?B \ ?A\ + by (metis (mono_tags, lifting) Union_subsetI inclusion_L mem_Collect_eq monotonicity_L) +qed + +\ \ +The intersection of two closures is closed. +\ +lemma Cn_inter: \K = Cn(A) \ Cn(B) \ K = Cn(K)\ +proof - + { fix K assume h:\K = Cn(A) \ Cn(B)\ + from h have \K \ Cn(A)\ and \K \ Cn(B)\ by simp+ + hence \Cn(K) \ Cn(A)\ and \Cn(K) \ Cn(B)\ using idempotency_L monotonicity_L by blast+ + hence \Cn(K) \ Cn(A) \ Cn(B)\ by simp + with h have \K = Cn(K)\ by (simp add: inclusion_L subset_antisym) + } thus \K = Cn(A) \ Cn(B) \ K = Cn(K)\ . +qed + +\ \ +An infinite intersection of closures is closed. +\ +lemma Cn_Inter: \K = \{Cn(B)|B. P B} \ K = Cn(K)\ +proof - + { fix K assume h:\K = \{Cn(B)|B. P B}\ + from h have \\B. P B \ K \ Cn(B)\ by blast + hence \\B. P B \ Cn(K) \ Cn(B)\ using idempotency_L monotonicity_L by blast + hence \Cn(K) \ \{Cn(B)|B. P B}\ by blast + with h have \K = Cn(K)\ by (simp add: inclusion_L subset_antisym) + } thus \K = \{Cn(B)|B. P B} \ K = Cn(K)\ . +qed + +end + +subsection \Supraclassical Logic\ + +text \ +A Tarskian logic has only one abstract operator capturing the notion of consequence. A basic case of such a logic is a \<^bold>\Supraclassical\ logic, i.e. +a logic with all classical propositional operators (e.g. conjunction (\\\), implication (\\\), negation (\\\) \dots ) together with their classical semantics. + +We define a new locale. In order to distinguish the propositional operators of our supraclassical logic from those of Isabelle/HOL, we use dots (e.g. \.\.\ stands for conjunction). +We axiomatize the introduction and elimination rules of each operator as it is commonly established in the classical literature. 
As explained before, +we give priority to complete control of our logic rather than an efficient shallow embedding in Isabelle/HOL.\ + +locale Supraclassical_logic = Tarskian_logic + + +fixes true_PL:: \'a\ (\\\) + and false_PL:: \'a\ (\\\) + and imp_PL:: \'a \ 'a \ 'a\ (infix \.\.\ 55) + and not_PL:: \'a \ 'a\ (\.\\) + and conj_PL:: \'a \ 'a \ 'a\ (infix \.\.\ 55) + and disj_PL:: \'a \ 'a \ 'a\ (infix \.\.\ 55) + and equiv_PL:: \'a \ 'a \ 'a\ (infix \.\.\ 55) + +assumes true_PL: \A \ \\ + + and false_PL: \{\} \ p\ + + and impI_PL: \A \ {p} \ q \ A \ (p .\. q)\ + and mp_PL: \A \ p .\. q \ A \ p \ A \ q\ + + and notI_PL: \A \ p .\. \ \ A \ .\ p\ + and notE_PL: \A \ .\ p \ A \ (p .\. \)\ + + and conjI_PL: \A \ p \ A \ q \ A \ (p .\. q)\ + and conjE1_PL: \A \ p .\. q \ A \ p\ + and conjE2_PL: \A \ p .\. q \ A \ q\ + + and disjI1_PL: \A \ p \ A \ (p .\. q)\ + and disjI2_PL: \A \ q \ A \ (p .\. q)\ + and disjE_PL: \A \ p .\. q \ A \ p .\. r \ A \ q.\. r \ A \ r\ + + and equivI_PL: \A \ p .\. q \ A \ q .\. p \ A \ (p .\. q)\ + and equivE1_PL: \A \ p .\. q \ A \ p .\. q\ + and equivE2_PL: \A \ p .\. q \ A \ q .\. p\ + +\ \non-intuitionistic rules\ + and absurd_PL: \A \ .\ (.\ p) \ A \ p\ + and ex_mid_PL: \A \ p .\. (.\ p)\ + +begin + +text \In the following, we first derive the semantics of the classical logic operators from the previous introduction and elimination rules\ + +lemma non_consistency: + assumes \A \ .\ p\ + and \A \ p\ + shows \A \ q\ + by (metis assms(1) assms(2) false_PL mp_PL notE_PL singleton_iff transitivity2_L) + +\ \this direct result immediately brings many remarkable properties of implication (e.g. transitivity)\ +lemma imp_PL: \A \ p .\. 
q \ A \ {p} \ q\ + apply (intro iffI impI_PL) + apply(rule mp_PL[where p=p], meson UnI1 assumption_L transitivity2_L) + using assumption_L by auto + +lemma not_PL: \A \ .\ p \ A \ {p} \ \\ + using notE_PL notI_PL imp_PL by blast + +\ \Classical logic result\ +lemma notnot_PL: \A \ .\ (.\ p) \ A \ p\ + apply(rule iffI, simp add:absurd_PL) + by (meson mp_PL notE_PL Un_upper1 Un_upper2 assumption_L infer_def monotonicity_L not_PL singletonI subsetD) + +lemma conj_PL: \A \ p .\. q \ (A \ p \ A \ q)\ + using conjE1_PL conjE2_PL conjI_PL by blast + +lemma disj_PL: \A \ p .\. q \ A \ {.\ p} \ q\ +proof + assume a:\A \ p .\. q\ + have b:\A \ p .\. (.\ p .\. q)\ + by (intro impI_PL) (meson Un_iff assumption_L insertI1 non_consistency) + have c:\A \ q .\. (.\ p .\. q)\ + by (simp add: assumption_L impI_PL) + from a b c have \A \ .\ p .\. q\ + by (erule_tac disjE_PL) simp_all + then show \A \ {.\ p} \ q\ + using imp_PL by blast +next + assume a:\A \ {.\ p} \ q\ + hence b:\A \ .\ p .\. q\ by (simp add: impI_PL) + then show \A \ p .\. q\ + apply(rule_tac disjE_PL[OF ex_mid_PL, of A p \p .\. q\]) + by (auto simp add: assumption_L disjI2_PL disjI1_PL impI_PL imp_PL) +qed + +lemma equiv_PL:\A \ p .\. q \ (A \ {p} \ q \ A \ {q} \ p)\ + using imp_PL equivE1_PL equivE2_PL equivI_PL by blast + +corollary valid_imp_PL: \\ (p .\. q) = ({p} \ q)\ + and valid_not_PL: \\ (.\ p) = ({p} \ \)\ + and valid_conj_PL: \\ (p .\. q) = (\ p \ \ q)\ + and valid_disj_PL: \\ (p .\. q) = ({.\ p} \ q)\ + and valid_equiv_PL:\\ (p .\. 
q) = ({p} \ q \ {q} \ p)\ + using imp_PL not_PL conj_PL disj_PL equiv_PL valid_def by auto + +text\Second, we will combine each logical operator with the consequence operator \Cn\: it is a trick to profit from set theory to get many essential +lemmas without complex inferences\ +declare infer_def[simp] + +lemma nonemptyCn: \Cn(A) \ {}\ + using true_PL by auto + +lemma Cn_true: \Cn({\}) = Cn({})\ + using Cn_same true_PL by auto + +lemma Cn_false: \Cn({\}) = UNIV\ + using false_PL by auto + +lemma Cn_imp: \A \ (p .\. q) \ Cn({q}) \ Cn(A \ {p})\ + and Cn_imp_bis: \A \ (p .\. q) \ Cn(A \ {q}) \ Cn(A \ {p})\ + using Cn_same imp_PL idempotency_L inclusion_L infer_def subset_insertI by force+ + +lemma Cn_not: \A \ .\ p \ Cn(A \ {p}) = UNIV\ + using Cn_false Cn_imp notE_PL not_PL by fastforce + +lemma Cn_conj: \A \ (p .\. q) \ Cn({p}) \ Cn({q}) \ Cn(A)\ + apply(intro iffI conjI_PL, frule conjE1_PL, frule conjE2_PL) + using Cn_same Un_insert_right bot.extremum idempotency_L inclusion_L by auto + +lemma Cn_conj_bis: \Cn({p .\. q}) = Cn({p, q})\ + by (unfold Cn_same) + (meson Supraclassical_logic.conj_PL Supraclassical_logic_axioms insert_subset) + +lemma Cn_disj: \A \ (p .\. q) \ Cn({q}) \ Cn(A \ {.\ p})\ + and Cn_disj_bis: \A \ (p .\. q) \ Cn(A \ {q}) \ Cn(A \ {.\ p})\ + using disj_PL Cn_same imp_PL idempotency_L inclusion_L infer_def subset_insertI by force+ + +lemma Cn_equiv: \A \ (p .\. q) \ Cn(A \ {p}) = Cn(A \ {q})\ + by (metis Cn_imp_bis equivE1_PL equivE2_PL equivI_PL set_eq_subset) + +corollary valid_nonemptyCn: \Cn({}) \ {}\ + and valid_Cn_imp: \\ (p .\. q) \ Cn({q}) \ Cn({p})\ + and valid_Cn_not: \\ (.\ p) \ Cn({p}) = UNIV\ + and valid_Cn_conj: \\ (p .\. q) \ Cn({p}) \ Cn({q}) \ Cn({})\ + and valid_Cn_disj: \\ (p .\. q) \ Cn({q}) \ Cn({.\ p})\ + and valid_Cn_equiv: \\ (p .\. 
q) \ Cn({p}) = Cn({q})\ + using nonemptyCn Cn_imp Cn_not Cn_conj Cn_disj Cn_equiv valid_def by auto + +\ \Finally, we group additional lemmas that were essential in further proofs\ +lemma consistency: \Cn({p}) \ Cn({.\ p}) = Cn({})\ +proof + { fix q + assume \{p} \ q\ and \{.\ p} \ q\ + hence "{} \ p .\. q" and "{} \ (.\ p) .\. q" + using impI_PL by auto + hence \{} \ q\ + using ex_mid_PL by (rule_tac disjE_PL[where p=p and q=\.\ p\]) blast + } + then show \Cn({p}) \ Cn({.\ p}) \ Cn({})\ by (simp add: subset_iff) +next + show \Cn({}) \ Cn({p}) \ Cn({.\ p})\ by (simp add: monotonicity_L) +qed + +lemma Cn_notnot: \Cn({.\ (.\ \)}) = Cn({\})\ + by (metis (no_types, hide_lams) notnot_PL valid_Cn_equiv valid_equiv_PL) + +lemma conj_com: \A \ p .\. q \ A \ q .\. p\ + using conj_PL by auto + +lemma conj_com_Cn: \Cn({p .\. q}) = Cn({q .\. p})\ + by (simp add: Cn_conj_bis insert_commute) + +lemma disj_com: \A \ p .\. q \ A \ q .\. p\ +proof - + { fix p q + have \A \ p .\. q \ A \ q .\. p\ + apply(erule disjE_PL) + using assumption_L disjI2_PL disjI1_PL impI_PL by auto + } + then show ?thesis by auto +qed + +lemma disj_com_Cn: \Cn({p .\. q}) = Cn({q .\. p})\ + unfolding Cn_same using disj_com by simp + +lemma imp_contrapos: \A \ p .\. q \ A \ .\ q .\. .\ p\ + by (metis Cn_not Un_insert_left Un_insert_right imp_PL notnot_PL) + +lemma equiv_negation: \A \ p .\. q \ A \ .\ p .\. .\ q\ + using equivE1_PL equivE2_PL equivI_PL imp_contrapos by blast + +lemma imp_trans: \A \ p .\.q \ A \ q .\.r \ A \ p .\.r\ + using Cn_imp_bis by auto + +lemma imp_recovery0: \A \ p .\. (p .\. q)\ + apply(subst disj_PL, subst imp_contrapos) + using assumption_L impI_PL by auto + +lemma imp_recovery1: \A \ {p .\. q} \ p \ A \ p\ + using disjE_PL[OF imp_recovery0, of A p p q] assumption_L imp_PL by auto + +lemma imp_recovery2: \A \ p .\. q \ A \ (q .\. p) .\. p \ A \ q\ + using imp_PL imp_recovery1 imp_trans by blast + +lemma impI2: \A \ q \ A \ p .\. 
q\ + by (meson assumption_L impI_PL in_mono sup_ge1 transitivity2_L) + +lemma conj_equiv: \A \ p \ A \ ((p .\. q) .\. q)\ + by (metis Un_insert_right assumption_L conjE2_PL conjI_PL equiv_PL impI2 imp_PL insertI1 sup_bot.right_neutral) + +lemma conj_imp: \A \ (p .\. q) .\. r \ A \ p .\. (q .\. r)\ +proof + assume "A \ (p .\. q) .\. r" + then have "Cn (A \ {r}) \ Cn (A \ {p, q})" + by (metis (no_types) Cn_conj_bis Cn_imp_bis Cn_union Un_insert_right sup_bot.right_neutral) + then show \A \ p .\. (q .\. r)\ + by (metis Un_insert_right impI_PL inclusion_L infer_def insert_commute insert_subset subset_eq sup_bot.right_neutral) +next + assume "A \ p .\. (q .\. r)" + then have "A \ {p} \ {q} \ r" + using imp_PL by auto + then show "A \ (p .\. q) .\. r" + by (metis (full_types) Cn_conj_bis Cn_union impI_PL infer_def insert_is_Un sup_assoc) +qed + +lemma conj_not_impE_PL: \A \ (p .\. q) .\. r \ A \ (p .\. .\ q) .\. r \ A \ p .\. r\ + by (meson conj_imp disjE_PL ex_mid_PL imp_PL) + +lemma disj_notE_PL: \A \ q \ A \ p .\. .\ q \ A \ p\ + using Cn_imp Cn_imp_bis Cn_not disjE_PL notnot_PL by blast + +lemma disj_not_impE_PL: \A \ (p .\. q) .\. r \ A \ (p .\. .\ q) .\. r \ A \ r\ + by (metis Un_insert_right disjE_PL disj_PL disj_com ex_mid_PL insert_commute sup_bot.right_neutral) + +lemma imp_conj: \A \ p .\. q \ A \ r .\. s \ A \ (p .\. r) .\. (q .\. s)\ + apply(intro impI_PL conjI_PL, unfold imp_PL[symmetric]) + by (meson assumption_L conjE1_PL conjE2_PL imp_trans infer_def insertI1 validD_L valid_imp_PL)+ + +lemma conj_overlap: \A \ (p .\. q) \ A \ (p .\. ((.\ p) .\. q))\ + by (meson conj_PL disjI2_PL disj_com disj_notE_PL) + +lemma morgan: \A \ .\ (p .\. q) \ A \ (.\ p) .\. (.\ q)\ + by (meson conj_imp disj_PL disj_com imp_PL imp_contrapos notE_PL notI_PL) + +lemma conj_superexpansion1: \A \ .\ (p .\. q) .\. .\ p \ A \ .\ p\ + using conj_PL disjI1_PL morgan by auto + +lemma conj_superexpansion2: \A \ (p .\. q) .\. 
p \ A \ p\ + using conj_PL disjI1_PL by auto + +end + +subsection \Compact Logic\ +text\A logic is compact when any result is based on a finite set of hypotheses\ +locale Compact_logic = Tarskian_logic + + assumes compactness_L: \A \ \ \ (\A'. A'\ A \ finite A' \ A'\ \)\ + +begin + +text \A very important lemma preparing the application of Zorn's lemma. It states that in a compact logic, we cannot deduce \\\ +if we accumulate infinitely many groups of hypotheses which locally do not deduce phi\ +lemma chain_closure: \\ \ \ \ subset.chain {B. \ B \ \} C \ \ \C \ \\ +proof + assume a:\subset.chain {B. \ B \ \} C\ and b:\\ \ \\ and \\ C \ \\ + then obtain A' where c:\A'\ \ C\ and d:\finite A'\ and e:\A' \ \\ using compactness_L by blast + define f where f:\f \ \a. SOME B. B \ C \ a \ B\ + have g:\finite (f ` A')\ using f d by simp + have h:\(f ` A') \ C\ + unfolding f by auto (metis (mono_tags, lifting) Union_iff c someI_ex subset_eq) + have i:\subset.chain {B. \ B \ \} (f ` A')\ using a h + using a h by (meson subsetD subset_chain_def subset_trans) + have \A' \ {} \ \ (f ` A') \ {B. 
\ B \ \}\ using g i + by (meson Union_in_chain image_is_empty subset_chain_def subset_eq) + hence j:\A' \ {} \ \ \(f ` A') \ \\ by simp + have \A' \ \(f ` A')\ + unfolding f by auto (metis (no_types, lifting) Union_iff c someI_ex subset_iff) + with j e b show False + by (metis infer_def monotonicity_L subsetD valid_def) +qed + +end + +end + + diff --git a/thys/Belief_Revision/AGM_Remainder.thy b/thys/Belief_Revision/AGM_Remainder.thy new file mode 100644 --- /dev/null +++ b/thys/Belief_Revision/AGM_Remainder.thy @@ -0,0 +1,167 @@ +(*<*) +\\ ******************************************************************** + * Project : AGM Theory + * Version : 1.0 + * + * Authors : Valentin Fouillard, Safouan Taha, Frederic Boulanger + and Nicolas Sabouret + * + * This file : AGM Remainders + * + * Copyright (c) 2021 Université Paris Saclay, France + * + ******************************************************************************\ + +theory AGM_Remainder + +imports AGM_Logic + +begin + +(*>*) + + + +section \Remainders\ + +text\In AGM, one important feature is to eliminate some proposition from a set of propositions by ensuring +that the set of retained clauses is maximal and that nothing among these clauses allows to retrieve the eliminated proposition\ + +subsection \Remainders in a Tarskian logic\ +text \In a general context of a Tarskian logic, we consider a descriptive definition (by comprehension)\ +context Tarskian_logic + +begin +definition remainder::\'a set \ 'a \ 'a set set\ (infix \.\.\ 55) + where rem: \A .\. \ \ {B. B \ A \ \ B \ \ \ (\B'\ A. B \ B' \ B' \ \)}\ + +lemma rem_inclusion: \B \ A .\. \ \ B \ A\ + by (auto simp add:rem split:if_splits) + +lemma rem_closure: "K = Cn(A) \ B \ K .\. \ \ B = Cn(B)" + apply(cases \K .\. \ = {}\, simp) + by (simp add:rem infer_def) (metis idempotency_L inclusion_L monotonicity_L psubsetI) + +lemma remainder_extensionality: \Cn({\}) = Cn({\}) \ A .\. \ = A .\. 
\\ + unfolding rem infer_def apply safe + by (simp_all add: Cn_same) blast+ + +lemma nonconsequence_remainder: \A .\. \ = {A} \ \ A \ \\ + unfolding rem by auto + +\ \As we will see further, the other direction requires compactness!\ +lemma taut2emptyrem: \\ \ \ A .\. \ = {}\ + unfolding rem by (simp add: infer_def validD_L) + +end + +subsection \Remainders in a supraclassical logic\ +text\In the case of a supraclassical logic, remainders acquire impressive properties\ +context Supraclassical_logic + +begin + +\ \As an effect of being maximal, a remainder keeps the eliminated proposition as a hypothesis of its propositions\ +lemma remainder_recovery: \K = Cn(A) \ K \ \ \ B \ K .\. \ \ B \ \ .\. \\ +proof - + { fix \ and B + assume a:\K = Cn(A)\ and c:\\ \ K\ and d:\B \ K .\. \\ and e:\\ .\. \ \ Cn(B)\ + with a have f:\\ .\. \ \ K\ using impI2 infer_def by blast + with d e have \\ \ Cn(B \ {\ .\. \})\ + apply (simp add:rem, elim conjE) + by (metis dual_order.order_iff_strict inclusion_L insert_subset) + with d have False using rem imp_recovery1 + by (metis (no_types, lifting) CollectD infer_def) + } + thus \K = Cn(A) \ K \ \ \ B \ K .\. \ \ B \ \ .\. \\ + using idempotency_L by auto +qed + +\ \When you remove some proposition \\\, several other propositions can be lost. +An important lemma states that the resulting remainder is also a remainder of any lost proposition\ +lemma remainder_recovery_bis: \K = Cn(A) \ K \ \ \ \ B \ \ \ B \ K .\. \ \ B \ K .\. \\ +proof- + assume a:\K = Cn(A)\ and b:\\ B \ \\ and c:\B \ K .\. \\ and d:\K \ \\ + hence d:\B \ \ .\. \\ using remainder_recovery by simp + with c show \B \ K .\. \\ + by (simp add:rem) (meson b dual_order.trans infer_def insert_subset monotonicity_L mp_PL order_refl psubset_imp_subset) +qed + +corollary remainder_recovery_imp: \K = Cn(A) \ K \ \ \ \ (\ .\. \) \ B \ K .\. \ \ B \ K .\. 
\\ + apply(rule remainder_recovery_bis, simp_all) + by (simp add:rem) (meson infer_def mp_PL validD_L) + +\ \If we integrate back the eliminated proposition into the remainder, we retrieve the original set!\ +lemma remainder_expansion: \K = Cn(A) \ K \ \ \ \ B \ \ \ B \ K .\. \ \ B \ \ = K\ +proof + assume a:\K = Cn(A)\ and b:\K \ \\ and c:\\ B \ \\ and d:\B \ K .\. \\ + then show \B \ \ \ K\ + by (metis Un_insert_right expansion_def idempotency_L infer_def insert_subset + monotonicity_L rem_inclusion sup_bot.right_neutral) +next + assume a:\K = Cn(A)\ and b:\K \ \\ and c:\\ B \ \\ and d:\B \ K .\. \\ + { fix \ + assume \\ \ K\ + hence e:\B \ \ .\.\\ using remainder_recovery[OF a _ d, of \] assumption_L by blast + have \\ \ K\ using a b idempotency_L infer_def by blast + hence f:\B \ {\} \ \\ using b c d apply(simp add:rem) + by (meson inclusion_L insert_iff insert_subsetI less_le_not_le subset_iff) + from e f have \B \ {\} \ \\ using imp_PL imp_trans by blast + } + then show \K \ B \ \\ + by (simp add: expansion_def subsetI) +qed + +text\To eliminate a conjunction, we only need to remove one side\ +lemma remainder_conj: \K = Cn(A) \ K \ \ .\. \ \ K .\. (\ .\. \) = (K .\. \) \ (K .\. \)\ + apply(intro subset_antisym Un_least subsetI, simp add:rem) + apply (meson conj_PL infer_def) + using remainder_recovery_imp[of K A \\ .\. \\ \] + apply (meson assumption_L conjE1_PL singletonI subsetI valid_imp_PL) + using remainder_recovery_imp[of K A \\ .\. \\ \] + by (meson assumption_L conjE2_PL singletonI subsetI valid_imp_PL) + +end + +subsection \Remainders in a compact logic\ +text\In the case of a compact logic, remainders enjoy further properties\ +context Compact_logic +begin + +text \The following lemma is Lindenbaum's lemma, which requires Zorn's lemma (already available in standard Isabelle/HOL). + For more details, please refer to the book "Theory of logical calculi" @{cite wojcicki2013theory}. 
+This very important lemma states that we can get a maximal set (remainder \B'\) starting from any set +\B\ if the latter does not infer the proposition \\\ we want to eliminate\ +lemma upper_remainder: \B \ A \ \ B \ \ \ \B'. B \ B' \ B' \ A .\. \\ +proof - + assume a:\B \ A\ and b:\\ B \ \\ + have c:\\ \ \\ + using b infer_def validD_L by blast + define \ where "\ \ {B'. B \ B' \ B' \ A \ \ B' \ \}" + have d:\subset.chain \ C \ subset.chain {B. \ B \ \} C\ for C + unfolding \_def + by (simp add: le_fun_def less_eq_set_def subset_chain_def) + have e:\C \ {} \ subset.chain \ C \ B \ \ C\ for C + by (metis (no_types, lifting) \_def subset_chain_def less_eq_Sup mem_Collect_eq subset_iff) + { fix C + assume f:\C \ {}\ and g:\subset.chain \ C\ + have \\ C \ \\ + using \_def e[OF f g] chain_closure[OF c d[OF g]] + by simp (metis (no_types, lifting) CollectD Sup_least Sup_subset_mono g subset.chain_def subset_trans) + } note f=this + have \subset.chain \ C \ \U\\. \X\C. X \ U\ for C + apply (cases \C \ {}\) + apply (meson Union_upper f) + using \_def a b by blast + with subset_Zorn[OF this, simplified] obtain B' where f:\B'\ \ \ (\X\\. B' \ X \ X = B')\ by auto + then show ?thesis + by (simp add:rem \_def, rule_tac x=B' in exI) (metis psubsetE subset_trans) +qed + +\ \An immediate corollary concerning tautologies\ +corollary emptyrem2taut: \A .\. 
\ = {} \ \ \\ + by (metis bot.extremum empty_iff upper_remainder valid_def) + +end + +end diff --git a/thys/Belief_Revision/AGM_Revision.thy b/thys/Belief_Revision/AGM_Revision.thy new file mode 100755 --- /dev/null +++ b/thys/Belief_Revision/AGM_Revision.thy @@ -0,0 +1,236 @@ +(*<*) +\\ ******************************************************************** + * Project : AGM Theory + * Version : 1.0 + * + * Authors : Valentin Fouillard, Safouan Taha, Frederic Boulanger + and Nicolas Sabouret + * + * This file : AGM revision + * + * Copyright (c) 2021 Université Paris Saclay, France + * + ******************************************************************************\ + +theory AGM_Revision + +imports AGM_Contraction + +begin +(*>*) + +section \Revisions\ +text \The third operator of belief change introduced by the AGM framework is revision. In revision, a sentence +@{term \\\} is added to the belief set @{term \K\} in such a way that other sentences +of @{term \K\} are removed if needed, so that the result is consistent\ + +subsection \AGM revision postulates\ + +text \The revision operator is denoted by the symbol @{text \\<^bold>*\} and respects the following conditions: +\<^item> @{text \revis_closure\} : a belief set @{term \K\} revised by @{term \\\} should be logically closed +\<^item> @{text \revis_inclusion\} : a belief set @{term \K\} revised by @{term \\\} should be a subset of @{term \K\} expanded by @{term \\\} +\<^item> @{text \revis_vacuity\} : if @{text \\\\} is not in @{term \K\}, then the revision of @{term \K\} by @{term \\\} is equivalent to the expansion of @{term \K\} by @{term \\\} +\<^item> @{text \revis_success\} : a belief set @{term \K\} revised by @{term \\\} should contain @{term \\\} +\<^item> @{text \revis_extensionality\} : extensionality guarantees that the logic of revision is extensional, in the sense of allowing logically equivalent sentences to be freely substituted for each other +\<^item> @{text \revis_consistency\} : a belief
set @{term \K\} revised by @{term \\\} is consistent if @{term \\\} is consistent\ +locale AGM_Revision = Supraclassical_logic + + +fixes revision:: \'a set \ 'a \ 'a set\ (infix \\<^bold>*\ 55) + +assumes revis_closure: \K = Cn(A) \ K \<^bold>* \ = Cn(K \<^bold>* \)\ + and revis_inclusion: \K = Cn(A) \ K \<^bold>* \ \ K \ \\ + and revis_vacuity: \K = Cn(A) \ .\ \ \ K \ K \ \ \ K \<^bold>* \\ + and revis_success: \K = Cn(A) \ \ \ K \<^bold>* \\ + and revis_extensionality: \K = Cn(A) \ Cn({\}) = Cn({\}) \ K \<^bold>* \ = K \<^bold>* \\ + and revis_consistency: \K = Cn(A) \ .\ \ \ Cn({}) \ \ \ K \<^bold>* \\ + +text\A full revision is defined by two more postulates: +\<^item> @{text \revis_superexpansion\} : An element of @{text \ K \<^bold>* (\ .\. \)\} is also an element of @{term \K\} revised by @{term \\\} and expanded by @{term \\\} +\<^item> @{text \revis_subexpansion\} : An element of @{text \(K \<^bold>* \) \ \\} is also an element of @{term \K\} revised by @{text \\ .\. \\} if @{text \(K \<^bold>* \)\} does not imply @{text \\ \\} +\ +locale AGM_FullRevision = AGM_Revision + + assumes revis_superexpansion: \K = Cn(A) \ K \<^bold>* (\ .\. \) \ (K \<^bold>* \) \ \\ + and revis_subexpansion: \K = Cn(A) \ .\ \ \ (K \<^bold>* \) \ (K \<^bold>* \) \ \ \ K \<^bold>* (\ .\. \)\ + +begin + +\ \important lemmas/corollaries that can replace the previous assumptions\ +corollary revis_superexpansion_ext : \K = Cn(A) \ (K \<^bold>* \) \ (K \<^bold>* \) \ (K \<^bold>* (\ .\. \))\ +proof(intro subsetI, elim IntE) + fix \ + assume a:\K = Cn(A)\ and b:\\ \ (K \<^bold>* \)\ and c:\\ \ (K \<^bold>* \)\ + have \ Cn({(\' .\. \') .\. \'}) = Cn({\'})\ for \' \' + using conj_superexpansion2 by (simp add: Cn_same) + hence d:\K \<^bold>* \' \ (K \<^bold>* (\' .\. \')) \ \'\ for \' \' + using revis_superexpansion[OF a, of \\' .\. \'\ \'] revis_extensionality a by metis + hence \\ .\. \ \ (K \<^bold>* (\ .\. \))\ and \\ .\. \ \ (K \<^bold>* (\ .\.
\))\ + using d[of \ \] d[of \ \] revis_extensionality[OF a disj_com_Cn, of \ \] + using imp_PL a b c expansion_def revis_closure by fastforce+ + then show c:\\ \ (K \<^bold>* (\ .\. \))\ + using disjE_PL a revis_closure revis_success by fastforce +qed + +end + +subsection \Relation of AGM revision and AGM contraction\ + +text\The AGM contraction of @{term \K\} by @{term \\\} can be defined as the AGM revision of @{term \K\} by @{text \\\\} +intersected with @{term \K\} (to remove @{text \\\\} from the revised set). This definition is known as the Harper identity @{cite "Harper1976"}\ +sublocale AGM_Revision \ AGM_Contraction where contraction = \\K \. K \ (K \<^bold>* .\ \)\ +proof(unfold_locales, goal_cases) + case closure:(1 K A \) + then show ?case + by (metis Cn_inter revis_closure) +next + case inclusion:(2 K A \) + then show ?case by blast +next + case vacuity:(3 K A \) + hence \.\ (.\ \) \ K\ + using absurd_PL infer_def by blast + hence \K \ (K \<^bold>* .\ \)\ + using revis_vacuity[where \=\.\ \\] expansion_def inclusion_L vacuity(1) by fastforce + then show ?case + by fast +next + case success:(4 K A \) + hence \.\ (.\ \) \ Cn({})\ + using infer_def notnot_PL by blast + hence a:\\ \ K \<^bold>* (.\ \)\ + by (simp add: revis_consistency success(1)) + have \.\ \ \ K \<^bold>* (.\ \)\ + by (simp add: revis_success success(1)) + with a have \\ \ K \<^bold>* (.\ \)\ + using infer_def non_consistency revis_closure success(1) by blast + then show ?case + by simp +next + case recovery:(5 K A \) + show ?case + proof + fix \ + assume a:\\ \ K\ + hence b:\\ .\. \ \ K\ using impI2 recovery by auto + have \.\ \ .\. .\ \ \ K \<^bold>* .\ \\ + using impI2 recovery revis_closure revis_success by fastforce + hence \\ .\.
\ \ K \<^bold>* .\ \\ + using imp_contrapos recovery revis_closure by fastforce + with b show \\ \ Cn (K \ (K \<^bold>* .\ \) \ {\})\ + by (meson Int_iff Supraclassical_logic.imp_PL Supraclassical_logic_axioms inclusion_L subsetD) + qed +next + case extensionality:(6 K A \ \) + hence \Cn({.\ \}) = Cn({.\ \})\ + using equiv_negation[of \{}\ \ \] valid_Cn_equiv valid_def by auto + hence \(K \<^bold>* .\ \) = (K \<^bold>* .\ \)\ + using extensionality(1) revis_extensionality by blast + then show ?case by simp +qed + + +locale AGMC_S = AGM_Contraction + Supraclassical_logic + +text\The AGM revision of @{term \K\} by @{term \\\} can be defined as the AGM contraction of @{term \K\} by @{text \\\\} +followed by an expansion by @{term \\\}. This definition is known as the Levi identity @{cite "Levi1977SubjunctivesDA"}.\ +sublocale AGMC_S \ AGM_Revision where revision = \\K \. (K \
.\ \) \ \\ +proof(unfold_locales, goal_cases) + case closure:(1 K A \) + then show ?case + by (simp add: expansion_def idempotency_L) +next + case inclusion:(2 K A \) + have "K \
.\ \ \ K \ {\}" + using contract_inclusion inclusion by auto + then show ?case + by (simp add: expansion_def monotonicity_L) +next + case vacuity:(3 K A \) + then show ?case + by (simp add: contract_vacuity expansion_def) +next + case success:(4 K A \) + then show ?case + using assumption_L expansion_def by auto +next + case extensionality:(5 K A \ \) + hence \Cn({.\ \}) = Cn({.\ \})\ + using equiv_negation[of \{}\ \ \] valid_Cn_equiv valid_def by auto + hence \(K \
.\ \) = (K \
.\ \) + using contract_extensionality extensionality(1) by blast + then show ?case + by (metis Cn_union expansion_def extensionality(2)) +next + case consistency:(6 K A \) + then show ?case + by (metis contract_closure contract_success expansion_def infer_def not_PL) +qed + +text\The relationship between AGM full revision and AGM full contraction is the same as that between AGM revision and AGM contraction\ +sublocale AGM_FullRevision \ AGM_FullContraction where contraction = \\K \. K \ (K \<^bold>* .\ \)\ +proof(unfold_locales, goal_cases) + case conj_overlap:(1 K A \ \) + have a:\Cn({.\ (\ .\. \)}) = Cn({(.\ \) .\. (.\ \)})\ + using Cn_same morgan by simp + show ?case (is ?A) + using revis_superexpansion_ext[OF conj_overlap(1), of \.\ \\ \.\ \\] + revis_extensionality[OF conj_overlap(1) a] by auto +next + case conj_inclusion:(2 K A \ \) + have a:\Cn({.\ (\ .\. \) .\. .\ \}) = Cn({.\ \})\ + using conj_superexpansion1 by (simp add: Cn_same) + from conj_inclusion show ?case + proof(cases \\ \ K\) + case True + hence b:\.\ (.\ \) \ K \<^bold>* .\ (\ .\. \)\ + using absurd_PL conj_inclusion revis_closure by fastforce + show ?thesis + using revis_subexpansion[OF conj_inclusion(1) b] revis_extensionality[OF conj_inclusion(1) a] + expansion_def inclusion_L by fastforce + next + case False + then show ?thesis + by (simp add: conj_inclusion(1) contract_vacuity) + qed +qed + + +locale AGMFC_S = AGM_FullContraction + AGMC_S + +sublocale AGMFC_S \ AGM_FullRevision where revision = \\K \. (K \
.\ \) \ \\ +proof(unfold_locales, safe, goal_cases) + case super:(1 K A \ \ \) + hence a:\(\ .\. \) .\. \ \ Cn(Cn(A) \
.\ (\ .\. \))\ + using Supraclassical_logic.imp_PL Supraclassical_logic_axioms expansion_def by fastforce + have b:\(\ .\. \) .\. \ \ Cn({.\ (\ .\. \)})\ + by (meson Supraclassical_logic.imp_recovery0 Supraclassical_logic.valid_disj_PL Supraclassical_logic_axioms) + have c:\(\ .\. \) .\. \ \ Cn(A) \
(.\ (\ .\. \) .\. .\ \)\ + using contract_conj_overlap_variant[of \Cn(A)\ A \.\ (\ .\. \)\ \.\ \\] a b + using AGM_Contraction.contract_closure AGM_FullContraction_axioms AGM_FullContraction_def by fastforce + have d:\Cn({.\ (\ .\. \) .\. .\ \}) = Cn({.\ \})\ + using conj_superexpansion1 by (simp add: Cn_same) + hence e:\(\ .\. \) .\. \ \ Cn(A) \
.\ \\ + using AGM_Contraction.contract_extensionality[OF _ _ d] c + AGM_FullContraction_axioms AGM_FullContraction_def by fastforce + hence f:\\ .\. (\ .\. \) \ Cn(A) \
.\ \\ + using conj_imp AGM_Contraction.contract_closure AGM_FullContraction_axioms AGM_FullContraction_def conj_imp by fastforce + then show ?case + by (metis assumption_L expansion_def imp_PL infer_def) +next + case sub:(2 K A \ \ \) + hence a:\\ .\. (\ .\. \) \ Cn(A) \
.\ \\ + by (metis AGMC_S.axioms(1) AGMC_S_axioms AGM_Contraction.contract_closure expansion_def impI_PL infer_def revis_closure) + have b:\Cn({.\ (\ .\. \) .\. .\ \}) = Cn({.\ \})\ + using conj_superexpansion1 by (simp add: Cn_same) + have c:\.\ (\ .\. \) \ Cn A \
(.\ \)\ + using sub(1) by (metis assumption_L conj_imp expansion_def imp_PL infer_def not_PL) + have c:\Cn(A) \
.\ \ \ Cn(A) \
(.\ (\ .\. \))\ + using contract_conj_inclusion[of \Cn(A)\ A \.\ (\ .\. \)\ \.\ \\] + by (metis AGM_Contraction.contract_extensionality AGM_FullContraction.axioms(1) AGM_FullContraction_axioms b c) + then show ?case + by (metis a assumption_L conj_imp expansion_def imp_PL in_mono infer_def) +qed + + +end + + diff --git a/thys/Belief_Revision/ROOT b/thys/Belief_Revision/ROOT new file mode 100644 --- /dev/null +++ b/thys/Belief_Revision/ROOT @@ -0,0 +1,15 @@ +chapter AFP + +session "Belief_Revision" (AFP) = HOL + + options [timeout=600] + theories + AGM_Logic + AGM_Contraction + AGM_Revision + AGM_Remainder + document_files + "root.tex" + "adb-long.bib" + "root.bib" + "graph_locales.pdf" + diff --git a/thys/Belief_Revision/document/adb-long.bib b/thys/Belief_Revision/document/adb-long.bib new file mode 100644 --- /dev/null +++ b/thys/Belief_Revision/document/adb-long.bib @@ -0,0 +1,80 @@ +% $Id: adb-long.bib 6518 2010-01-24 14:18:10Z brucker $ +@PREAMBLE{ {\providecommand{\ac}[1]{\textsc{#1}} } + # {\providecommand{\acs}[1]{\textsc{#1}} } + # {\providecommand{\acf}[1]{\textsc{#1}} } + # {\providecommand{\TAP}{T\kern-.1em\lower-.5ex\hbox{A}\kern-.1em P} } + # {\providecommand{\leanTAP}{\mbox{\sf lean\it\TAP}} } + # {\providecommand{\holz}{\textsc{hol-z}} } + # {\providecommand{\holocl}{\textsc{hol-ocl}} } + # {\providecommand{\isbn}{\textsc{isbn}} } + # {\providecommand{\Cpp}{C++} } + # {\providecommand{\Specsharp}{Spec\#} } + # {\providecommand{\doi}[1]{\href{http://dx.doi.org/#1}{doi: + {\urlstyle{rm}\nolinkurl{#1}}}}} } +@STRING{conf-tphols="\acs{tphols}" } +@STRING{iso = {International Organization for Standardization} } +@STRING{j-ar = "Journal of Automated Reasoning" } +@STRING{j-cacm = "Communications of the \acs{acm}" } +@STRING{j-acta-informatica = "Acta Informatica" } +@STRING{j-sosym = "Software and Systems Modeling" } +@STRING{j-sttt = "International Journal on Software Tools for Technology" } +@STRING{j-ist = "Information and Software Technology" } 
+@STRING{j-toplas= "\acs{acm} Transactions on Programming Languages and + Systems" } +@STRING{j-tosem = "\acs{acm} Transactions on Software Engineering and + Methodology" } +@STRING{j-eceasst="Electronic Communications of the \acs{easst}" } +@STRING{j-fac = "Formal Aspects of Computing" } +@STRING{j-ucs = "Journal of Universal Computer Science" } +@STRING{j-sl = "Journal of Symbolic Logic" } +@STRING{j-fp = "Journal of Functional Programming" } +@STRING{j-tkde = {\acs{ieee} Transaction on Knowledge and Data Engineering} } +@STRING{j-tse = {\acs{ieee} Transaction on Software Engineering} } +@STRING{j-entcs = {Electronic Notes in Theoretical Computer Science} } +@STRING{s-lnai = "Lecture Notes in Computer Science" } +@STRING{s-lncs = "Lecture Notes in Computer Science" } +@STRING{s-lnbip = "Lecture Notes in Business Information Processing" } +@String{j-computer = "Computer"} +@String{j-tissec = "\acs{acm} Transactions on Information and System Security"} +@STRING{omg = {Object Management Group} } +@STRING{j-ipl = {Information Processing Letters} } +@STRING{j-login = ";login: the USENIX Association newsletter" } + +@STRING{PROC = "Proceedings of the " } + + +% Publisher: +% ========== +@STRING{pub-awl = {Addison-Wesley Longman, Inc.} } +@STRING{pub-awl:adr={Reading, MA, \acs{usa}} } +@STRING{pub-springer={Springer-Verlag} } +@STRING{pub-springer:adr={Heidelberg} } +@STRING{pub-cup = {Cambridge University Press} } +@STRING{pub-cup:adr={New York, \acs{ny}, \acs{usa}} } +@STRING{pub-mit = {\acs{mit} Press} } +@STRING{pub-mit:adr={Cambridge, Massachusetts} } +@STRING{pub-springer-ny={Springer-Verlag} } +, +@STRING{pub-springer-netherlands={Springer Netherlands} } +@STRING{pub-springer-netherlands:adr={} } +@STRING{pub-springer-ny:adr={New York, \acs{ny}, \acs{usa}} } +@STRING{pub-springer-london={Springer-Verlag} } +@STRING{pub-springer-london:adr={London} } +@STRING{pub-ieee= {\acs{ieee} Computer Society} } +@STRING{pub-ieee:adr={Los Alamitos, \acs{ca}, \acs{usa}} } 
+@STRING{pub-prentice={Prentice Hall, Inc.} } +@STRING{pub-prentice:adr={Upper Saddle River, \acs{nj}, \acs{usa}} } +@STRING{pub-acm = {\acs{acm} Press} } +@STRING{pub-acm:adr={New York, \acs{ny} \acs{usa}} } +@STRING{pub-oxford={Oxford University Press, Inc.} } +@STRING{pub-oxford:adr={New York, \acs{ny}, \acs{usa}} } +@STRING{pub-kluwer={Kluwer Academic Publishers} } +@STRING{pub-kluwer:adr={Dordrecht} } +@STRING{pub-elsevier={Elsevier Science Publishers} } +@STRING{pub-elsevier:adr={Amsterdam} } +@STRING{pub-north={North-Holland Publishing Co.} } +@STRING{pub-north:adr={Nijmegen, The Netherlands} } +@STRING{pub-ios = {\textsc{ios} Press} } +@STRING{pub-ios:adr={Amsterdam, The Netherlands} } +@STRING{pub-heise={Heise Zeitschriften Verlag} } +@STRING{pub-heise:adr={Hannover, Germany} } diff --git a/thys/Belief_Revision/document/graph_locales.pdf b/thys/Belief_Revision/document/graph_locales.pdf new file mode 100755 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..de82b63fc62377704afc6b23a3a81da12b89981e GIT binary patch literal 13771
+ %\, \, \, \, \, + %\, \, \ + +%\usepackage[greek,english]{babel} + %option greek for \ + %option english (default language) for \, \ + +%\usepackage[latin1]{inputenc} + %for \, \, \, \, + %\, \, \ + +%\usepackage[only,bigsqcap]{stmaryrd} + %for \ + +%\usepackage{eufrak} + %for \ ... \, \
\ (also included in amssymb) + +%\usepackage{textcomp} + %for \, \ + +% this should be the last package used +\usepackage{pdfsetup} + +% urls in roman style, theory text in math-similar italics +\urlstyle{rm} +\isabellestyle{it} + +\begin{document} + +% sane default for proof documents +\parindent 0pt\parskip 0.5ex + +\title{Belief Revision Theory} +\author{ Valentin Fouillard \and Safouan TAHA \and Frederic Boulanger \and Nicolas Sabouret} +\maketitle + +\newpage + +\begin{abstract} +The 1985 paper by Carlos Alchourrón, Peter Gärdenfors, +and David Makinson (AGM), “On the Logic of Theory Change: Partial Meet Contraction and Revision Functions” launched a large and rapidly growing literature that employs formal models and logics to handle the changing beliefs of a rational agent and to take into account new pieces of information observed by this agent. In 2011, a review book titled "AGM 25 Years: Twenty-Five Years of Research in Belief Change" was published to summarize the first twenty-five years of work based on AGM. + +This HOL-based AFP entry is a faithful formalization of the AGM operators (e.g. contraction, revision, remainder ...) axiomatized in the original paper. It also contains the proofs of all the theorems stated in the paper that show how these operators combine. Proofs of both the Harper and Levi identities are established.
+\end{abstract} + +\newpage + +\tableofcontents + +\newpage + +\includegraphics{session_graph.pdf} + +\newpage + +% generated text of all theories +\input{session} + +% optional bibliography +\bibliographystyle{abbrv} +\bibliography{adb-long,root} + +\end{document} + + +%%% Local Variables: +%%% mode: latex +%%% TeX-master: t +%%% End: diff --git a/thys/Correctness_Algebras/Approximation.thy b/thys/Correctness_Algebras/Approximation.thy new file mode 100644 --- /dev/null +++ b/thys/Correctness_Algebras/Approximation.thy @@ -0,0 +1,114 @@ +(* Title: Approximation + Author: Walter Guttmann + Maintainer: Walter Guttmann +*) + +section \Approximation\ + +theory Approximation + +imports Stone_Kleene_Relation_Algebras.Iterings + +begin + +class apx = + fixes apx :: "'a \ 'a \ bool" (infix "\" 50) + +class apx_order = apx + + assumes apx_reflexive: "x \ x" + assumes apx_antisymmetric: "x \ y \ y \ x \ x = y" + assumes apx_transitive: "x \ y \ y \ z \ x \ z" + +sublocale apx_order < apx: order where less_eq = apx and less = "\x y . x \ y \ \ y \ x" + apply unfold_locales + apply simp + apply (rule apx_reflexive) + using apx_transitive apply blast + by (simp add: apx_antisymmetric) + +context apx_order +begin + +abbreviation the_apx_least_fixpoint :: "('a \ 'a) \ 'a" ("\ _" [201] 200) where "\ f \ apx.the_least_fixpoint f" +abbreviation the_apx_least_prefixpoint :: "('a \ 'a) \ 'a" ("p\ _" [201] 200) where "p\ f \ apx.the_least_prefixpoint f" + +definition is_apx_meet :: "'a \ 'a \ 'a \ bool" where "is_apx_meet x y z \ z \ x \ z \ y \ (\w . w \ x \ w \ y \ w \ z)" +definition has_apx_meet :: "'a \ 'a \ bool" where "has_apx_meet x y \ \z . is_apx_meet x y z" +definition the_apx_meet :: "'a \ 'a \ 'a" (infixl "\" 66) where "x \ y \ THE z . is_apx_meet x y z" + +lemma apx_meet_unique: + "has_apx_meet x y \ \!z . 
is_apx_meet x y z" + by (meson apx_antisymmetric has_apx_meet_def is_apx_meet_def) + +lemma apx_meet: + assumes "has_apx_meet x y" + shows "is_apx_meet x y (x \ y)" +proof - + have "is_apx_meet x y (THE z . is_apx_meet x y z)" + by (metis apx_meet_unique assms theI) + thus ?thesis + by (simp add: the_apx_meet_def) +qed + +lemma apx_greatest_lower_bound: + "has_apx_meet x y \ (w \ x \ w \ y \ w \ x \ y)" + by (meson apx_meet apx_transitive is_apx_meet_def) + +lemma apx_meet_same: + "is_apx_meet x y z \ z = x \ y" + using apx_meet apx_meet_unique has_apx_meet_def by blast + +lemma apx_meet_char: + "is_apx_meet x y z \ has_apx_meet x y \ z = x \ y" + using apx_meet_same has_apx_meet_def by auto + +end + +class apx_biorder = apx_order + order +begin + +lemma mu_below_kappa: + "has_least_fixpoint f \ apx.has_least_fixpoint f \ \ f \ \ f" + using apx.mu_unfold is_least_fixpoint_def least_fixpoint by auto + +lemma kappa_below_nu: + "has_greatest_fixpoint f \ apx.has_least_fixpoint f \ \ f \ \ f" + by (meson apx.mu_unfold greatest_fixpoint is_greatest_fixpoint_def) + +lemma kappa_apx_below_mu: + "has_least_fixpoint f \ apx.has_least_fixpoint f \ \ f \ \ f" + using apx.is_least_fixpoint_def apx.least_fixpoint mu_unfold by auto + +lemma kappa_apx_below_nu: + "has_greatest_fixpoint f \ apx.has_least_fixpoint f \ \ f \ \ f" + by (metis apx.is_least_fixpoint_def apx.least_fixpoint nu_unfold) + +end + +class apx_semiring = apx_biorder + idempotent_left_semiring + L + + assumes apx_L_least: "L \ x" + assumes sup_apx_left_isotone: "x \ y \ x \ z \ y \ z" + assumes mult_apx_left_isotone: "x \ y \ x * z \ y * z" + assumes mult_apx_right_isotone: "x \ y \ z * x \ z * y" +begin + +lemma sup_apx_right_isotone: + "x \ y \ z \ x \ z \ y" + by (simp add: sup_apx_left_isotone sup_commute) + +lemma sup_apx_isotone: + "w \ y \ x \ z \ w \ x \ y \ z" + by (meson apx_transitive sup_apx_left_isotone sup_apx_right_isotone) + +lemma mult_apx_isotone: + "w \ y \ x \ z \ w * x \ y * z" + by (meson 
apx_transitive mult_apx_left_isotone mult_apx_right_isotone) + +lemma affine_apx_isotone: + "apx.isotone (\x . y * x \ z)" + by (simp add: apx.isotone_def mult_apx_right_isotone sup_apx_left_isotone) + +end + +end + diff --git a/thys/Correctness_Algebras/Base.thy b/thys/Correctness_Algebras/Base.thy new file mode 100644 --- /dev/null +++ b/thys/Correctness_Algebras/Base.thy @@ -0,0 +1,228 @@ +(* Title: Base + Author: Walter Guttmann + Maintainer: Walter Guttmann +*) + +section \Base\ + +theory Base + +imports Stone_Relation_Algebras.Semirings + +begin + +class while = + fixes while :: "'a \ 'a \ 'a" (infixr "\" 59) + +class n = + fixes n :: "'a \ 'a" + +class diamond = + fixes diamond :: "'a \ 'a \ 'a" ("| _ > _" [50,90] 95) + +class box = + fixes box :: "'a \ 'a \ 'a" ("| _ ] _" [50,90] 95) + +context ord +begin + +definition ascending_chain :: "(nat \ 'a) \ bool" + where "ascending_chain f \ \n . f n \ f (Suc n)" + +definition descending_chain :: "(nat \ 'a) \ bool" + where "descending_chain f \ \n . f (Suc n) \ f n" + +definition directed :: "'a set \ bool" + where "directed X \ X \ {} \ (\x\X . \y\X . \z\X . x \ z \ y \ z)" + +definition co_directed :: "'a set \ bool" + where "co_directed X \ X \ {} \ (\x\X . \y\X . \z\X . z \ x \ z \ y)" + +definition chain :: "'a set \ bool" + where "chain X \ \x\X . \y\X . 
x \ y \ y \ x" + +end + +context order +begin + +lemma ascending_chain_k: + "ascending_chain f \ f m \ f (m + k)" + apply (induct k) + apply simp + using le_add1 lift_Suc_mono_le ord.ascending_chain_def by blast + +lemma ascending_chain_isotone: + "ascending_chain f \ m \ k \ f m \ f k" + using lift_Suc_mono_le ord.ascending_chain_def by blast + +lemma ascending_chain_comparable: + "ascending_chain f \ f k \ f m \ f m \ f k" + by (meson ascending_chain_isotone linear) + +lemma ascending_chain_chain: + "ascending_chain f \ chain (range f)" + by (simp add: ascending_chain_comparable chain_def) + +lemma chain_directed: + "X \ {} \ chain X \ directed X" + by (metis chain_def directed_def) + +lemma ascending_chain_directed: + "ascending_chain f \ directed (range f)" + by (simp add: ascending_chain_chain chain_directed) + +lemma descending_chain_k: + "descending_chain f \ f (m + k) \ f m" + apply (induct k) + apply simp + using le_add1 lift_Suc_antimono_le ord.descending_chain_def by blast + +lemma descending_chain_antitone: + "descending_chain f \ m \ k \ f k \ f m" + using descending_chain_def lift_Suc_antimono_le by blast + +lemma descending_chain_comparable: + "descending_chain f \ f k \ f m \ f m \ f k" + by (meson descending_chain_antitone nat_le_linear) + +lemma descending_chain_chain: + "descending_chain f \ chain (range f)" + by (simp add: descending_chain_comparable chain_def) + +lemma chain_co_directed: + "X \ {} \ chain X \ co_directed X" + by (metis chain_def co_directed_def) + +lemma descending_chain_codirected: + "descending_chain f \ co_directed (range f)" + by (simp add: chain_co_directed descending_chain_chain) + +end + +context semilattice_sup +begin + +lemma ascending_chain_left_sup: + "ascending_chain f \ ascending_chain (\n . x \ f n)" + using ascending_chain_def sup_right_isotone by blast + +lemma ascending_chain_right_sup: + "ascending_chain f \ ascending_chain (\n . 
f n \<squnion> x)"
+  using ascending_chain_def sup_left_isotone by auto
+
+lemma descending_chain_left_add:
+  "descending_chain f \<Longrightarrow> descending_chain (\<lambda>n . x \<squnion> f n)"
+  using descending_chain_def sup_right_isotone by blast
+
+lemma descending_chain_right_add:
+  "descending_chain f \<Longrightarrow> descending_chain (\<lambda>n . f n \<squnion> x)"
+  using descending_chain_def sup_left_isotone by auto
+
+primrec pSum0 :: "(nat \<Rightarrow> 'a) \<Rightarrow> nat \<Rightarrow> 'a"
+  where "pSum0 f 0 = f 0"
+      | "pSum0 f (Suc m) = pSum0 f m \<squnion> f m"
+
+lemma pSum0_below:
+  "\<forall>i . f i \<le> x \<Longrightarrow> pSum0 f m \<le> x"
+  apply (induct m)
+  by auto
+
+end
+
+context non_associative_left_semiring
+begin
+
+lemma ascending_chain_left_mult:
+  "ascending_chain f \<Longrightarrow> ascending_chain (\<lambda>n . x * f n)"
+  by (simp add: mult_right_isotone ord.ascending_chain_def)
+
+lemma ascending_chain_right_mult:
+  "ascending_chain f \<Longrightarrow> ascending_chain (\<lambda>n . f n * x)"
+  by (simp add: mult_left_isotone ord.ascending_chain_def)
+
+lemma descending_chain_left_mult:
+  "descending_chain f \<Longrightarrow> descending_chain (\<lambda>n . x * f n)"
+  by (simp add: descending_chain_def mult_right_isotone)
+
+lemma descending_chain_right_mult:
+  "descending_chain f \<Longrightarrow> descending_chain (\<lambda>n . f n * x)"
+  by (simp add: descending_chain_def mult_left_isotone)
+
+end
+
+context complete_lattice
+begin
+
+lemma sup_Sup:
+  "A \<noteq> {} \<Longrightarrow> sup x (Sup A) = Sup ((sup x) ` A)"
+  apply (rule order.antisym)
+  apply (meson ex_in_conv imageI SUP_upper2 Sup_mono sup.boundedI sup_left_divisibility sup_right_divisibility)
+  by (meson SUP_least Sup_upper sup_right_isotone)
+
+lemma sup_SUP:
+  "Y \<noteq> {} \<Longrightarrow> sup x (SUP y\<in>Y . f y) = (SUP y\<in>Y. sup x (f y))"
+  apply (subst sup_Sup)
+  by (simp_all add: image_image)
+
+lemma inf_Inf:
+  "A \<noteq> {} \<Longrightarrow> inf x (Inf A) = Inf ((inf x) ` A)"
+  apply (rule order.antisym)
+  apply (meson INF_greatest Inf_lower inf.sup_right_isotone)
+  by (simp add: INF_inf_const1)
+
+lemma inf_INF:
+  "Y \<noteq> {} \<Longrightarrow> inf x (INF y\<in>Y . f y) = (INF y\<in>Y. inf x (f y))"
+  apply (subst inf_Inf)
+  by (simp_all add: image_image)
+
+lemma SUP_image_id[simp]:
+  "(SUP x\<in>f`A . x) = (SUP x\<in>A . f x)"
+  by simp
+
+lemma INF_image_id[simp]:
+  "(INF x\<in>f`A . x) = (INF x\<in>A . f x)"
+  by simp
+
+end
+
+lemma image_Collect_2:
+  "f ` { g x | x . P x } = { f (g x) | x . P x }"
+  by auto
+
+text \<open>The following instantiation and four lemmas are from Jose Divason Mallagaray.\<close>
+
+instantiation "fun" :: (type, type) power
+begin
+
+definition one_fun :: "'a \<Rightarrow> 'a"
+  where one_fun_def: "one_fun \<equiv> id"
+
+definition times_fun :: "('a \<Rightarrow> 'a) \<Rightarrow> ('a \<Rightarrow> 'a) \<Rightarrow> ('a \<Rightarrow> 'a)"
+  where times_fun_def: "times_fun \<equiv> comp"
+
+instance
+  by intro_classes
+
+end
+
+lemma id_power:
+  "id^m = id"
+  apply (induct m)
+  apply (simp add: one_fun_def)
+  by (simp add: times_fun_def)
+
+lemma power_zero_id:
+  "f^0 = id"
+  by (simp add: one_fun_def)
+
+lemma power_succ_unfold:
+  "f^Suc m = f \<circ> f^m"
+  by (simp add: times_fun_def)
+
+lemma power_succ_unfold_ext:
+  "(f^Suc m) x = f ((f^m) x)"
+  by (simp add: times_fun_def)
+
+end
+
diff --git a/thys/Correctness_Algebras/Binary_Iterings.thy b/thys/Correctness_Algebras/Binary_Iterings.thy
new file mode 100644
--- /dev/null
+++ b/thys/Correctness_Algebras/Binary_Iterings.thy
@@ -0,0 +1,1000 @@
+(* Title:      Binary Iterings
+   Author:     Walter Guttmann
+   Maintainer: Walter Guttmann
+*)
+
+section \<open>Binary Iterings\<close>
+
+theory Binary_Iterings
+
+imports Base
+
+begin
+
+class binary_itering = idempotent_left_zero_semiring + while +
+  assumes while_productstar: "(x * y) \<star> z = z \<squnion> x * ((y * x) \<star> (y * z))"
+  assumes while_sumstar: "(x \<squnion> y) \<star> z = (x \<star> y) \<star> (x \<star> z)"
+  assumes while_left_dist_sup: "x \<star> (y \<squnion> z) = (x \<star> y) \<squnion> (x \<star> z)"
+  assumes while_sub_associative: "(x \<star> y) * z \<le> x \<star> (y * z)"
+  assumes while_simulate_left_plus: "x * z \<le> z * (y \<star> 1) \<squnion> w \<Longrightarrow> x \<star> (z * v) \<le> z * (y \<star> v) \<squnion> (x \<star> (w * (y \<star> v)))"
+  assumes while_simulate_right_plus: "z * x \<le> y * (y \<star> z) \<squnion> w \<Longrightarrow> z * (x \<star> v) \<le> y \<star> (z * v \<squnion> w * (x \<star> v))"
+begin
+
+text \<open>Theorem 9.1\<close>
+
+lemma 
while_zero:
+  "bot \<star> x = x"
+  by (metis sup_bot_right mult_left_zero while_productstar)
+
+text \<open>Theorem 9.4\<close>
+
+lemma while_mult_increasing:
+  "x * y \<le> x \<star> y"
+  by (metis le_supI2 mult.left_neutral mult_left_sub_dist_sup_left while_productstar)
+
+text \<open>Theorem 9.2\<close>
+
+lemma while_one_increasing:
+  "x \<le> x \<star> 1"
+  by (metis mult.right_neutral while_mult_increasing)
+
+text \<open>Theorem 9.3\<close>
+
+lemma while_increasing:
+  "y \<le> x \<star> y"
+  by (metis sup_left_divisibility mult_left_one while_productstar)
+
+text \<open>Theorem 9.42\<close>
+
+lemma while_right_isotone:
+  "y \<le> z \<Longrightarrow> x \<star> y \<le> x \<star> z"
+  by (metis le_iff_sup while_left_dist_sup)
+
+text \<open>Theorem 9.41\<close>
+
+lemma while_left_isotone:
+  "x \<le> y \<Longrightarrow> x \<star> z \<le> y \<star> z"
+  using sup_left_divisibility while_sumstar while_increasing by auto
+
+lemma while_isotone:
+  "w \<le> x \<Longrightarrow> y \<le> z \<Longrightarrow> w \<star> y \<le> x \<star> z"
+  by (meson order_lesseq_imp while_left_isotone while_right_isotone)
+
+text \<open>Theorem 9.17\<close>
+
+lemma while_left_unfold:
+  "x \<star> y = y \<squnion> x * (x \<star> y)"
+  by (metis mult_1_left mult_1_right while_productstar)
+
+lemma while_simulate_left_plus_1:
+  "x * z \<le> z * (y \<star> 1) \<Longrightarrow> x \<star> (z * w) \<le> z * (y \<star> w) \<squnion> (x \<star> bot)"
+  by (metis sup_bot_right mult_left_zero while_simulate_left_plus)
+
+text \<open>Theorem 11.1\<close>
+
+lemma while_simulate_absorb:
+  "y * x \<le> x \<Longrightarrow> y \<star> x \<le> x \<squnion> (y \<star> bot)"
+  by (metis while_simulate_left_plus_1 while_zero mult_1_right)
+
+text \<open>Theorem 9.10\<close>
+
+lemma while_transitive:
+  "x \<star> (x \<star> y) = x \<star> y"
+  by (metis order.eq_iff sup_bot_right sup_ge2 while_left_dist_sup while_increasing while_left_unfold while_simulate_absorb)
+
+text \<open>Theorem 9.25\<close>
+
+lemma while_slide:
+  "(x * y) \<star> (x * z) = x * ((y * x) \<star> z)"
+  by (metis mult_left_dist_sup while_productstar mult_assoc while_left_unfold)
+
+text \<open>Theorem 9.21\<close>
+
+lemma while_zero_2:
+  "(x * bot) \<star> y = x * bot \<squnion> y"
+  by (metis mult_left_zero sup_commute mult_assoc while_left_unfold)
+
+text \<open>Theorem 9.5\<close>
+
+lemma while_mult_star_exchange:
+  "x * (x \<star> y) = x \<star> (x * y)"
+  by (metis mult_left_one while_slide)
+
+text 
\<open>Theorem 9.18\<close>
+
+lemma while_right_unfold:
+  "x \<star> y = y \<squnion> (x \<star> (x * y))"
+  by (metis while_left_unfold while_mult_star_exchange)
+
+text \<open>Theorem 9.7\<close>
+
+lemma while_one_mult_below:
+  "(x \<star> 1) * y \<le> x \<star> y"
+  by (metis mult_left_one while_sub_associative)
+
+lemma while_plus_one:
+  "x \<star> y = y \<squnion> (x \<star> y)"
+  by (simp add: sup.absorb2 while_increasing)
+
+text \<open>Theorem 9.19\<close>
+
+lemma while_rtc_2:
+  "y \<squnion> x * y \<squnion> (x \<star> (x \<star> y)) = x \<star> y"
+  by (simp add: sup_absorb2 while_increasing while_mult_increasing while_transitive)
+
+text \<open>Theorem 9.6\<close>
+
+lemma while_left_plus_below:
+  "x * (x \<star> y) \<le> x \<star> y"
+  by (metis sup_right_divisibility while_left_unfold)
+
+lemma while_right_plus_below:
+  "x \<star> (x * y) \<le> x \<star> y"
+  using while_left_plus_below while_mult_star_exchange by auto
+
+lemma while_right_plus_below_2:
+  "(x \<star> x) * y \<le> x \<star> y"
+  by (smt order_trans while_right_plus_below while_sub_associative)
+
+text \<open>Theorem 9.47\<close>
+
+lemma while_mult_transitive:
+  "x \<le> z \<star> y \<Longrightarrow> y \<le> z \<star> w \<Longrightarrow> x \<le> z \<star> w"
+  by (smt order_trans while_right_isotone while_transitive)
+
+text \<open>Theorem 9.48\<close>
+
+lemma while_mult_upper_bound:
+  "x \<le> z \<star> 1 \<Longrightarrow> y \<le> z \<star> w \<Longrightarrow> x * y \<le> z \<star> w"
+  by (metis order.trans mult_isotone while_one_mult_below while_transitive)
+
+lemma while_one_mult_while_below:
+  "(y \<star> 1) * (y \<star> v) \<le> y \<star> v"
+  by (simp add: while_mult_upper_bound)
+
+text \<open>Theorem 9.34\<close>
+
+lemma while_sub_dist:
+  "x \<star> z \<le> (x \<squnion> y) \<star> z"
+  by (simp add: while_left_isotone)
+
+lemma while_sub_dist_1:
+  "x * z \<le> (x \<squnion> y) \<star> z"
+  using order.trans while_mult_increasing while_sub_dist by blast
+
+lemma while_sub_dist_2:
+  "x * y * z \<le> (x \<squnion> y) \<star> z"
+  by (metis sup_commute mult_assoc while_mult_transitive while_sub_dist_1)
+
+text \<open>Theorem 9.36\<close>
+
+lemma while_sub_dist_3:
+  "x \<star> (y \<star> z) \<le> (x \<squnion> y) \<star> z"
+  by (metis sup_commute while_mult_transitive while_sub_dist)
+
+text \<open>Theorem 9.44\<close>
+
+lemma while_absorb_2:
+  "x \<le> y \<Longrightarrow> y \<star> (x \<star> z) = y \<star> z"
+  using sup_left_divisibility while_sumstar while_transitive by auto
+
+lemma 
while_simulate_right_plus_1: + "z * x \ y * (y \ z) \ z * (x \ w) \ y \ (z * w)" + by (metis sup_bot_right mult_left_zero while_simulate_right_plus) + +text \Theorem 9.39\ + +lemma while_sumstar_1_below: + "x \ ((y * (x \ 1)) \ z) \ ((x \ 1) * y) \ (x \ z)" +proof - + have 1: "x * (((x \ 1) * y) \ (x \ z)) \ ((x \ 1) * y) \ (x \ z)" + by (smt sup_mono sup_ge2 mult_assoc mult_left_dist_sup mult_right_sub_dist_sup_right while_left_unfold) + have "x \ ((y * (x \ 1)) \ z) \ (x \ z) \ (x \ (y * (((x \ 1) * y) \ ((x \ 1) * z))))" + by (metis eq_refl while_left_dist_sup while_productstar) + also have "... \ (x \ z) \ (x \ ((x \ 1) * y * (((x \ 1) * y) \ ((x \ 1) * z))))" + by (metis sup_right_isotone mult_assoc mult_left_one mult_right_sub_dist_sup_left while_left_unfold while_right_isotone) + also have "... \ (x \ z) \ (x \ (((x \ 1) * y) \ ((x \ 1) * z)))" + using semiring.add_left_mono while_left_plus_below while_right_isotone by blast + also have "... \ x \ (((x \ 1) * y) \ (x \ z))" + by (meson order.trans le_supI while_increasing while_one_mult_below while_right_isotone) + also have "... \ (((x \ 1) * y) \ (x \ z)) \ (x \ bot)" + using 1 while_simulate_absorb by auto + also have "... = ((x \ 1) * y) \ (x \ z)" + by (smt sup_assoc sup_commute sup_bot_left while_left_dist_sup while_left_unfold) + finally show ?thesis + . +qed + +lemma while_sumstar_2_below: + "((x \ 1) * y) \ (x \ z) \ (x \ y) \ (x \ z)" + by (simp add: while_left_isotone while_one_mult_below) + +text \Theorem 9.38\ + +lemma while_sup_1_below: + "x \ ((y * (x \ 1)) \ z) \ (x \ y) \ z" +proof - + have "((x \ 1) * y) \ ((x \ 1) * z) \ (x \ y) \ z" + using while_sumstar while_isotone while_one_mult_below by auto + hence "(y * (x \ 1)) \ z \ z \ y * ((x \ y) \ z)" + by (metis sup_right_isotone mult_right_isotone while_productstar) + also have "... 
\ (x \ y) \ z" + by (metis sup_right_isotone sup_ge2 mult_left_isotone while_left_unfold) + finally show ?thesis + using while_mult_transitive while_sub_dist by blast +qed + +text \Theorem 9.16\ + +lemma while_while_while: + "((x \ 1) \ 1) \ y = (x \ 1) \ y" + by (smt (z3) sup.absorb1 while_sumstar while_absorb_2 while_increasing while_one_increasing) + +lemma while_one: + "(1 \ 1) \ y = 1 \ y" + by (metis while_while_while while_zero) + +text \Theorem 9.22\ + +lemma while_sup_below: + "x \ y \ x \ (y \ 1)" + by (metis le_supI le_supI1 while_left_dist_sup while_left_unfold while_one_increasing) + +text \Theorem 9.32\ + +lemma while_sup_2: + "(x \ y) \ z \ (x \ (y \ 1)) \ z" + using while_left_isotone while_sup_below by auto + +text \Theorem 9.45\ + +lemma while_sup_one_left_unfold: + "1 \ x \ x * (x \ y) = x \ y" + by (metis order.antisym mult_1_left mult_left_isotone while_left_plus_below) + +lemma while_sup_one_right_unfold: + "1 \ x \ x \ (x * y) = x \ y" + using while_mult_star_exchange while_sup_one_left_unfold by auto + +text \Theorem 9.30\ + +lemma while_decompose_7: + "(x \ y) \ z = x \ (y \ ((x \ y) \ z))" + by (metis order.eq_iff order_trans while_increasing while_sub_dist_3 while_transitive) + +text \Theorem 9.31\ + +lemma while_decompose_8: + "(x \ y) \ z = (x \ y) \ (x \ (y \ z))" + using while_absorb_2 by auto + +text \Theorem 9.27\ + +lemma while_decompose_9: + "(x \ (y \ 1)) \ z = x \ (y \ ((x \ (y \ 1)) \ z))" + by (smt sup_commute le_iff_sup order_trans while_sup_below while_increasing while_sub_dist_3) + +lemma while_decompose_10: + "(x \ (y \ 1)) \ z = (x \ (y \ 1)) \ (x \ (y \ z))" +proof - + have 1: "(x \ (y \ 1)) \ z \ (x \ (y \ 1)) \ (x \ (y \ z))" + by (meson order.trans while_increasing while_right_isotone) + have "x \ (y \ 1) \ x \ (y \ 1)" + using while_increasing while_sup_below by auto + hence "(x \ (y \ 1)) \ (x \ (y \ z)) \ (x \ (y \ 1)) \ z" + using while_absorb_2 while_sup_below by force + thus ?thesis + using 1 order.antisym by 
blast +qed + +lemma while_back_loop_fixpoint: + "z * (y \ (y * x)) \ z * x = z * (y \ x)" + by (metis sup_commute mult_left_dist_sup while_right_unfold) + +lemma while_back_loop_prefixpoint: + "z * (y \ 1) * y \ z \ z * (y \ 1)" + by (metis le_supI le_supI2 mult_1_right mult_right_isotone mult_assoc while_increasing while_one_mult_below while_right_unfold) + +text \Theorem 9\ + +lemma while_loop_is_fixpoint: + "is_fixpoint (\x . y * x \ z) (y \ z)" + using is_fixpoint_def sup_commute while_left_unfold by auto + +text \Theorem 9\ + +lemma while_back_loop_is_prefixpoint: + "is_prefixpoint (\x . x * y \ z) (z * (y \ 1))" + using is_prefixpoint_def while_back_loop_prefixpoint by auto + +text \Theorem 9.20\ + +lemma while_while_sup: + "(1 \ x) \ y = (x \ 1) \ y" + by (metis sup_commute while_decompose_10 while_sumstar while_zero) + +lemma while_while_mult_sub: + "x \ (1 \ y) \ (x \ 1) \ y" + by (metis sup_commute while_sub_dist_3 while_while_sup) + +text \Theorem 9.11\ + +lemma while_right_plus: + "(x \ x) \ y = x \ y" + by (metis sup_idem while_plus_one while_sumstar while_transitive) + +text \Theorem 9.12\ + +lemma while_left_plus: + "(x * (x \ 1)) \ y = x \ y" + by (simp add: while_mult_star_exchange while_right_plus) + +text \Theorem 9.9\ + +lemma while_below_while_one: + "x \ x \ x \ 1" + by (meson eq_refl while_mult_transitive while_one_increasing) + +lemma while_below_while_one_mult: + "x * (x \ x) \ x * (x \ 1)" + by (simp add: mult_right_isotone while_below_while_one) + +text \Theorem 9.23\ + +lemma while_sup_sub_sup_one: + "x \ (x \ y) \ x \ (1 \ y)" + using semiring.add_right_mono while_left_dist_sup while_below_while_one by auto + +lemma while_sup_sub_sup_one_mult: + "x * (x \ (x \ y)) \ x * (x \ (1 \ y))" + by (simp add: mult_right_isotone while_sup_sub_sup_one) + +lemma while_elimination: + "x * y = bot \ x * (y \ z) = x * z" + by (metis sup_bot_right mult_assoc mult_left_dist_sup mult_left_zero while_left_unfold) + +text \Theorem 9.8\ + +lemma 
while_square: + "(x * x) \ y \ x \ y" + by (metis while_left_isotone while_mult_increasing while_right_plus) + +text \Theorem 9.35\ + +lemma while_mult_sub_sup: + "(x * y) \ z \ (x \ y) \ z" + by (metis while_increasing while_isotone while_mult_increasing while_sumstar) + +text \Theorem 9.43\ + +lemma while_absorb_1: + "x \ y \ x \ (y \ z) = y \ z" + by (metis order.antisym le_iff_sup while_increasing while_sub_dist_3) + +lemma while_absorb_3: + "x \ y \ x \ (y \ z) = y \ (x \ z)" + by (simp add: while_absorb_1 while_absorb_2) + +text \Theorem 9.24\ + +lemma while_square_2: + "(x * x) \ ((x \ 1) * y) \ x \ y" + by (metis le_supI while_increasing while_mult_transitive while_mult_upper_bound while_one_increasing while_square) + +lemma while_separate_unfold_below: + "(y * (x \ 1)) \ z \ (y \ z) \ (y \ (y * x * (x \ ((y * (x \ 1)) \ z))))" +proof - + have "(y * (x \ 1)) \ z = (y \ (y * x * (x \ 1))) \ (y \ z)" + by (metis mult_assoc mult_left_dist_sup mult_1_right while_left_unfold while_sumstar) + hence "(y * (x \ 1)) \ z = (y \ z) \ (y \ (y * x * (x \ 1))) * ((y * (x \ 1)) \ z)" + using while_left_unfold by auto + also have "... \ (y \ z) \ (y \ (y * x * (x \ 1)) * ((y * (x \ 1)) \ z))" + using sup_right_isotone while_sub_associative by auto + also have "... \ (y \ z) \ (y \ (y * x * (x \ ((y * (x \ 1)) \ z))))" + by (smt sup_right_isotone mult_assoc mult_right_isotone while_one_mult_below while_right_isotone) + finally show ?thesis + . +qed + +text \Theorem 9.33\ + +lemma while_mult_zero_sup: + "(x \ y * bot) \ z = x \ ((y * bot) \ z)" +proof - + have "(x \ y * bot) \ z = (x \ (y * bot)) \ (x \ z)" + by (simp add: while_sumstar) + also have "... = (x \ z) \ (x \ (y * bot)) * ((x \ (y * bot)) \ (x \ z))" + using while_left_unfold by auto + also have "... \ (x \ z) \ (x \ (y * bot))" + by (metis sup_right_isotone mult_assoc mult_left_zero while_sub_associative) + also have "... 
= x \ ((y * bot) \ z)" + by (simp add: sup_commute while_left_dist_sup while_zero_2) + finally show ?thesis + by (simp add: order.antisym while_sub_dist_3) +qed + +lemma while_sup_mult_zero: + "(x \ y * bot) \ y = x \ y" + by (simp add: sup_absorb2 zero_right_mult_decreasing while_mult_zero_sup while_zero_2) + +lemma while_mult_zero_sup_2: + "(x \ y * bot) \ z = (x \ z) \ (x \ (y * bot))" + by (simp add: sup_commute while_left_dist_sup while_mult_zero_sup while_zero_2) + +lemma while_sup_zero_star: + "(x \ y * bot) \ z = x \ (y * bot \ z)" + by (simp add: while_mult_zero_sup while_zero_2) + +lemma while_unfold_sum: + "(x \ y) \ z = (x \ z) \ (x \ (y * ((x \ y) \ z)))" + apply (rule order.antisym) + apply (metis semiring.add_left_mono while_sub_associative while_sumstar while_left_unfold) + by (metis le_supI while_decompose_7 while_mult_increasing while_right_isotone while_sub_dist) + +lemma while_simulate_left: + "x * z \ z * y \ w \ x \ (z * v) \ z * (y \ v) \ (x \ (w * (y \ v)))" + by (metis sup_left_isotone mult_right_isotone order_trans while_one_increasing while_simulate_left_plus) + +lemma while_simulate_right: + assumes "z * x \ y * z \ w" + shows "z * (x \ v) \ y \ (z * v \ w * (x \ v))" +proof - + have "y * z \ w \ y * (y \ z) \ w" + using sup_left_isotone while_increasing while_mult_star_exchange by force + thus ?thesis + by (meson assms order.trans while_simulate_right_plus) +qed + +lemma while_simulate: + "z * x \ y * z \ z * (x \ v) \ y \ (z * v)" + by (metis sup_bot_right mult_left_zero while_simulate_right) + +text \Theorem 9.14\ + +lemma while_while_mult: + "1 \ (x \ y) = (x \ 1) \ y" +proof - + have "(x \ 1) \ y \ (x \ 1) * ((x \ 1) \ y)" + by (simp add: while_increasing while_sup_one_left_unfold) + also have "... \ 1 \ ((x \ 1) * y)" + by (simp add: while_one_mult_while_below while_simulate) + also have "... 
\ 1 \ (x \ y)" + by (simp add: while_isotone while_one_mult_below) + finally show ?thesis + by (metis order.antisym while_sub_dist_3 while_while_sup) +qed + +lemma while_simulate_left_1: + "x * z \ z * y \ x \ (z * v) \ z * (y \ v) \ (x \ bot)" + by (meson order.trans mult_right_isotone while_one_increasing while_simulate_left_plus_1) + +text \Theorem 9.46\ + +lemma while_associative_1: + assumes "1 \ z" + shows "x \ (y * z) = (x \ y) * z" +proof - + have "x \ (y * z) \ x \ ((x \ y) * z)" + by (simp add: mult_isotone while_increasing while_right_isotone) + also have "... \ (x \ y) * (bot \ z) \ (x \ bot)" + by (metis mult_assoc mult_right_sub_dist_sup_right while_left_unfold while_simulate_absorb while_zero) + also have "... \ (x \ y) * z \ (x \ bot) * z" + by (metis assms le_supI sup_ge1 sup_ge2 case_split_right while_plus_one while_zero) + also have "... = (x \ y) * z" + by (metis sup_bot_right mult_right_dist_sup while_left_dist_sup) + finally show ?thesis + by (simp add: order.antisym while_sub_associative) +qed + +text \Theorem 9.29\ + +lemma while_associative_while_1: + "x \ (y * (z \ 1)) = (x \ y) * (z \ 1)" + by (simp add: while_associative_1 while_increasing) + +text \Theorem 9.13\ + +lemma while_one_while: + "(x \ 1) * (y \ 1) = x \ (y \ 1)" + by (metis mult_left_one while_associative_while_1) + +lemma while_decompose_5_below: + "(x \ (y \ 1)) \ z \ (y \ (x \ 1)) \ z" + by (smt (z3) while_left_dist_sup while_sumstar while_absorb_2 while_one_increasing while_plus_one while_sub_dist) + +text \Theorem 9.26\ + +lemma while_decompose_5: + "(x \ (y \ 1)) \ z = (y \ (x \ 1)) \ z" + by (simp add: order.antisym while_decompose_5_below) + +lemma while_decompose_4: + "(x \ (y \ 1)) \ z = x \ ((y \ (x \ 1)) \ z)" + using while_absorb_1 while_decompose_5 while_sup_below by auto + +text \Theorem 11.7\ + +lemma while_simulate_2: + "y * (x \ 1) \ x \ (y \ 1) \ y \ (x \ 1) \ x \ (y \ 1)" +proof + assume "y * (x \ 1) \ x \ (y \ 1)" + hence "y * (x \ 1) \ (x \ 1) * (y \ 1)" 
+ by (simp add: while_one_while) + hence "y \ ((x \ 1) * 1) \ (x \ 1) * (y \ 1) \ (y \ bot)" + using while_simulate_left_plus_1 by blast + hence "y \ (x \ 1) \ (x \ (y \ 1)) \ (y \ bot)" + by (simp add: while_one_while) + also have "... = x \ (y \ 1)" + by (metis sup_commute le_iff_sup order_trans while_increasing while_right_isotone bot_least) + finally show "y \ (x \ 1) \ x \ (y \ 1)" + . +next + assume "y \ (x \ 1) \ x \ (y \ 1)" + thus "y * (x \ 1) \ x \ (y \ 1)" + using order_trans while_mult_increasing by blast +qed + +lemma while_simulate_1: + "y * x \ x * y \ y \ (x \ 1) \ x \ (y \ 1)" + by (metis order_trans while_mult_increasing while_right_isotone while_simulate while_simulate_2) + +lemma while_simulate_3: + "y * (x \ 1) \ x \ 1 \ y \ (x \ 1) \ x \ (y \ 1)" + by (meson order.trans while_increasing while_right_isotone while_simulate_2) + +text \Theorem 9.28\ + +lemma while_extra_while: + "(y * (x \ 1)) \ z = (y * (y \ (x \ 1))) \ z" +proof - + have "y * (y \ (x \ 1)) \ y * (x \ 1) * (y * (x \ 1) \ 1)" + using while_back_loop_prefixpoint while_left_isotone while_mult_star_exchange by auto + hence 1: "(y * (y \ (x \ 1))) \ z \ (y * (x \ 1)) \ z" + by (metis while_simulate_right_plus_1 mult_left_one) + have "(y * (x \ 1)) \ z \ (y * (y \ (x \ 1))) \ z" + by (simp add: while_increasing while_left_isotone while_mult_star_exchange) + thus ?thesis + using 1 order.antisym by auto +qed + +text \Theorem 11.6\ + +lemma while_separate_4: + assumes "y * x \ x * (x \ (1 \ y))" + shows "(x \ y) \ z = x \ (y \ z)" +proof - + have "(1 \ y) * x \ x * (x \ (1 \ y))" + by (smt assms sup_assoc le_supI mult_left_one mult_left_sub_dist_sup_left mult_right_dist_sup mult_1_right while_left_unfold) + hence 1: "(1 \ y) * (x \ 1) \ x \ (1 \ y)" + by (metis mult_1_right while_simulate_right_plus_1) + have "y * x * (x \ 1) \ x * (x \ ((1 \ y) * (x \ 1)))" + by (smt assms le_iff_sup mult_assoc mult_right_dist_sup while_associative_1 while_increasing) + also have "... 
\ x * (x \ (1 \ y))" + using 1 mult_right_isotone while_mult_transitive by blast + also have "... \ x * (x \ 1) * (y \ 1)" + by (simp add: mult_right_isotone mult_assoc while_increasing while_one_increasing while_one_while while_right_isotone) + finally have "y \ (x * (x \ 1)) \ x * (x \ 1) * (y \ 1) \ (y \ bot)" + by (metis mult_assoc mult_1_right while_simulate_left_plus_1) + hence "(y \ 1) * (y \ x) \ x * (x \ y \ 1) \ (y \ bot)" + by (smt le_iff_sup mult_assoc mult_1_right order_refl order_trans while_absorb_2 while_left_dist_sup while_mult_star_exchange while_one_mult_below while_one_while while_plus_one) + hence "(y \ 1) * ((y \ x) \ (y \ z)) \ x \ ((y \ 1) * (y \ z) \ (y \ bot) * ((y \ x) \ (y \ z)))" + by (simp add: while_simulate_right_plus) + also have "... \ x \ ((y \ z) \ (y \ bot))" + by (metis sup_mono mult_left_zero order_refl while_absorb_2 while_one_mult_below while_right_isotone while_sub_associative) + also have "... = x \ y \ z" + using sup.absorb_iff1 while_right_isotone by auto + finally show ?thesis + by (smt sup_commute le_iff_sup mult_left_one mult_right_dist_sup while_plus_one while_sub_associative while_sumstar) +qed + +lemma while_separate_5: + "y * x \ x * (x \ (x \ y)) \ (x \ y) \ z = x \ (y \ z)" + using order_lesseq_imp while_separate_4 while_sup_sub_sup_one_mult by blast + +lemma while_separate_6: + "y * x \ x * (x \ y) \ (x \ y) \ z = x \ (y \ z)" + by (smt order_trans while_increasing while_mult_star_exchange while_separate_5) + +text \Theorem 11.4\ + +lemma while_separate_1: + "y * x \ x * y \ (x \ y) \ z = x \ (y \ z)" + using mult_left_sub_dist_sup_right order_lesseq_imp while_separate_6 by blast + +text \Theorem 11.2\ + +lemma while_separate_mult_1: + "y * x \ x * y \ (x * y) \ z \ x \ (y \ z)" + by (metis while_mult_sub_sup while_separate_1) + +text \Theorem 11.5\ + +lemma separation: + assumes "y * x \ x * (y \ 1)" + shows "(x \ y) \ z = x \ (y \ z)" +proof - + have "y \ x \ x * (y \ 1) \ (y \ bot)" + by (metis assms 
mult_1_right while_simulate_left_plus_1) + also have "... \ x * (x \ y \ 1) \ (y \ bot)" + using sup_left_isotone while_increasing while_mult_star_exchange by force + finally have "(y \ 1) * (y \ x) \ x * (x \ y \ 1) \ (y \ bot)" + using order.trans while_one_mult_while_below by blast + hence "(y \ 1) * ((y \ x) \ (y \ z)) \ x \ ((y \ 1) * (y \ z) \ (y \ bot) * ((y \ x) \ (y \ z)))" + by (simp add: while_simulate_right_plus) + also have "... \ x \ ((y \ z) \ (y \ bot))" + by (metis sup_mono mult_left_zero order_refl while_absorb_2 while_one_mult_below while_right_isotone while_sub_associative) + also have "... = x \ y \ z" + using sup.absorb_iff1 while_right_isotone by auto + finally show ?thesis + by (smt sup_commute le_iff_sup mult_left_one mult_right_dist_sup while_plus_one while_sub_associative while_sumstar) +qed + +text \Theorem 11.5\ + +lemma while_separate_left: + "y * x \ x * (y \ 1) \ y \ (x \ z) \ x \ (y \ z)" + by (metis sup_commute separation while_sub_dist_3) + +text \Theorem 11.6\ + +lemma while_simulate_4: + "y * x \ x * (x \ (1 \ y)) \ y \ (x \ z) \ x \ (y \ z)" + by (metis sup_commute while_separate_4 while_sub_dist_3) + +lemma while_simulate_5: + "y * x \ x * (x \ (x \ y)) \ y \ (x \ z) \ x \ (y \ z)" + by (smt order_trans while_sup_sub_sup_one_mult while_simulate_4) + +lemma while_simulate_6: + "y * x \ x * (x \ y) \ y \ (x \ z) \ x \ (y \ z)" + by (smt order_trans while_increasing while_mult_star_exchange while_simulate_5) + +text \Theorem 11.3\ + +lemma while_simulate_7: + "y * x \ x * y \ y \ (x \ z) \ x \ (y \ z)" + using mult_left_sub_dist_sup_right order_lesseq_imp while_simulate_6 by blast + +lemma while_while_mult_1: + "x \ (1 \ y) = 1 \ (x \ y)" + by (metis sup_commute mult_left_one mult_1_right order_refl while_separate_1) + +text \Theorem 9.15\ + +lemma while_while_mult_2: + "x \ (1 \ y) = (x \ 1) \ y" + by (simp add: while_while_mult while_while_mult_1) + +text \Theorem 11.8\ + +lemma while_import: + assumes "p \ p * p \ p \ 1 \ p * 
x \ x * p" + shows "p * (x \ y) = p * ((p * x) \ y)" +proof - + have "p * (x \ y) \ (p * x) \ (p * y)" + using assms test_preserves_equation while_simulate by auto + also have "... \ (p * x) \ y" + by (metis assms le_iff_sup mult_left_one mult_right_dist_sup while_right_isotone) + finally have 2: "p * (x \ y) \ p * ((p * x) \ y)" + by (smt assms sup_commute le_iff_sup mult_assoc mult_left_dist_sup mult_1_right) + have "p * ((p * x) \ y) \ p * (x \ y)" + by (metis assms mult_left_isotone mult_left_one mult_right_isotone while_left_isotone) + thus ?thesis + using 2 order.antisym by auto +qed + +text \Theorem 11.8\ + +lemma while_preserve: + assumes "p \ p * p" + and "p \ 1" + and "p * x \ x * p" + shows "p * (x \ y) = p * (x \ (p * y))" +proof (rule order.antisym) + show "p * (x \ y) \ p * (x \ (p * y))" + by (metis assms order.antisym coreflexive_transitive mult_right_isotone mult_assoc while_simulate) + show "p * (x \ (p * y)) \ p * (x \ y)" + by (metis assms(2) mult_left_isotone mult_left_one mult_right_isotone while_right_isotone) +qed + +lemma while_plus_below_while: + "(x \ 1) * x \ x \ 1" + by (simp add: while_mult_upper_bound while_one_increasing) + +text \Theorem 9.40\ + +lemma while_01: + "(w * (x \ 1)) \ (y * z) \ (x \ w) \ ((x \ y) * z)" +proof - + have "(w * (x \ 1)) \ (y * z) = y * z \ w * (((x \ 1) * w) \ ((x \ 1) * y * z))" + by (metis mult_assoc while_productstar) + also have "... \ y * z \ w * ((x \ w) \ ((x \ y) * z))" + by (metis sup_right_isotone mult_left_isotone mult_right_isotone while_isotone while_one_mult_below) + also have "... 
\ (x \ y) * z \ (x \ w) * ((x \ w) \ ((x \ y) * z))" + by (meson mult_left_isotone semiring.add_mono while_increasing) + finally show ?thesis + using while_left_unfold by auto +qed + +text \Theorem 9.37\ + +lemma while_while_sub_associative: + "x \ (y \ z) \ ((x \ y) \ z) \ (x \ z)" +proof - + have 1: "x * (x \ 1) \ (x \ 1) * ((x \ y) \ 1)" + by (metis le_supE while_back_loop_prefixpoint while_mult_increasing while_mult_transitive while_one_while) + have "x \ (y \ z) \ x \ ((x \ 1) * (y \ z))" + by (metis mult_left_isotone mult_left_one while_increasing while_right_isotone) + also have "... \ (x \ 1) * ((x \ y) \ (y \ z)) \ (x \ bot)" + using 1 while_simulate_left_plus_1 by auto + also have "... \ (x \ 1) * ((x \ y) \ z) \ (x \ z)" + by (simp add: le_supI1 sup_commute while_absorb_2 while_increasing while_right_isotone) + also have "... = (x \ 1) * z \ (x \ 1) * (x \ y) * ((x \ y) \ z) \ (x \ z)" + by (metis mult_assoc mult_left_dist_sup while_left_unfold) + also have "... = (x \ y) * ((x \ y) \ z) \ (x \ z)" + by (smt sup_assoc sup_commute le_iff_sup mult_left_one mult_right_dist_sup order_refl while_absorb_1 while_plus_one while_sub_associative) + also have "... \ ((x \ y) \ z) \ (x \ z)" + using sup_left_isotone while_left_plus_below by auto + finally show ?thesis + . 
+qed + +lemma while_induct: + "x * z \ z \ y \ z \ x \ 1 \ z \ x \ y \ z" + by (metis le_supI1 sup_commute sup_bot_left le_iff_sup while_right_isotone while_simulate_absorb) + +(* +lemma while_sumstar_4_below: "(x \ y) \ ((x \ 1) * z) \ x \ ((y * (x \ 1)) \ z)" oops +lemma while_sumstar_2: "(x \ y) \ z = x \ ((y * (x \ 1)) \ z)" oops +lemma while_sumstar_3: "(x \ y) \ z = ((x \ 1) * y) \ (x \ z)" oops +lemma while_decompose_6: "x \ ((y * (x \ 1)) \ z) = y \ ((x * (y \ 1)) \ z)" oops +lemma while_finite_associative: "x \ bot = bot \ (x \ y) * z = x \ (y * z)" oops +lemma atomicity_refinement: "s = s * q \ x = q * x \ q * b = bot \ r * b \ b * r \ r * l \ l * r \ x * l \ l * x \ b * l \ l * b \ q * l \ l * q \ r \ q \ q * (r \ 1) \ q \ 1 \ s * ((x \ b \ r \ l) \ (q * z)) \ s * ((x * (b \ q) \ r \ l) \ z)" oops + +lemma while_separate_right_plus: "y * x \ x * (x \ (1 \ y)) \ 1 \ y \ (x \ z) \ x \ (y \ z)" oops +lemma while_square_1: "x \ 1 = (x * x) \ (x \ 1)" oops +lemma while_absorb_below_one: "y * x \ x \ y \ x \ 1 \ x" oops +lemma "y \ (x \ 1) \ x \ (y \ 1) \ (x \ y) \ 1 = x \ (y \ 1)" oops +lemma "y * x \ (1 \ x) * (y \ 1) \ (x \ y) \ 1 = x \ (y \ 1)" oops +*) + +end + +class bounded_binary_itering = bounded_idempotent_left_zero_semiring + binary_itering +begin + +text \Theorem 9\ + +lemma while_right_top: + "x \ top = top" + by (metis sup_left_top while_left_unfold) + +text \Theorem 9\ + +lemma while_left_top: + "top * (x \ 1) = top" + by (meson order.antisym le_supE top_greatest while_back_loop_prefixpoint) + +end + +class extended_binary_itering = binary_itering + + assumes while_denest_0: "w * (x \ (y * z)) \ (w * (x \ y)) \ (w * (x \ y) * z)" +begin + +text \Theorem 10.2\ + +lemma while_denest_1: + "w * (x \ (y * z)) \ (w * (x \ y)) \ z" + using while_denest_0 while_mult_increasing while_mult_transitive by blast + +lemma while_mult_sub_while_while: + "x \ (y * z) \ (x \ y) \ z" + by (metis mult_left_one while_denest_1) + +lemma while_zero_zero: + "(x \ bot) 
\ bot = x \ bot" + by (metis order.antisym mult_left_zero sup_bot_left while_left_unfold while_sub_associative while_mult_sub_while_while) + +text \Theorem 10.11\ + +lemma while_mult_zero_zero: + "(x * (y \ bot)) \ bot = x * (y \ bot)" + apply (rule order.antisym) + apply (metis sup_bot_left while_left_unfold mult_assoc le_supI1 mult_left_zero mult_right_isotone while_left_dist_sup while_sub_associative) + by (metis mult_left_zero while_denest_1) + +text \Theorem 10.3\ + +lemma while_denest_2: + "w * ((x \ (y * w)) \ z) = w * (((x \ y) * w) \ z)" + apply (rule order.antisym) + apply (metis mult_assoc while_denest_0 while_simulate_right_plus_1 while_slide) + by (simp add: mult_isotone while_left_isotone while_sub_associative) + +text \Theorem 10.12\ + +lemma while_denest_3: + "(x \ w) \ (x \ bot) = (x \ w) \ bot" + by (metis while_absorb_2 while_right_isotone while_zero_zero bot_least) + +text \Theorem 10.15\ + +lemma while_02: + "x \ ((x \ w) \ ((x \ y) * z)) = (x \ w) \ ((x \ y) * z)" +proof - + have "x * ((x \ w) \ ((x \ y) * z)) = x * (x \ y) * z \ x * (x \ w) * ((x \ w) \ ((x \ y) * z))" + by (metis mult_assoc mult_left_dist_sup while_left_unfold) + also have "... \ (x \ w) \ ((x \ y) * z)" + by (metis sup_mono mult_right_sub_dist_sup_right while_left_unfold) + finally have "x \ ((x \ w) \ ((x \ y) * z)) \ ((x \ w) \ ((x \ y) * z)) \ (x \ bot)" + using while_simulate_absorb by auto + also have "... = (x \ w) \ ((x \ y) * z)" + by (metis sup_commute le_iff_sup order_trans while_mult_sub_while_while while_right_isotone bot_least) + finally show ?thesis + by (simp add: order.antisym while_increasing) +qed + +lemma while_sumstar_3_below: + "(x \ y) \ (x \ z) \ (x \ y) \ ((x \ 1) * z)" +proof - + have "(x \ y) \ (x \ z) = (x \ z) \ ((x \ y) \ ((x \ y) * (x \ z)))" + using while_right_unfold by blast + also have "... \ (x \ z) \ ((x \ y) \ (x \ (y * (x \ z))))" + by (meson sup_right_isotone while_right_isotone while_sub_associative) + also have "... 
\ (x \ z) \ ((x \ y) \ (x \ ((x \ y) \ (x \ z))))" + by (smt sup_right_isotone order_trans while_increasing while_mult_upper_bound while_one_increasing while_right_isotone) + also have "... \ (x \ z) \ ((x \ y) \ (x \ ((x \ y) \ ((x \ 1) * z))))" + by (metis sup_right_isotone mult_left_isotone mult_left_one order_trans while_increasing while_right_isotone while_sumstar while_transitive) + also have "... = (x \ z) \ ((x \ y) \ ((x \ 1) * z))" + by (simp add: while_transitive while_02) + also have "... = (x \ y) \ ((x \ 1) * z)" + by (smt sup_assoc mult_left_one mult_right_dist_sup while_02 while_left_dist_sup while_plus_one) + finally show ?thesis + . +qed + +lemma while_sumstar_4_below: + "(x \ y) \ ((x \ 1) * z) \ x \ ((y * (x \ 1)) \ z)" +proof - + have "(x \ y) \ ((x \ 1) * z) = (x \ 1) * z \ (x \ y) * ((x \ y) \ ((x \ 1) * z))" + using while_left_unfold by auto + also have "... \ (x \ z) \ (x \ (y * ((x \ y) \ ((x \ 1) * z))))" + by (meson sup_mono while_one_mult_below while_sub_associative) + also have "... = (x \ z) \ (x \ (y * (((x \ 1) * y) \ ((x \ 1) * z))))" + by (metis mult_left_one while_denest_2) + also have "... = x \ ((y * (x \ 1)) \ z)" + by (metis while_left_dist_sup while_productstar) + finally show ?thesis + . 
+qed + +text \Theorem 10.10\ + +lemma while_sumstar_1: + "(x \ y) \ z = (x \ y) \ ((x \ 1) * z)" + by (smt order.eq_iff order_trans while_sup_1_below while_sumstar while_sumstar_3_below while_sumstar_4_below) + +text \Theorem 10.8\ + +lemma while_sumstar_2: + "(x \ y) \ z = x \ ((y * (x \ 1)) \ z)" + using order.antisym while_sup_1_below while_sumstar_1 while_sumstar_4_below by auto + +text \Theorem 10.9\ + +lemma while_sumstar_3: + "(x \ y) \ z = ((x \ 1) * y) \ (x \ z)" + using order.antisym while_sumstar while_sumstar_1_below while_sumstar_2_below while_sumstar_2 by force + +text \Theorem 10.6\ + +lemma while_decompose_6: + "x \ ((y * (x \ 1)) \ z) = y \ ((x * (y \ 1)) \ z)" + by (metis sup_commute while_sumstar_2) + +text \Theorem 10.4\ + +lemma while_denest_4: + "(x \ w) \ (x \ (y * z)) = (x \ w) \ ((x \ y) * z)" +proof - + have "(x \ w) \ (x \ (y * z)) = x \ ((w * (x \ 1)) \ (y * z))" + using while_sumstar while_sumstar_2 by force + also have "... \ (x \ w) \ ((x \ y) * z)" + by (metis while_01 while_right_isotone while_02) + finally show ?thesis + using order.antisym while_right_isotone while_sub_associative by auto +qed + +text \Theorem 10.13\ + +lemma while_denest_5: + "w * ((x \ (y * w)) \ (x \ (y * z))) = w * (((x \ y) * w) \ ((x \ y) * z))" + by (simp add: while_denest_2 while_denest_4) + +text \Theorem 10.5\ + +lemma while_denest_6: + "(w * (x \ y)) \ z = z \ w * ((x \ y * w) \ (y * z))" + by (metis while_denest_5 while_productstar while_sumstar) + +text \Theorem 10.1\ + +lemma while_sum_below_one: + "y * ((x \ y) \ z) \ (y * (x \ 1)) \ z" + by (simp add: while_denest_6) + +text \Theorem 10.14\ + +lemma while_separate_unfold: + "(y * (x \ 1)) \ z = (y \ z) \ (y \ (y * x * (x \ ((y * (x \ 1)) \ z))))" +proof - + have "y \ (y * x * (x \ ((y * (x \ 1)) \ z))) \ y \ (y * ((x \ y) \ z))" + using mult_right_isotone while_left_plus_below while_right_isotone mult_assoc while_sumstar_2 by auto + also have "... 
\ (y * (x \ 1)) \ z" + by (metis sup_commute sup_ge1 while_absorb_1 while_mult_star_exchange while_sum_below_one) + finally have "(y \ z) \ (y \ (y * x * (x \ ((y * (x \ 1)) \ z)))) \ (y * (x \ 1)) \ z" + using sup.bounded_iff while_back_loop_prefixpoint while_left_isotone by auto + thus ?thesis + by (simp add: order.antisym while_separate_unfold_below) +qed + +text \Theorem 10.7\ + +lemma while_finite_associative: + "x \ bot = bot \ (x \ y) * z = x \ (y * z)" + by (metis while_denest_4 while_zero) + +text \Theorem 12\ + +lemma atomicity_refinement: + assumes "s = s * q" + and "x = q * x" + and "q * b = bot" + and "r * b \ b * r" + and "r * l \ l * r" + and "x * l \ l * x" + and "b * l \ l * b" + and "q * l \ l * q" + and "r \ q \ q * (r \ 1) \ q \ 1" + shows "s * ((x \ b \ r \ l) \ (q * z)) \ s * ((x * (b \ q) \ r \ l) \ z)" +proof - + have 1: "(x \ b \ r) * l \ l * (x \ b \ r)" + by (smt assms(5-7) mult_left_dist_sup semiring.add_mono semiring.distrib_right) + have "q * ((x * (b \ r \ 1) * q) \ z) \ (x * (b \ r \ 1) * q) \ z" + using assms(9) order_lesseq_imp while_increasing while_mult_upper_bound by blast + also have "... \ (x * (b \ ((r \ 1) * q))) \ z" + by (simp add: mult_right_isotone while_left_isotone while_sub_associative mult_assoc) + also have "... \ (x * (b \ r \ q)) \ z" + by (simp add: mult_right_isotone while_left_isotone while_one_mult_below while_right_isotone) + also have "... \ (x * (b \ (q * (r \ 1)))) \ z" + by (simp add: assms(9) mult_right_isotone while_left_isotone while_right_isotone) + finally have 2: "q * ((x * (b \ r \ 1) * q) \ z) \ (x * (b \ q) * (r \ 1)) \ z" + using while_associative_while_1 mult_assoc by auto + have "s * ((x \ b \ r \ l) \ (q * z)) = s * (l \ (x \ b \ r) \ (q * z))" + using 1 sup_commute while_separate_1 by fastforce + also have "... = s * q * (l \ b \ r \ (q * x * (b \ r \ 1)) \ (q * z))" + by (smt assms(1,2,4) sup_assoc sup_commute while_sumstar_2 while_separate_1) + also have "... 
= s * q * (l \ b \ r \ (q * ((x * (b \ r \ 1) * q) \ z)))" + by (simp add: while_slide mult_assoc) + also have "... \ s * q * (l \ b \ r \ (x * (b \ q) * (r \ 1)) \ z)" + using 2 by (meson mult_right_isotone while_right_isotone) + also have "... \ s * (l \ q * (b \ r \ (x * (b \ q) * (r \ 1)) \ z))" + by (simp add: assms(8) mult_right_isotone while_simulate mult_assoc) + also have "... = s * (l \ q * (r \ (x * (b \ q) * (r \ 1)) \ z))" + using assms(3) while_elimination by auto + also have "... \ s * (l \ r \ (x * (b \ q) * (r \ 1)) \ z)" + by (meson assms(9) order.trans mult_right_isotone order.refl while_increasing while_mult_upper_bound while_right_isotone) + also have "... = s * (l \ (r \ x * (b \ q)) \ z)" + by (simp add: while_sumstar_2) + also have "... \ s * ((x * (b \ q) \ r \ l) \ z)" + using mult_right_isotone sup_commute while_sub_dist_3 by auto + finally show ?thesis + . +qed + +end + +class bounded_extended_binary_itering = bounded_binary_itering + extended_binary_itering + +end + diff --git a/thys/Correctness_Algebras/Binary_Iterings_Nonstrict.thy b/thys/Correctness_Algebras/Binary_Iterings_Nonstrict.thy new file mode 100644 --- /dev/null +++ b/thys/Correctness_Algebras/Binary_Iterings_Nonstrict.thy @@ -0,0 +1,505 @@ +(* Title: Nonstrict Binary Iterings + Author: Walter Guttmann + Maintainer: Walter Guttmann +*) + +section \Nonstrict Binary Iterings\ + +theory Binary_Iterings_Nonstrict + +imports Omega_Algebras Binary_Iterings + +begin + +class nonstrict_itering = bounded_left_zero_omega_algebra + while + + assumes while_def: "x \ y = x\<^sup>\ \ x\<^sup>\ * y" +begin + +text \Theorem 8.2\ + +subclass bounded_binary_itering +proof (unfold_locales) + fix x y z + show "(x * y) \ z = z \ x * ((y * x) \ (y * z))" + by (metis sup_commute mult_assoc mult_left_dist_sup omega_loop_fixpoint omega_slide star.circ_slide while_def) +next + fix x y z + show "(x \ y) \ z = (x \ y) \ (x \ z)" + proof - + have 1: "(x \ y) \ z = (x\<^sup>\ * y)\<^sup>\ \ (x\<^sup>\ * 
y)\<^sup>\ * (x\<^sup>\ \ x\<^sup>\ * z)" + using mult_left_dist_sup omega_decompose star.circ_sup_9 sup_assoc while_def mult_assoc by auto + hence 2: "(x \ y) \ z \ (x \ y) \ (x \ z)" + by (smt sup_mono sup_ge2 le_iff_sup mult_left_isotone omega_sub_dist star.circ_sub_dist while_def) + let ?rhs = "x\<^sup>\ * y * ((x\<^sup>\ \ x\<^sup>\ * y)\<^sup>\ \ (x\<^sup>\ \ x\<^sup>\ * y)\<^sup>\ * (x\<^sup>\ \ x\<^sup>\ * z)) \ (x\<^sup>\ \ x\<^sup>\ * z)" + have "x\<^sup>\ * (x\<^sup>\ \ x\<^sup>\ * y)\<^sup>\ \ x\<^sup>\" + by (simp add: omega_sub_vector) + hence "x\<^sup>\ * (x\<^sup>\ \ x\<^sup>\ * y)\<^sup>\ \ x\<^sup>\ * y * (x\<^sup>\ \ x\<^sup>\ * y)\<^sup>\ \ ?rhs" + by (smt sup_commute sup_mono sup_ge1 mult_left_dist_sup order_trans) + hence 3: "(x\<^sup>\ \ x\<^sup>\ * y)\<^sup>\ \ ?rhs" + by (metis mult_right_dist_sup omega_unfold) + have "x\<^sup>\ * (x\<^sup>\ \ x\<^sup>\ * y)\<^sup>\ * (x\<^sup>\ \ x\<^sup>\ * z) \ x\<^sup>\" + by (simp add: omega_mult_star_2 omega_sub_vector) + hence "x\<^sup>\ * (x\<^sup>\ \ x\<^sup>\ * y)\<^sup>\ * (x\<^sup>\ \ x\<^sup>\ * z) \ x\<^sup>\ * y * (x\<^sup>\ \ x\<^sup>\ * y)\<^sup>\ * (x\<^sup>\ \ x\<^sup>\ * z) \ ?rhs" + by (smt sup_commute sup_mono sup_ge2 mult_assoc mult_left_dist_sup order_trans) + hence "(x\<^sup>\ \ x\<^sup>\ * y)\<^sup>\ * (x\<^sup>\ \ x\<^sup>\ * z) \ ?rhs" + by (smt sup_assoc sup_ge2 le_iff_sup mult_assoc mult_right_dist_sup star.circ_loop_fixpoint) + hence "(x\<^sup>\ \ x\<^sup>\ * y)\<^sup>\ \ (x\<^sup>\ \ x\<^sup>\ * y)\<^sup>\ * (x\<^sup>\ \ x\<^sup>\ * z) \ ?rhs" + using 3 by simp + hence "(x\<^sup>\ \ x\<^sup>\ * y)\<^sup>\ \ (x\<^sup>\ \ x\<^sup>\ * y)\<^sup>\ * (x\<^sup>\ \ x\<^sup>\ * z) \ (x\<^sup>\ * y)\<^sup>\ \ (x\<^sup>\ * y)\<^sup>\ * (x\<^sup>\ \ x\<^sup>\ * z)" + by (metis sup_commute omega_induct) + thus ?thesis + using 1 2 order.antisym while_def by force + qed +next + fix x y z + show "x \ (y \ z) = (x \ y) \ (x \ z)" + using mult_left_dist_sup sup_assoc sup_commute while_def by 
auto +next + fix x y z + show "(x \ y) * z \ x \ (y * z)" + using mult_semi_associative omega_sub_vector semiring.add_mono semiring.distrib_right while_def by fastforce +next + fix v w x y z + show "x * z \ z * (y \ 1) \ w \ x \ (z * v) \ z * (y \ v) \ (x \ (w * (y \ v)))" + proof + assume "x * z \ z * (y \ 1) \ w" + hence 1: "x * z \ z * y\<^sup>\ \ z * y\<^sup>\ \ w" + by (metis mult_left_dist_sup mult_1_right while_def) + let ?rhs = "z * (y\<^sup>\ \ y\<^sup>\ * v) \ x\<^sup>\ \ x\<^sup>\ * w * (y\<^sup>\ \ y\<^sup>\ * v)" + have 2: "z * v \ ?rhs" + by (metis le_supI1 mult_left_sub_dist_sup_right omega_loop_fixpoint) + have "x * z * (y\<^sup>\ \ y\<^sup>\ * v) \ ?rhs" + proof - + have "x * z * (y\<^sup>\ \ y\<^sup>\ * v) \ (z * y\<^sup>\ \ z * y\<^sup>\ \ w) * (y\<^sup>\ \ y\<^sup>\ * v)" + using 1 mult_left_isotone by auto + also have "... = z * (y\<^sup>\ * (y\<^sup>\ \ y\<^sup>\ * v) \ y\<^sup>\ * (y\<^sup>\ \ y\<^sup>\ * v)) \ w * (y\<^sup>\ \ y\<^sup>\ * v)" + by (smt mult_assoc mult_left_dist_sup mult_right_dist_sup) + also have "... = z * (y\<^sup>\ * (y\<^sup>\ \ y\<^sup>\ * v) \ y\<^sup>\ \ y\<^sup>\ * v) \ w * (y\<^sup>\ \ y\<^sup>\ * v)" + by (smt sup_assoc mult_assoc mult_left_dist_sup star.circ_transitive_equal star_mult_omega) + also have "... 
\ z * (y\<^sup>\ \ y\<^sup>\ * v) \ x\<^sup>\ * w * (y\<^sup>\ \ y\<^sup>\ * v)" + by (smt sup_commute sup_mono sup_left_top mult_left_dist_sup mult_left_one mult_right_dist_sup mult_right_sub_dist_sup_left omega_vector order_refl star.circ_plus_one) + finally show ?thesis + by (smt sup_assoc sup_commute le_iff_sup) + qed + hence "x * ?rhs \ ?rhs" + by (smt sup_assoc sup_commute sup_ge1 le_iff_sup mult_assoc mult_left_dist_sup mult_right_dist_sup omega_unfold star.circ_increasing star.circ_transitive_equal) + hence "z * v \ x * ?rhs \ ?rhs" + using 2 le_supI by blast + hence "x\<^sup>\ * z * v \ ?rhs" + by (simp add: star_left_induct mult_assoc) + hence "x\<^sup>\ \ x\<^sup>\ * z * v \ ?rhs" + by (meson order_trans sup_ge1 sup_ge2 sup_least) + thus "x \ (z * v) \ z * (y \ v) \ (x \ (w * (y \ v)))" + by (simp add: sup_assoc while_def mult_assoc) + qed +next + fix v w x y z + show "z * x \ y * (y \ z) \ w \ z * (x \ v) \ y \ (z * v \ w * (x \ v))" + proof + assume "z * x \ y * (y \ z) \ w" + hence "z * x \ y * (y\<^sup>\ \ y\<^sup>\ * z) \ w" + by (simp add: while_def) + hence 1: "z * x \ y\<^sup>\ \ y * y\<^sup>\ * z \ w" + using mult_left_dist_sup omega_unfold mult_assoc by auto + let ?rhs = "y\<^sup>\ \ y\<^sup>\ * z * v \ y\<^sup>\ * w * (x\<^sup>\ \ x\<^sup>\ * v)" + have 2: "z * x\<^sup>\ \ ?rhs" + proof - + have "z * x\<^sup>\ \ y * y\<^sup>\ * z * x\<^sup>\ \ y\<^sup>\ * x\<^sup>\ \ w * x\<^sup>\" + using 1 by (smt sup_commute le_iff_sup mult_assoc mult_right_dist_sup omega_unfold) + also have "... \ y * y\<^sup>\ * z * x\<^sup>\ \ y\<^sup>\ \ w * x\<^sup>\" + using omega_sub_vector semiring.add_mono by blast + also have "... = y * y\<^sup>\ * (z * x\<^sup>\) \ (y\<^sup>\ \ w * x\<^sup>\)" + by (simp add: sup_assoc mult_assoc) + finally have "z * x\<^sup>\ \ (y * y\<^sup>\)\<^sup>\ \ (y * y\<^sup>\)\<^sup>\ * (y\<^sup>\ \ w * x\<^sup>\)" + by (simp add: omega_induct sup_commute) + also have "... 
= y\<^sup>\ \ y\<^sup>\ * w * x\<^sup>\" + by (simp add: left_plus_omega semiring.distrib_left star.left_plus_circ star_mult_omega mult_assoc) + also have "... \ ?rhs" + using mult_left_sub_dist_sup_left sup.mono sup_ge1 by blast + finally show ?thesis + . + qed + let ?rhs2 = "y\<^sup>\ \ y\<^sup>\ * z \ y\<^sup>\ * w * (x\<^sup>\ \ x\<^sup>\)" + have "?rhs2 * x \ ?rhs2" + proof - + have 3: "y\<^sup>\ * x \ ?rhs2" + by (simp add: le_supI1 omega_sub_vector) + have "y\<^sup>\ * z * x \ y\<^sup>\ * (y\<^sup>\ \ y * y\<^sup>\ * z \ w)" + using 1 mult_right_isotone mult_assoc by auto + also have "... = y\<^sup>\ \ y\<^sup>\ * y * y\<^sup>\ * z \ y\<^sup>\ * w" + by (simp add: semiring.distrib_left star_mult_omega mult_assoc) + also have "... = y\<^sup>\ \ y * y\<^sup>\ * z \ y\<^sup>\ * w" + by (simp add: star.circ_plus_same star.circ_transitive_equal mult_assoc) + also have "... \ y\<^sup>\ \ y\<^sup>\ * z \ y\<^sup>\ * w" + by (metis sup_left_isotone sup_right_isotone mult_left_isotone star.left_plus_below_circ) + also have "... \ y\<^sup>\ \ y\<^sup>\ * z \ y\<^sup>\ * w * x\<^sup>\" + using semiring.add_left_mono star.circ_back_loop_prefixpoint by auto + finally have 4: "y\<^sup>\ * z * x \ ?rhs2" + using mult_left_sub_dist_sup_right order_lesseq_imp semiring.add_left_mono by blast + have "(x\<^sup>\ \ x\<^sup>\) * x \ x\<^sup>\ \ x\<^sup>\" + using omega_sub_vector semiring.distrib_right star.left_plus_below_circ star_plus sup_mono by fastforce + hence "y\<^sup>\ * w * (x\<^sup>\ \ x\<^sup>\) * x \ ?rhs2" + by (simp add: le_supI2 mult_right_isotone mult_assoc) + thus ?thesis + using 3 4 mult_right_dist_sup by force + qed + hence "z \ ?rhs2 * x \ ?rhs2" + by (metis omega_loop_fixpoint sup.boundedE sup_ge1 sup_least) + hence 5: "z * x\<^sup>\ \ ?rhs2" + using star_right_induct by blast + have "z * x\<^sup>\ * v \ ?rhs" + proof - + have "z * x\<^sup>\ * v \ ?rhs2 * v" + using 5 mult_left_isotone by auto + also have "... 
= y\<^sup>\ * v \ y\<^sup>\ * z * v \ y\<^sup>\ * w * (x\<^sup>\ * v \ x\<^sup>\ * v)" + using mult_right_dist_sup mult_assoc by auto + also have "... \ y\<^sup>\ \ y\<^sup>\ * z * v \ y\<^sup>\ * w * (x\<^sup>\ * v \ x\<^sup>\ * v)" + using omega_sub_vector semiring.add_right_mono by blast + also have "... \ ?rhs" + using mult_right_isotone omega_sub_vector semiring.add_left_mono semiring.add_right_mono by auto + finally show ?thesis + . + qed + hence "z * (x\<^sup>\ \ x\<^sup>\ * v) \ ?rhs" + using 2 semiring.distrib_left mult_assoc by force + thus "z * (x \ v) \ y \ (z * v \ w * (x \ v))" + by (simp add: semiring.distrib_left sup_assoc while_def mult_assoc) + qed +qed + +text \Theorem 13.8\ + +lemma while_top: + "top \ x = top" + by (metis sup_left_top star.circ_top star_omega_top while_def) + +text \Theorem 13.7\ + +lemma while_one_top: + "1 \ x = top" + by (simp add: omega_one while_def) + +lemma while_finite_associative: + "x\<^sup>\ = bot \ (x \ y) * z = x \ (y * z)" + by (simp add: while_def mult_assoc) + +lemma star_below_while: + "x\<^sup>\ * y \ x \ y" + by (simp add: while_def) + +text \Theorem 13.9\ + +lemma while_sub_mult_one: + "x * (1 \ y) \ 1 \ x" + by (simp add: omega_one while_def) + +lemma while_while_one: + "y \ (x \ 1) = y\<^sup>\ \ y\<^sup>\ * x\<^sup>\ \ y\<^sup>\ * x\<^sup>\" + using mult_left_dist_sup sup_assoc while_def by auto + +lemma while_simulate_4_plus: + assumes "y * x \ x * (x \ (1 \ y))" + shows "y * x * x\<^sup>\ \ x * (x \ (1 \ y))" +proof - + have 1: "x * (x \ (1 \ y)) = x\<^sup>\ \ x * x\<^sup>\ \ x * x\<^sup>\ * y" + using mult_left_dist_sup omega_unfold sup_assoc while_def mult_assoc by force + hence "y * x * x\<^sup>\ \ (x\<^sup>\ \ x * x\<^sup>\ \ x * x\<^sup>\ * y) * x\<^sup>\" + using assms mult_left_isotone by auto + also have "... = x\<^sup>\ * x\<^sup>\ \ x * x\<^sup>\ * x\<^sup>\ \ x * x\<^sup>\ * y * x\<^sup>\" + using mult_right_dist_sup by force + also have "... 
= x * x\<^sup>\ * (y * x * x\<^sup>\) \ x\<^sup>\ \ x * x\<^sup>\ \ x * x\<^sup>\ * y" + by (smt sup_assoc sup_commute mult_assoc omega_mult_star_2 star.circ_back_loop_fixpoint star.circ_plus_same star.circ_transitive_equal) + finally have "y * x * x\<^sup>\ \ x * x\<^sup>\ * (y * x * x\<^sup>\) \ (x\<^sup>\ \ x * x\<^sup>\ \ x * x\<^sup>\ * y)" + using sup_assoc by force + hence "y * x * x\<^sup>\ \ (x * x\<^sup>\)\<^sup>\ \ (x * x\<^sup>\)\<^sup>\ * (x\<^sup>\ \ x * x\<^sup>\ \ x * x\<^sup>\ * y)" + by (simp add: omega_induct sup_monoid.add_commute) + also have "... = x\<^sup>\ \ x\<^sup>\ * (x\<^sup>\ \ x * x\<^sup>\ \ x * x\<^sup>\ * y)" + by (simp add: left_plus_omega star.left_plus_circ) + finally show ?thesis + using 1 by (metis while_def while_mult_star_exchange while_transitive) +qed + +lemma while_simulate_4_omega: + assumes "y * x \ x * (x \ (1 \ y))" + shows "y * x\<^sup>\ \ x\<^sup>\" +proof - + have "x * (x \ (1 \ y)) = x\<^sup>\ \ x * x\<^sup>\ \ x * x\<^sup>\ * y" + using mult_1_right mult_left_dist_sup omega_unfold sup_assoc while_def mult_assoc by auto + hence "y * x\<^sup>\ \ (x\<^sup>\ \ x * x\<^sup>\ \ x * x\<^sup>\ * y) * x\<^sup>\" + by (smt assms le_iff_sup mult_assoc mult_right_dist_sup omega_unfold) + also have "... = x\<^sup>\ * x\<^sup>\ \ x * x\<^sup>\ * x\<^sup>\ \ x * x\<^sup>\ * y * x\<^sup>\" + using semiring.distrib_right by auto + also have "... = x * x\<^sup>\ * (y * x\<^sup>\) \ x\<^sup>\" + by (metis sup_commute le_iff_sup mult_assoc omega_sub_vector omega_unfold star_mult_omega) + finally have "y * x\<^sup>\ \ x * x\<^sup>\ * (y * x\<^sup>\) \ x\<^sup>\" + . 
+ hence "y * x\<^sup>\ \ (x * x\<^sup>\)\<^sup>\ \ (x * x\<^sup>\)\<^sup>\ * x\<^sup>\" + by (simp add: omega_induct sup_commute) + thus ?thesis + by (metis sup_idem left_plus_omega star_mult_omega) +qed + +text \Theorem 13.11\ + +lemma while_unfold_below: + "x = z \ y * x \ x \ y \ z" + by (simp add: omega_induct while_def) + +text \Theorem 13.12\ + +lemma while_unfold_below_sub: + "x \ z \ y * x \ x \ y \ z" + by (simp add: omega_induct while_def) + +text \Theorem 13.10\ + +lemma while_unfold_below_1: + "x = y * x \ x \ y \ 1" + by (simp add: while_unfold_below_sub) + +lemma while_square_1: + "x \ 1 = (x * x) \ (x \ 1)" + by (metis mult_1_right omega_square star_square_2 while_def) + +lemma while_absorb_below_one: + "y * x \ x \ y \ x \ 1 \ x" + by (simp add: while_unfold_below_sub) + +lemma while_loop_is_greatest_postfixpoint: + "is_greatest_postfixpoint (\x . y * x \ z) (y \ z)" +proof - + have "(y \ z) \ (\x . y * x \ z) (y \ z)" + using sup_commute while_left_unfold by force + thus ?thesis + by (simp add: is_greatest_postfixpoint_def sup_commute while_unfold_below_sub) +qed + +lemma while_loop_is_greatest_fixpoint: + "is_greatest_fixpoint (\x . 
y * x \ z) (y \ z)" + by (simp add: omega_loop_is_greatest_fixpoint while_def) + +(* +lemma while_sumstar_4_below: "(x \ y) \ ((x \ 1) * z) \ x \ ((y * (x \ 1)) \ z)" nitpick [expect=genuine,card=6] oops +lemma while_sumstar_2: "(x \ y) \ z = x \ ((y * (x \ 1)) \ z)" nitpick [expect=genuine,card=6] oops +lemma while_sumstar_3: "(x \ y) \ z = ((x \ 1) * y) \ (x \ z)" oops +lemma while_decompose_6: "x \ ((y * (x \ 1)) \ z) = y \ ((x * (y \ 1)) \ z)" nitpick [expect=genuine,card=6] oops +lemma while_finite_associative: "x \ bot = bot \ (x \ y) * z = x \ (y * z)" oops +lemma atomicity_refinement: "s = s * q \ x = q * x \ q * b = bot \ r * b \ b * r \ r * l \ l * r \ x * l \ l * x \ b * l \ l * b \ q * l \ l * q \ r \ q \ q * (r \ 1) \ q \ 1 \ s * ((x \ b \ r \ l) \ (q * z)) \ s * ((x * (b \ q) \ r \ l) \ z)" oops + +lemma while_separate_right_plus: "y * x \ x * (x \ (1 \ y)) \ 1 \ y \ (x \ z) \ x \ (y \ z)" oops +lemma "y \ (x \ 1) \ x \ (y \ 1) \ (x \ y) \ 1 = x \ (y \ 1)" oops +lemma "y * x \ (1 \ x) * (y \ 1) \ (x \ y) \ 1 = x \ (y \ 1)" oops + +lemma while_mult_sub_while_while: "x \ (y * z) \ (x \ y) \ z" oops +lemma while_zero_zero: "(x \ bot) \ bot = x \ bot" oops +lemma while_denest_3: "(x \ w) \ (x \ bot) = (x \ w) \ bot" oops +lemma while_02: "x \ ((x \ w) \ ((x \ y) * z)) = (x \ w) \ ((x \ y) * z)" oops +lemma while_sumstar_3_below: "(x \ y) \ (x \ z) \ (x \ y) \ ((x \ 1) * z)" oops +lemma while_sumstar_1: "(x \ y) \ z = (x \ y) \ ((x \ 1) * z)" oops +lemma while_denest_4: "(x \ w) \ (x \ (y * z)) = (x \ w) \ ((x \ y) * z)" oops +*) + +end + +class nonstrict_itering_zero = nonstrict_itering + + assumes mult_right_zero: "x * bot = bot" +begin + +lemma while_finite_associative_2: + "x \ bot = bot \ (x \ y) * z = x \ (y * z)" + by (metis sup_bot_left sup_bot_right mult_assoc mult_right_zero while_def) + +text \Theorem 13 counterexamples\ + +(* +lemma while_mult_top: "(x * top) \ z = z \ x * top" nitpick [expect=genuine,card=3] oops +lemma 
tarski_mult_top_idempotent: "x * top = x * top * x * top" nitpick [expect=genuine,card=3] oops + +lemma while_denest_0: "w * (x \ (y * z)) \ (w * (x \ y)) \ (w * (x \ y) * z)" nitpick [expect=genuine,card=3] oops +lemma while_denest_1: "w * (x \ (y * z)) \ (w * (x \ y)) \ z" nitpick [expect=genuine,card=3] oops +lemma while_mult_zero_zero: "(x * (y \ bot)) \ bot = x * (y \ bot)" nitpick [expect=genuine,card=3] oops +lemma while_denest_2: "w * ((x \ (y * w)) \ z) = w * (((x \ y) * w) \ z)" nitpick [expect=genuine,card=3] oops +lemma while_denest_5: "w * ((x \ (y * w)) \ (x \ (y * z))) = w * (((x \ y) * w) \ ((x \ y) * z))" nitpick [expect=genuine,card=3] oops +lemma while_denest_6: "(w * (x \ y)) \ z = z \ w * ((x \ y * w) \ (y * z))" nitpick [expect=genuine,card=3] oops +lemma while_sum_below_one: "y * ((x \ y) \ z) \ (y * (x \ 1)) \ z" nitpick [expect=genuine,card=3] oops +lemma while_separate_unfold: "(y * (x \ 1)) \ z = (y \ z) \ (y \ (y * x * (x \ ((y * (x \ 1)) \ z))))" nitpick [expect=genuine,card=3] oops + +lemma while_sub_while_zero: "x \ z \ (x \ y) \ z" nitpick [expect=genuine,card=4] oops +lemma while_while_sub_associative: "x \ (y \ z) \ (x \ y) \ z" nitpick [expect=genuine,card=4] oops +lemma tarski: "x \ x * top * x * top" nitpick [expect=genuine,card=3] oops +lemma tarski_top_omega_below: "x * top \ (x * top)\<^sup>\" nitpick [expect=genuine,card=3] oops +lemma tarski_top_omega: "x * top = (x * top)\<^sup>\" nitpick [expect=genuine,card=3] oops +lemma tarski_below_top_omega: "x \ (x * top)\<^sup>\" nitpick [expect=genuine,card=3] oops +lemma tarski_mult_omega_omega: "(x * y\<^sup>\)\<^sup>\ = x * y\<^sup>\" nitpick [expect=genuine,card=3] oops +lemma tarski_mult_omega_omega: "(\z . 
z\<^sup>\\<^sup>\ = z\<^sup>\) \ (x * y\<^sup>\)\<^sup>\ = x * y\<^sup>\" nitpick [expect=genuine,card=3] oops +lemma tarski: "x = bot \ top * x * top = top" nitpick [expect=genuine,card=3] oops +*) + +end + +class nonstrict_itering_tarski = nonstrict_itering + + assumes tarski: "x \ x * top * x * top" +begin + +text \Theorem 13.14\ + +lemma tarski_mult_top_idempotent: + "x * top = x * top * x * top" + by (metis sup_commute le_iff_sup mult_assoc star.circ_back_loop_fixpoint star.circ_left_top tarski top_mult_top) + +lemma tarski_top_omega_below: + "x * top \ (x * top)\<^sup>\" + using omega_induct_mult order.refl mult_assoc tarski_mult_top_idempotent by auto + +lemma tarski_top_omega: + "x * top = (x * top)\<^sup>\" + by (simp add: order.eq_iff mult_top_omega tarski_top_omega_below) + +lemma tarski_below_top_omega: + "x \ (x * top)\<^sup>\" + using top_right_mult_increasing tarski_top_omega by auto + +lemma tarski_mult_omega_omega: + "(x * y\<^sup>\)\<^sup>\ = x * y\<^sup>\" + by (metis mult_assoc omega_vector tarski_top_omega) + +lemma tarski_omega_idempotent: + "x\<^sup>\\<^sup>\ = x\<^sup>\" + by (metis omega_vector tarski_top_omega) + +lemma while_denest_2a: + "w * ((x \ (y * w)) \ z) = w * (((x \ y) * w) \ z)" +proof - + have "(x\<^sup>\ \ x\<^sup>\ * y * w)\<^sup>\ = (x\<^sup>\ * y * w)\<^sup>\ * x\<^sup>\ * (((x\<^sup>\ * y * w)\<^sup>\ * x\<^sup>\)\<^sup>\ \ ((x\<^sup>\ * y * w)\<^sup>\ * x\<^sup>\)\<^sup>\ * (x\<^sup>\ * y * w)\<^sup>\) \ (x\<^sup>\ * y * w)\<^sup>\" + by (metis sup_commute omega_decompose omega_loop_fixpoint) + also have "... 
\ (x\<^sup>\ * y * w)\<^sup>\ * x\<^sup>\ \ (x\<^sup>\ * y * w)\<^sup>\" + by (metis sup_left_isotone mult_assoc mult_right_isotone omega_sub_vector) + finally have 1: "w * (x\<^sup>\ \ x\<^sup>\ * y * w)\<^sup>\ \ (w * x\<^sup>\ * y)\<^sup>\ * w * x\<^sup>\ \ (w * x\<^sup>\ * y)\<^sup>\" + by (smt sup_commute le_iff_sup mult_assoc mult_left_dist_sup while_def while_slide) + have "(x\<^sup>\ \ x\<^sup>\ * y * w)\<^sup>\ * z = (x\<^sup>\ * y * w)\<^sup>\ * x\<^sup>\ * ((x\<^sup>\ * y * w)\<^sup>\ * x\<^sup>\)\<^sup>\ * (x\<^sup>\ * y * w)\<^sup>\ * z \ (x\<^sup>\ * y * w)\<^sup>\ * z" + by (smt sup_commute mult_assoc star.circ_sup star.circ_loop_fixpoint) + also have "... \ (x\<^sup>\ * y * w)\<^sup>\ * x\<^sup>\ \ (x\<^sup>\ * y * w)\<^sup>\ * z" + by (smt sup_commute sup_right_isotone mult_assoc mult_right_isotone omega_sub_vector) + finally have "w * (x\<^sup>\ \ x\<^sup>\ * y * w)\<^sup>\ * z \ (w * x\<^sup>\ * y)\<^sup>\ * w * x\<^sup>\ \ (w * x\<^sup>\ * y)\<^sup>\ * w * z" + by (metis mult_assoc mult_left_dist_sup mult_right_isotone star.circ_slide) + hence "w * (x\<^sup>\ \ x\<^sup>\ * y * w)\<^sup>\ \ w * (x\<^sup>\ \ x\<^sup>\ * y * w)\<^sup>\ * z \ (w * x\<^sup>\ * y)\<^sup>\ * (w * x\<^sup>\)\<^sup>\ \ (w * x\<^sup>\ * y)\<^sup>\ \ (w * x\<^sup>\ * y)\<^sup>\ * w * z" + using 1 by (smt sup_assoc sup_commute le_iff_sup mult_assoc tarski_mult_omega_omega) + also have "... \ (w * x\<^sup>\ \ w * x\<^sup>\ * y)\<^sup>\ * (w * x\<^sup>\ \ w * x\<^sup>\ * y)\<^sup>\ \ (w * x\<^sup>\ \ w * x\<^sup>\ * y)\<^sup>\ \ (w * x\<^sup>\ \ w * x\<^sup>\ * y)\<^sup>\ * w * z" + by (metis sup_mono sup_ge1 sup_ge2 mult_isotone mult_left_isotone omega_isotone star.circ_isotone) + also have "... 
= (w * x\<^sup>\ \ w * x\<^sup>\ * y)\<^sup>\ \ (w * x\<^sup>\ \ w * x\<^sup>\ * y)\<^sup>\ * w * z" + by (simp add: star_mult_omega) + finally have "w * ((x\<^sup>\ \ x\<^sup>\ * y * w)\<^sup>\ \ (x\<^sup>\ \ x\<^sup>\ * y * w)\<^sup>\ * z) \ w * ((x\<^sup>\ \ x\<^sup>\ * y) * w)\<^sup>\ \ w * ((x\<^sup>\ \ x\<^sup>\ * y) * w)\<^sup>\ * z" + by (smt mult_assoc mult_left_dist_sup omega_slide star.circ_slide) + hence 2: "w * ((x \ (y * w)) \ z) \ w * (((x \ y) * w) \ z)" + by (simp add: mult_left_dist_sup while_def mult_assoc) + have "w * (((x \ y) * w) \ z) \ w * ((x \ (y * w)) \ z)" + by (simp add: mult_right_isotone while_left_isotone while_sub_associative) + thus ?thesis + using 2 order.antisym by auto +qed + +lemma while_denest_3: + "(x \ w) \ x\<^sup>\ = (x \ w)\<^sup>\" +proof - + have 1: "(x \ w) \ x\<^sup>\ = (x \ w)\<^sup>\ \ (x \ w)\<^sup>\ * x\<^sup>\\<^sup>\" + by (simp add: while_def tarski_omega_idempotent) + also have "... \ (x \ w)\<^sup>\ \ (x \ w)\<^sup>\ * (x\<^sup>\ \ x\<^sup>\ * w)\<^sup>\" + using mult_right_isotone omega_sub_dist semiring.add_left_mono by blast + also have "... = (x \ w)\<^sup>\" + by (simp add: star_mult_omega while_def) + finally show ?thesis + using 1 by (simp add: sup.order_iff) +qed + +lemma while_denest_4a: + "(x \ w) \ (x \ (y * z)) = (x \ w) \ ((x \ y) * z)" +proof - + have "(x \ w) \ (x \ (y * z)) = (x \ w)\<^sup>\ \ ((x \ w) \ (x\<^sup>\ * y * z))" + using while_def while_denest_3 while_left_dist_sup mult_assoc by auto + also have "... 
\ (x \ w)\<^sup>\ \ ((x \ w) \ ((x \ y) * z))" + using mult_right_sub_dist_sup_right order.refl semiring.add_mono while_def while_right_isotone by auto + finally have 1: "(x \ w) \ (x \ (y * z)) \ (x \ w) \ ((x \ y) * z)" + by (simp add: while_def) + have "(x \ w) \ ((x \ y) * z) \ (x \ w) \ (x \ (y * z))" + by (simp add: while_right_isotone while_sub_associative) + thus ?thesis + using 1 order.antisym by auto +qed + +text \Theorem 8.3\ + +subclass bounded_extended_binary_itering + apply unfold_locales + by (smt mult_assoc while_denest_2a while_denest_4a while_increasing while_slide) + +text \Theorem 13.13\ + +lemma while_mult_top: + "(x * top) \ z = z \ x * top" +proof - + have 1: "z \ x * top \ (x * top) \ z" + by (metis le_supI sup_ge1 while_def while_increasing tarski_top_omega) + have "(x * top) \ z = z \ x * top * ((x * top) \ z)" + using while_left_unfold by auto + also have "... \ z \ x * top" + using mult_right_isotone sup_right_isotone top_greatest mult_assoc by auto + finally show ?thesis + using 1 order.antisym by auto +qed + +lemma tarski_top_omega_below_2: + "x * top \ (x * top) \ bot" + by (simp add: while_mult_top) + +lemma tarski_top_omega_2: + "x * top = (x * top) \ bot" + by (simp add: while_mult_top) + +lemma tarski_below_top_omega_2: + "x \ (x * top) \ bot" + using top_right_mult_increasing tarski_top_omega_2 by auto + +(* +lemma "1 = (x * bot) \ 1" nitpick [expect=genuine,card=3] oops +*) + +end + +class nonstrict_itering_tarski_zero = nonstrict_itering_tarski + nonstrict_itering_zero +begin + +lemma while_bot_1: + "1 = (x * bot) \ 1" + by (simp add: mult_right_zero while_zero) + +text \Theorem 13 counterexamples\ + +(* +lemma while_associative: "(x \ y) * z = x \ (y * z)" nitpick [expect=genuine,card=2] oops +lemma "(x \ 1) * y = x \ y" nitpick [expect=genuine,card=2] oops +lemma while_one_mult: "(x \ 1) * x = x \ x" nitpick [expect=genuine,card=4] oops +lemma "(x \ y) \ z = ((x \ 1) * y) \ ((x \ 1) * z)" nitpick [expect=genuine,card=2] oops 
+lemma while_mult_top_2: "(x * top) \ z = z \ x * top * z" nitpick [expect=genuine,card=2] oops +lemma while_top_2: "top \ z = top * z" nitpick [expect=genuine,card=2] oops + +lemma tarski: "x = bot \ top * x * top = top" nitpick [expect=genuine,card=3] oops +lemma while_back_loop_is_fixpoint: "is_fixpoint (\x . x * y \ z) (z * (y \ 1))" nitpick [expect=genuine,card=4] oops +lemma "1 \ x * bot = x \ 1" nitpick [expect=genuine,card=3] oops +lemma "x = x * (x \ 1)" nitpick [expect=genuine,card=3] oops +lemma "x * (x \ 1) = x \ 1" nitpick [expect=genuine,card=2] oops +lemma "x \ 1 = x \ (1 \ 1)" nitpick [expect=genuine,card=3] oops +lemma "(x \ y) \ 1 = (x \ (y \ 1)) \ 1" nitpick [expect=genuine,card=3] oops +lemma "z \ y * x = x \ y \ z \ x" nitpick [expect=genuine,card=2] oops +lemma "y * x = x ==> y \ x \ x" nitpick [expect=genuine,card=2] oops +lemma "z \ x * y = x \ z * (y \ 1) \ x" nitpick [expect=genuine,card=3] oops +lemma "x * y = x \ x * (y \ 1) \ x" nitpick [expect=genuine,card=3] oops +lemma "x * z = z * y \ x \ z \ z * (y \ 1)" nitpick [expect=genuine,card=2] oops + +lemma tarski: "x = bot \ top * x * top = top" nitpick [expect=genuine,card=3] oops +lemma tarski_case: assumes t1: "x = bot \ P x" and t2: "top * x * top = top \ P x" shows "P x" nitpick [expect=genuine,card=3] oops +*) + +end + +end + diff --git a/thys/Correctness_Algebras/Binary_Iterings_Strict.thy b/thys/Correctness_Algebras/Binary_Iterings_Strict.thy new file mode 100644 --- /dev/null +++ b/thys/Correctness_Algebras/Binary_Iterings_Strict.thy @@ -0,0 +1,157 @@ +(* Title: Strict Binary Iterings + Author: Walter Guttmann + Maintainer: Walter Guttmann +*) + +section \Strict Binary Iterings\ + +theory Binary_Iterings_Strict + +imports Stone_Kleene_Relation_Algebras.Iterings Binary_Iterings + +begin + +class strict_itering = itering + while + + assumes while_def: "x \ y = x\<^sup>\ * y" +begin + +text \Theorem 8.1\ + +subclass extended_binary_itering + apply unfold_locales + apply (metis 
circ_loop_fixpoint circ_slide_1 sup_commute while_def mult_assoc) + apply (metis circ_sup mult_assoc while_def) + apply (simp add: mult_left_dist_sup while_def) + apply (simp add: while_def mult_assoc) + apply (metis circ_simulate_left_plus mult_assoc mult_left_isotone mult_right_dist_sup mult_1_right while_def) + apply (metis circ_simulate_right_plus mult_assoc mult_left_isotone mult_right_dist_sup while_def) + by (metis circ_loop_fixpoint mult_right_sub_dist_sup_right while_def mult_assoc) + +text \Theorem 13.1\ + +lemma while_associative: + "(x \ y) * z = x \ (y * z)" + by (simp add: while_def mult_assoc) + +text \Theorem 13.3\ + +lemma while_one_mult: + "(x \ 1) * x = x \ x" + by (simp add: while_def) + +lemma while_back_loop_is_fixpoint: + "is_fixpoint (\x . x * y \ z) (z * (y \ 1))" + by (simp add: circ_back_loop_is_fixpoint while_def) + +text \Theorem 13.4\ + +lemma while_sumstar_var: + "(x \ y) \ z = ((x \ 1) * y) \ ((x \ 1) * z)" + by (simp add: while_sumstar_3 while_associative) + +text \Theorem 13.2\ + +lemma while_mult_1_assoc: + "(x \ 1) * y = x \ y" + by (simp add: while_def) + +(* +lemma "y \ (x \ 1) \ x \ (y \ 1) \ (x \ y) \ 1 = x \ (y \ 1)" oops +lemma "y * x \ (1 \ x) * (y \ 1) \ (x \ y) \ 1 = x \ (y \ 1)" oops +lemma while_square_1: "x \ 1 = (x * x) \ (x \ 1)" oops +lemma while_absorb_below_one: "y * x \ x \ y \ x \ 1 \ x" oops +*) + +end + +class bounded_strict_itering = bounded_itering + strict_itering +begin + +subclass bounded_extended_binary_itering .. 
+ +text \Theorem 13.6\ + +lemma while_top_2: + "top \ z = top * z" + by (simp add: circ_top while_def) + +text \Theorem 13.5\ + +lemma while_mult_top_2: + "(x * top) \ z = z \ x * top * z" + by (metis circ_left_top mult_assoc while_def while_left_unfold) + +text \Theorem 13 counterexamples\ + +(* +lemma while_one_top: "1 \ x = top" nitpick [expect=genuine,card=2] oops +lemma while_top: "top \ x = top" nitpick [expect=genuine,card=2] oops +lemma while_sub_mult_one: "x * (1 \ y) \ 1 \ x" oops +lemma while_unfold_below_1: "x = y * x \ x \ y \ 1" oops +lemma while_unfold_below: "x = z \ y * x \ x \ y \ z" nitpick [expect=genuine,card=2] oops +lemma while_unfold_below: "x \ z \ y * x \ x \ y \ z" nitpick [expect=genuine,card=2] oops +lemma while_mult_top: "(x * top) \ z = z \ x * top" nitpick [expect=genuine,card=2] oops +lemma tarski_mult_top_idempotent: "x * top = x * top * x * top" oops + +lemma while_loop_is_greatest_postfixpoint: "is_greatest_postfixpoint (\x . y * x \ z) (y \ z)" nitpick [expect=genuine,card=2] oops +lemma while_loop_is_greatest_fixpoint: "is_greatest_fixpoint (\x . 
y * x \ z) (y \ z)" nitpick [expect=genuine,card=2] oops +lemma while_sub_while_zero: "x \ z \ (x \ y) \ z" oops +lemma while_while_sub_associative: "x \ (y \ z) \ (x \ y) \ z" oops +lemma tarski: "x \ x * top * x * top" oops +lemma tarski_top_omega_below: "x * top \ (x * top) \ bot" nitpick [expect=genuine,card=2] oops +lemma tarski_top_omega: "x * top = (x * top) \ bot" nitpick [expect=genuine,card=2] oops +lemma tarski_below_top_omega: "x \ (x * top) \ bot" nitpick [expect=genuine,card=2] oops +lemma tarski: "x = bot \ top * x * top = top" oops +lemma "1 = (x * bot) \ 1" oops +lemma "1 \ x * bot = x \ 1" oops +lemma "x = x * (x \ 1)" oops +lemma "x * (x \ 1) = x \ 1" oops +lemma "x \ 1 = x \ (1 \ 1)" oops +lemma "(x \ y) \ 1 = (x \ (y \ 1)) \ 1" oops +lemma "z \ y * x = x \ y \ z \ x" oops +lemma "y * x = x \ y \ x \ x" oops +lemma "z \ x * y = x \ z * (y \ 1) \ x" oops +lemma "x * y = x \ x * (y \ 1) \ x" oops +lemma "x * z = z * y \ x \ z \ z * (y \ 1)" oops +*) + +end + +class binary_itering_unary = extended_binary_itering + circ + + assumes circ_def: "x\<^sup>\ = x \ 1" +begin + +text \Theorem 50.7\ + +subclass left_conway_semiring + apply unfold_locales + using circ_def while_left_unfold apply simp + apply (metis circ_def mult_1_right while_one_mult_below while_slide) + using circ_def while_one_while while_sumstar_2 by auto + +end + +class strict_binary_itering = binary_itering + circ + + assumes while_associative: "(x \ y) * z = x \ (y * z)" + assumes circ_def: "x\<^sup>\ = x \ 1" +begin + +text \Theorem 2.8\ + +subclass itering + apply unfold_locales + apply (simp add: circ_def while_associative while_sumstar) + apply (metis circ_def mult_1_right while_associative while_productstar while_slide) + apply (metis circ_def mult_1_right while_associative mult_1_left while_slide while_simulate_right_plus) + by (metis circ_def mult_1_right while_associative mult_1_left while_simulate_left_plus mult_right_dist_sup) + +text \Theorem 8.5\ + +subclass 
extended_binary_itering + apply unfold_locales + by (simp add: while_associative while_increasing mult_assoc) + +end + +end + diff --git a/thys/Correctness_Algebras/Boolean_Semirings.thy b/thys/Correctness_Algebras/Boolean_Semirings.thy new file mode 100644 --- /dev/null +++ b/thys/Correctness_Algebras/Boolean_Semirings.thy @@ -0,0 +1,621 @@ +(* Title: Boolean Semirings + Author: Walter Guttmann + Maintainer: Walter Guttmann +*) + +section \Boolean Semirings\ + +theory Boolean_Semirings + +imports Stone_Algebras.P_Algebras Lattice_Ordered_Semirings + +begin + +class complemented_distributive_lattice = bounded_distrib_lattice + uminus + + assumes inf_complement: "x \ (-x) = bot" + assumes sup_complement: "x \ (-x) = top" +begin + +sublocale boolean_algebra where minus = "\x y . x \ (-y)" and inf = inf and sup = sup and bot = bot and top = top + apply unfold_locales + apply (simp add: inf_complement) + apply (simp add: sup_complement) + by simp + +end + +text \M0-algebra\ + +context lattice_ordered_pre_left_semiring +begin + +text \Section 7\ + +lemma vector_1: + "vector x \ x * top \ x" + by (simp add: antisym_conv top_right_mult_increasing) + +definition zero_vector :: "'a \ bool" where "zero_vector x \ x \ x * bot" +definition one_vector :: "'a \ bool" where "one_vector x \ x * bot \ x" + +lemma zero_vector_left_zero: + assumes "zero_vector x" + shows "x * y = x * bot" +proof - + have "x * y \ x * bot" + by (metis assms mult_isotone top.extremum vector_mult_closed zero_vector zero_vector_def) + thus ?thesis + by (simp add: order.antisym mult_right_isotone) +qed + +lemma zero_vector_1: + "zero_vector x \ (\y . x * y = x * bot)" + by (metis top_right_mult_increasing zero_vector_def zero_vector_left_zero) + +lemma zero_vector_2: + "zero_vector x \ (\y . 
x * y \ x * bot)" + by (metis eq_refl order_trans top_right_mult_increasing zero_vector_def zero_vector_left_zero) + +lemma zero_vector_3: + "zero_vector x \ x * 1 = x * bot" + by (metis mult_sub_right_one zero_vector_def zero_vector_left_zero) + +lemma zero_vector_4: + "zero_vector x \ x * 1 \ x * bot" + using order.antisym mult_right_isotone zero_vector_3 by auto + +lemma zero_vector_5: + "zero_vector x \ x * top = x * bot" + by (metis top_right_mult_increasing zero_vector_def zero_vector_left_zero) + +lemma zero_vector_6: + "zero_vector x \ x * top \ x * bot" + by (meson mult_right_isotone order_trans top.extremum zero_vector_2) + +lemma zero_vector_7: + "zero_vector x \ (\y . x * top = x * y)" + by (metis zero_vector_1) + +lemma zero_vector_8: + "zero_vector x \ (\y . x * top \ x * y)" + by (metis zero_vector_6 zero_vector_left_zero) + +lemma zero_vector_9: + "zero_vector x \ (\y . x * 1 = x * y)" + by (metis zero_vector_1) + +lemma zero_vector_0: + "zero_vector x \ (\y z . x * y = x * z)" + by (metis zero_vector_5 zero_vector_left_zero) + +text \Theorem 6 / Figure 2: relations between properties\ + +lemma co_vector_zero_vector_one_vector: + "co_vector x \ zero_vector x \ one_vector x" + using co_vector_def one_vector_def zero_vector_def by auto + +lemma up_closed_one_vector: + "up_closed x \ one_vector x" + by (metis bot_least mult_right_isotone up_closed_def one_vector_def) + +lemma zero_vector_dense: + "zero_vector x \ dense_rel x" + by (metis zero_vector_0 zero_vector_def) + +lemma zero_vector_sup_distributive: + "zero_vector x \ sup_distributive x" + by (metis sup_distributive_def sup_idem zero_vector_0) + +lemma zero_vector_inf_distributive: + "zero_vector x \ inf_distributive x" + by (metis inf_idem inf_distributive_def zero_vector_0) + +lemma up_closed_zero_vector_vector: + "up_closed x \ zero_vector x \ vector x" + by (metis up_closed_def zero_vector_0) + +lemma zero_vector_one_vector_vector: + "zero_vector x \ one_vector x \ vector x" + by (metis 
one_vector_def vector_1 zero_vector_0) + +lemma co_vector_vector: + "co_vector x \ vector x" + by (simp add: co_vector_zero_vector_one_vector zero_vector_one_vector_vector) + +text \Theorem 10 / Figure 3: closure properties\ + +text \zero-vector\ + +lemma zero_zero_vector: + "zero_vector bot" + by (simp add: zero_vector_def) + +lemma sup_zero_vector: + "zero_vector x \ zero_vector y \ zero_vector (x \ y)" + by (simp add: mult_right_dist_sup zero_vector_3) + +lemma comp_zero_vector: + "zero_vector x \ zero_vector y \ zero_vector (x * y)" + by (metis mult_one_associative zero_vector_0) + +text \one-vector\ + +lemma zero_one_vector: + "one_vector bot" + by (simp add: one_vector_def) + +lemma one_one_vector: + "one_vector 1" + by (simp add: one_up_closed up_closed_one_vector) + +lemma top_one_vector: + "one_vector top" + by (simp add: one_vector_def) + +lemma sup_one_vector: + "one_vector x \ one_vector y \ one_vector (x \ y)" + by (simp add: mult_right_dist_sup order_trans one_vector_def) + +lemma inf_one_vector: + "one_vector x \ one_vector y \ one_vector (x \ y)" + by (meson order.trans inf.boundedI mult_right_sub_dist_inf_left mult_right_sub_dist_inf_right one_vector_def) + +lemma comp_one_vector: + "one_vector x \ one_vector y \ one_vector (x * y)" + using mult_isotone mult_semi_associative order_lesseq_imp one_vector_def by blast + +end + +context multirelation_algebra_1 +begin + +text \Theorem 10 / Figure 3: closure properties\ + +text \zero-vector\ + +lemma top_zero_vector: + "zero_vector top" + by (simp add: mult_left_top zero_vector_def) + +end + +text \M1-algebra\ + +context multirelation_algebra_2 +begin + +text \Section 7\ + +lemma zero_vector_10: + "zero_vector x \ x * top = x * 1" + by (metis mult_one_associative mult_top_associative zero_vector_7) + +lemma zero_vector_11: + "zero_vector x \ x * top \ x * 1" + using order.antisym mult_right_isotone zero_vector_10 by fastforce + +text \Theorem 6 / Figure 2: relations between properties\ + +lemma 
vector_zero_vector: + "vector x \ zero_vector x" + by (simp add: zero_vector_def vector_left_annihilator) + +lemma vector_up_closed_zero_vector: + "vector x \ up_closed x \ zero_vector x" + using up_closed_zero_vector_vector vector_up_closed vector_zero_vector by blast + +lemma vector_zero_vector_one_vector: + "vector x \ zero_vector x \ one_vector x" + by (simp add: co_vector_zero_vector_one_vector vector_co_vector) + +(* +lemma "(x * bot \ y) * 1 = x * bot \ y * 1" nitpick [expect=genuine,card=7] oops +*) + +end + +text \M3-algebra\ + +context up_closed_multirelation_algebra +begin + +lemma up_closed: + "up_closed x" + by (simp add: up_closed_def) + +lemma dedekind_1_left: + "x * 1 \ y \ (x \ y * 1) * 1" + by simp + +text \Theorem 10 / Figure 3: closure properties\ + +text \zero-vector\ + +lemma zero_vector_dual: + "zero_vector x \ zero_vector (x\<^sup>d)" + using up_closed_zero_vector_vector vector_dual vector_zero_vector up_closed by blast + +end + +text \complemented M0-algebra\ + +class lattice_ordered_pre_left_semiring_b = lattice_ordered_pre_left_semiring + complemented_distributive_lattice +begin + +definition down_closed :: "'a \ bool" where "down_closed x \ -x * 1 \ -x" + +text \Theorem 10 / Figure 3: closure properties\ + +text \down-closed\ + +lemma zero_down_closed: + "down_closed bot" + by (simp add: down_closed_def) + +lemma top_down_closed: + "down_closed top" + by (simp add: down_closed_def) + +lemma complement_down_closed_up_closed: + "down_closed x \ up_closed (-x)" + using down_closed_def order.antisym mult_sub_right_one up_closed_def by auto + +lemma sup_down_closed: + "down_closed x \ down_closed y \ down_closed (x \ y)" + by (simp add: complement_down_closed_up_closed inf_up_closed) + +lemma inf_down_closed: + "down_closed x \ down_closed y \ down_closed (x \ y)" + by (simp add: complement_down_closed_up_closed sup_up_closed) + +end + +class multirelation_algebra_1b = multirelation_algebra_1 + complemented_distributive_lattice +begin + 
+subclass lattice_ordered_pre_left_semiring_b .. + +text \Theorem 7.1\ + +lemma complement_mult_zero_sub: + "-(x * bot) \ -x * bot" +proof - + have "top = -x * bot \ x * bot" + by (metis compl_sup_top mult_left_top mult_right_dist_sup) + thus ?thesis + by (simp add: heyting.implies_order sup.commute) +qed + +text \Theorem 7.2\ + +lemma transitive_zero_vector_complement: + "transitive x \ zero_vector (-x)" + by (meson complement_mult_zero_sub compl_mono mult_right_isotone order_trans zero_vector_def bot_least) + +lemma transitive_dense_complement: + "transitive x \ dense_rel (-x)" + by (simp add: zero_vector_dense transitive_zero_vector_complement) + +lemma transitive_sup_distributive_complement: + "transitive x \ sup_distributive (-x)" + by (simp add: zero_vector_sup_distributive transitive_zero_vector_complement) + +lemma transitive_inf_distributive_complement: + "transitive x \ inf_distributive (-x)" + by (simp add: zero_vector_inf_distributive transitive_zero_vector_complement) + +lemma up_closed_zero_vector_complement: + "up_closed x \ zero_vector (-x)" + by (meson complement_mult_zero_sub compl_le_swap2 one_vector_def order_trans up_closed_one_vector zero_vector_def) + +lemma up_closed_dense_complement: + "up_closed x \ dense_rel (-x)" + by (simp add: zero_vector_dense up_closed_zero_vector_complement) + +lemma up_closed_sup_distributive_complement: + "up_closed x \ sup_distributive (-x)" + by (simp add: zero_vector_sup_distributive up_closed_zero_vector_complement) + +lemma up_closed_inf_distributive_complement: + "up_closed x \ inf_distributive (-x)" + by (simp add: zero_vector_inf_distributive up_closed_zero_vector_complement) + +text \Theorem 10 / Figure 3: closure properties\ + +text \closure under complement\ + +lemma co_total_total: + "co_total x \ total (-x)" + by (metis complement_mult_zero_sub co_total_def compl_bot_eq mult_left_sub_dist_sup_right sup_bot_right top_le) + +lemma complement_one_vector_zero_vector: + "one_vector x \ zero_vector (-x)" + 
using compl_mono complement_mult_zero_sub one_vector_def order_trans zero_vector_def by blast + +text \Theorem 6 / Figure 2: relations between properties\ + +lemma down_closed_zero_vector: + "down_closed x \ zero_vector x" + using complement_down_closed_up_closed up_closed_zero_vector_complement by force + +lemma down_closed_one_vector_vector: + "down_closed x \ one_vector x \ vector x" + by (simp add: down_closed_zero_vector zero_vector_one_vector_vector) + +(* +lemma complement_vector: "vector x \ vector (-x)" nitpick [expect=genuine,card=8] oops +*) + +end + +class multirelation_algebra_1c = multirelation_algebra_1b + + assumes dedekind_top_left: "x * top \ y \ (x \ y * top) * top" + assumes comp_zero_inf: "(x * bot \ y) * bot \ (x \ y) * bot" +begin + +text \Theorem 7.3\ + +lemma schroeder_top_sub: + "-(x * top) * top \ -x" +proof - + have "-(x * top) * top \ x \ bot" + by (metis dedekind_top_left p_inf zero_vector) + thus ?thesis + by (simp add: shunting_1) +qed + +text \Theorem 7.4\ + +lemma schroeder_top: + "x * top \ y \ -y * top \ -x" + apply (rule iffI) + using compl_mono inf.order_trans mult_left_isotone schroeder_top_sub apply blast + by (metis compl_mono double_compl mult_left_isotone order_trans schroeder_top_sub) + +text \Theorem 7.5\ + +lemma schroeder_top_eq: + "-(x * top) * top = -(x * top)" + using vector_1 vector_mult_closed vector_top_closed schroeder_top by auto + +lemma schroeder_one_eq: + "-(x * top) * 1 = -(x * top)" + by (metis top_mult_right_one schroeder_top_eq) + +text \Theorem 7.6\ + +lemma vector_inf_comp: + "x * top \ y * z = (x * top \ y) * z" +proof (rule order.antisym) + have "x * top \ y * z = x * top \ ((x * top \ y) \ (-(x * top) \ y)) * z" + by (simp add: inf_commute) + also have "... = x * top \ ((x * top \ y) * z \ (-(x * top) \ y) * z)" + by (simp add: inf_sup_distrib2 mult_right_dist_sup) + also have "... = (x * top \ (x * top \ y) * z) \ (x * top \ (-(x * top) \ y) * z)" + by (simp add: inf_sup_distrib1) + also have "... 
\ (x * top \ y) * z \ (x * top \ (-(x * top) \ y) * z)" + by (simp add: le_infI2) + also have "... \ (x * top \ y) * z \ (x * top \ -(x * top) * z)" + by (metis inf.sup_left_isotone inf_commute mult_right_sub_dist_inf_left sup_right_isotone) + also have "... \ (x * top \ y) * z \ (x * top \ -(x * top) * top)" + using inf.sup_right_isotone mult_right_isotone sup_right_isotone by auto + also have "... = (x * top \ y) * z" + by (simp add: schroeder_top_eq) + finally show "x * top \ y * z \ (x * top \ y) * z" + . +next + show "(x * top \ y) * z \ x * top \ y * z" + by (metis inf.bounded_iff mult_left_top mult_right_sub_dist_inf_left mult_right_sub_dist_inf_right mult_semi_associative order_lesseq_imp) +qed + +(* +lemma dedekind_top_left: + "x * top \ y \ (x \ y * top) * top" + by (metis inf.commute top_right_mult_increasing vector_inf_comp) +*) + +text \Theorem 7.7\ + +lemma vector_zero_inf_comp: + "(x * bot \ y) * z = x * bot \ y * z" + by (metis vector_inf_comp vector_mult_closed zero_vector) + +lemma vector_zero_inf_comp_2: + "(x * bot \ y) * z = (x * bot \ y * 1) * z" + by (simp add: vector_zero_inf_comp) + +text \Theorem 7.8\ + +lemma comp_zero_inf_2: + "x * bot \ y * bot = (x \ y) * bot" + using order.antisym mult_right_sub_dist_inf comp_zero_inf vector_zero_inf_comp by auto + +lemma comp_zero_inf_3: + "x * bot \ y * bot = (x * bot \ y) * bot" + by (simp add: vector_zero_inf_comp) + +lemma comp_zero_inf_4: + "x * bot \ y * bot = (x * bot \ y * bot) * bot" + by (metis comp_zero_inf_2 inf.commute vector_zero_inf_comp) + +lemma comp_zero_inf_5: + "x * bot \ y * bot = (x * 1 \ y * 1) * bot" + by (metis comp_zero_inf_2 mult_one_associative) + +lemma comp_zero_inf_6: + "x * bot \ y * bot = (x * 1 \ y * bot) * bot" + using inf.sup_monoid.add_commute vector_zero_inf_comp by fastforce + +lemma comp_zero_inf_7: + "x * bot \ y * bot = (x * 1 \ y) * bot" + by (metis comp_zero_inf_2 mult_one_associative) + +text \Theorem 10 / Figure 3: closure properties\ + +text 
\zero-vector\ + +lemma inf_zero_vector: + "zero_vector x \ zero_vector y \ zero_vector (x \ y)" + by (metis comp_zero_inf_2 inf.sup_mono zero_vector_def) + +text \down-closed\ + +lemma comp_down_closed: + "down_closed x \ down_closed y \ down_closed (x * y)" + by (metis complement_down_closed_up_closed down_closed_zero_vector up_closed_def zero_vector_0 schroeder_one_eq) + +text \closure under complement\ + +lemma complement_vector: + "vector x \ vector (-x)" + using vector_1 schroeder_top by blast + +lemma complement_zero_vector_one_vector: + "zero_vector x \ one_vector (-x)" + by (metis comp_zero_inf_2 order.antisym complement_mult_zero_sub double_compl inf.sup_monoid.add_commute mult_left_zero one_vector_def order.refl pseudo_complement top_right_mult_increasing zero_vector_0) + +lemma complement_zero_vector_one_vector_iff: + "zero_vector x \ one_vector (-x)" + using complement_zero_vector_one_vector complement_one_vector_zero_vector by force + +lemma complement_one_vector_zero_vector_iff: + "one_vector x \ zero_vector (-x)" + using complement_zero_vector_one_vector complement_one_vector_zero_vector by force + +text \Theorem 6 / Figure 2: relations between properties\ + +lemma vector_down_closed: + "vector x \ down_closed x" + using complement_vector complement_down_closed_up_closed vector_up_closed by blast + +lemma co_vector_down_closed: + "co_vector x \ down_closed x" + by (simp add: co_vector_vector vector_down_closed) + +lemma vector_down_closed_one_vector: + "vector x \ down_closed x \ one_vector x" + using down_closed_one_vector_vector up_closed_one_vector vector_up_closed vector_down_closed by blast + +lemma vector_up_closed_down_closed: + "vector x \ up_closed x \ down_closed x" + using down_closed_zero_vector up_closed_zero_vector_vector vector_up_closed vector_down_closed by blast + +text \Section 7\ + +lemma vector_b1: + "vector x \ -x * top = -x" + using complement_vector by auto + +lemma vector_b2: + "vector x \ -x * bot = -x" + by (metis 
down_closed_zero_vector vector_mult_closed zero_vector zero_vector_left_zero vector_b1 vector_down_closed) + +lemma covector_b1: + "co_vector x \ -x * top = -x" + using co_vector_def co_vector_vector vector_b1 vector_b2 by force + +lemma covector_b2: + "co_vector x \ -x * bot = -x" + using covector_b1 vector_b1 vector_b2 by auto + +lemma vector_co_vector_iff: + "vector x \ co_vector x" + by (simp add: covector_b2 vector_b2) + +lemma zero_vector_b: + "zero_vector x \ -x * bot \ -x" + by (simp add: complement_zero_vector_one_vector_iff one_vector_def) + +lemma one_vector_b1: + "one_vector x \ -x \ -x * bot" + by (simp add: complement_one_vector_zero_vector_iff zero_vector_def) + +lemma one_vector_b0: + "one_vector x \ (\y z . -x * y = -x * z)" + by (simp add: complement_one_vector_zero_vector_iff zero_vector_0) + +(* +lemma schroeder_one: "x * -1 \ y \ -y * -1 \ -x" nitpick [expect=genuine,card=8] oops +*) + +end + +class multirelation_algebra_2b = multirelation_algebra_2 + complemented_distributive_lattice +begin + +subclass multirelation_algebra_1b .. + +(* +lemma "-x * bot \ -(x * bot)" nitpick [expect=genuine,card=8] oops +*) + +end + +text \complemented M1-algebra\ + +class multirelation_algebra_2c = multirelation_algebra_2b + multirelation_algebra_1c + +class multirelation_algebra_3b = multirelation_algebra_3 + complemented_distributive_lattice +begin + +subclass lattice_ordered_pre_left_semiring_b .. + +lemma dual_complement_commute: + "-(x\<^sup>d) = (-x)\<^sup>d" + by (metis compl_unique dual_dist_sup dual_dist_inf dual_top dual_zero inf_complement sup_compl_top) + +end + +text \complemented M2-algebra\ + +class multirelation_algebra_5b = multirelation_algebra_5 + complemented_distributive_lattice +begin + +subclass multirelation_algebra_2b .. + +subclass multirelation_algebra_3b .. 
+ +lemma dual_down_closed: + "down_closed x \ down_closed (x\<^sup>d)" + using complement_down_closed_up_closed dual_complement_commute dual_up_closed by auto + +end + +class multirelation_algebra_5c = multirelation_algebra_5b + multirelation_algebra_1c +begin + +lemma complement_mult_zero_below: + "-x * bot \ -(x * bot)" + by (simp add: comp_zero_inf_2 shunting_1) + +(* +lemma "x * 1 \ y * 1 \ (x \ y) * 1" nitpick [expect=genuine,card=4] oops +lemma "x * 1 \ (y * 1) \ (x * 1 \ y) * 1" nitpick [expect=genuine,card=4] oops +*) + +end + +class up_closed_multirelation_algebra_b = up_closed_multirelation_algebra + complemented_distributive_lattice +begin + +subclass multirelation_algebra_5c + apply unfold_locales + apply (metis inf.sup_monoid.add_commute top_right_mult_increasing vector_inf_comp) + using mult_right_dist_inf vector_zero_inf_comp by auto + +lemma complement_zero_vector: + "zero_vector x \ zero_vector (-x)" + by (simp add: zero_right_mult_decreasing zero_vector_b) + +lemma down_closed: + "down_closed x" + by (simp add: down_closed_def) + +lemma vector: + "vector x" + by (simp add: down_closed up_closed_def vector_up_closed_down_closed) + +end + +end + diff --git a/thys/Correctness_Algebras/Capped_Omega_Algebras.thy b/thys/Correctness_Algebras/Capped_Omega_Algebras.thy new file mode 100644 --- /dev/null +++ b/thys/Correctness_Algebras/Capped_Omega_Algebras.thy @@ -0,0 +1,312 @@ +(* Title: Capped Omega Algebras + Author: Walter Guttmann + Maintainer: Walter Guttmann +*) + +section \Capped Omega Algebras\ + +theory Capped_Omega_Algebras + +imports Omega_Algebras + +begin + +class capped_omega = + fixes capped_omega :: "'a \ 'a \ 'a" ("_\<^sup>\\<^sub>_" [100,100] 100) + +class capped_omega_algebra = bounded_left_zero_kleene_algebra + bounded_distrib_lattice + capped_omega + + assumes capped_omega_unfold: "y\<^sup>\\<^sub>v = y * y\<^sup>\\<^sub>v \ v" + assumes capped_omega_induct: "x \ (y * x \ z) \ v \ x \ y\<^sup>\\<^sub>v \ y\<^sup>\ * z" + +text \AACP 
Theorem 6.1\ + +notation + top ("\") + +sublocale capped_omega_algebra < capped: bounded_left_zero_omega_algebra where omega = "(\y . y\<^sup>\\<^sub>\)" + apply unfold_locales + apply (metis capped_omega_unfold inf_top_right) + by (simp add: capped_omega_induct sup_commute) + +context capped_omega_algebra +begin + +text \AACP Theorem 6.2\ + +lemma capped_omega_below_omega: + "y\<^sup>\\<^sub>v \ y\<^sup>\\<^sub>\" + using capped.omega_induct_mult capped_omega_unfold order.eq_iff by force + +text \AACP Theorem 6.3\ + +lemma capped_omega_below: + "y\<^sup>\\<^sub>v \ v" + using capped_omega_unfold order.eq_iff by force + +text \AACP Theorem 6.4\ + +lemma capped_omega_one: + "1\<^sup>\\<^sub>v = v" +proof - + have "v \ (1 * v \ bot) \ v" + by simp + hence "v \ 1\<^sup>\\<^sub>v \ 1\<^sup>\ * bot" + by (simp add: capped_omega_induct) + also have "... = 1\<^sup>\\<^sub>v" + by (simp add: star_one) + finally show ?thesis + by (simp add: capped_omega_below order.antisym) +qed + +text \AACP Theorem 6.5\ + +lemma capped_omega_zero: + "bot\<^sup>\\<^sub>v = bot" + by (metis capped_omega_below_omega bot_unique capped.omega_bot) + +lemma star_below_cap: + "y \ u \ z \ v \ u * v \ v \ y\<^sup>\ * z \ v" + by (metis le_sup_iff order.trans mult_left_isotone star_left_induct) + +lemma capped_fix: + assumes "y \ u" + and "z \ v" + and "u * v \ v" + shows "(y * (y\<^sup>\\<^sub>v \ y\<^sup>\ * z) \ z) \ v = y\<^sup>\\<^sub>v \ y\<^sup>\ * z" +proof - + have "(y * (y\<^sup>\\<^sub>v \ y\<^sup>\ * z) \ z) \ v = (y * y\<^sup>\\<^sub>v \ y\<^sup>\ * z) \ v" + by (simp add: mult_left_dist_sup star.circ_loop_fixpoint sup_assoc) + also have "... = (y * y\<^sup>\\<^sub>v \ v) \ (y\<^sup>\ * z \ v)" + by (simp add: inf_sup_distrib2) + also have "... = y\<^sup>\\<^sub>v \ y\<^sup>\ * z" + using assms capped_omega_unfold le_iff_inf star_below_cap by auto + finally show ?thesis + . +qed + +lemma capped_fixpoint: + "y \ u \ z \ v \ u * v \ v \ is_fixpoint (\x . 
(y * x \ z) \ v) (y\<^sup>\\<^sub>v \ y\<^sup>\ * z)" + by (simp add: capped_fix is_fixpoint_def) + +lemma capped_greatest_fixpoint: + "y \ u \ z \ v \ u * v \ v \ is_greatest_fixpoint (\x . (y * x \ z) \ v) (y\<^sup>\\<^sub>v \ y\<^sup>\ * z)" + by (smt capped_fix order_refl capped_omega_induct is_greatest_fixpoint_def) + +lemma capped_postfixpoint: + "y \ u \ z \ v \ u * v \ v \ is_postfixpoint (\x . (y * x \ z) \ v) (y\<^sup>\\<^sub>v \ y\<^sup>\ * z)" + using capped_fix inf.eq_refl is_postfixpoint_def by auto + +lemma capped_greatest_postfixpoint: + "y \ u \ z \ v \ u * v \ v \ is_greatest_postfixpoint (\x . (y * x \ z) \ v) (y\<^sup>\\<^sub>v \ y\<^sup>\ * z)" + by (smt capped_fix order_refl capped_omega_induct is_greatest_postfixpoint_def) + +text \AACP Theorem 6.6\ + +lemma capped_nu: + "y \ u \ z \ v \ u * v \ v \ \(\x . (y * x \ z) \ v) = y\<^sup>\\<^sub>v \ y\<^sup>\ * z" + by (metis capped_greatest_fixpoint greatest_fixpoint_same) + +lemma capped_pnu: + "y \ u \ z \ v \ u * v \ v \ p\(\x . 
(y * x \ z) \ v) = y\<^sup>\\<^sub>v \ y\<^sup>\ * z" + by (metis capped_greatest_postfixpoint greatest_postfixpoint_same) + +text \AACP Theorem 6.7\ + +lemma unfold_capped_omega: + "y \ u \ u * v \ v \ y * y\<^sup>\\<^sub>v = y\<^sup>\\<^sub>v" + by (smt (verit, ccfv_SIG) capped_omega_below capped_omega_unfold inf.order_lesseq_imp le_iff_inf mult_isotone) + +text \AACP Theorem 6.8\ + +lemma star_mult_capped_omega: + assumes "y \ u" + and "u * v \ v" + shows "y\<^sup>\ * y\<^sup>\\<^sub>v = y\<^sup>\\<^sub>v" +proof - + have "y * y\<^sup>\\<^sub>v = y\<^sup>\\<^sub>v" + using assms unfold_capped_omega by auto + hence "y\<^sup>\ * y\<^sup>\\<^sub>v \ y\<^sup>\\<^sub>v" + by (simp add: star_left_induct_mult) + thus ?thesis + by (metis sup_ge2 order.antisym star.circ_loop_fixpoint) +qed + +text \AACP Theorem 6.9\ + +lemma star_zero_below_capped_omega_zero: + assumes "y \ u" + and "u * v \ v" + shows "y\<^sup>\ * bot \ y\<^sup>\\<^sub>v * bot" +proof - + have "y * y\<^sup>\\<^sub>v \ v" + using assms capped_omega_below unfold_capped_omega by auto + hence "y * y\<^sup>\\<^sub>v = y\<^sup>\\<^sub>v" + using assms unfold_capped_omega by auto + thus ?thesis + by (metis bot_least eq_refl mult_assoc star_below_cap) +qed + +lemma star_zero_below_capped_omega: + "y \ u \ u * v \ v \ y\<^sup>\ * bot \ y\<^sup>\\<^sub>v" + by (simp add: star_loop_least_fixpoint unfold_capped_omega) + +lemma capped_omega_induct_meet_zero: + "x \ y * x \ v \ x \ y\<^sup>\\<^sub>v \ y\<^sup>\ * bot" + by (simp add: capped_omega_induct) + +text \AACP Theorem 6.10\ + +lemma capped_omega_induct_meet: + "y \ u \ u * v \ v \ x \ y * x \ v \ x \ y\<^sup>\\<^sub>v" + by (metis capped_omega_induct_meet_zero sup_commute le_iff_sup star_zero_below_capped_omega) + +lemma capped_omega_induct_equal: + "x = (y * x \ z) \ v \ x \ y\<^sup>\\<^sub>v \ y\<^sup>\ * z" + using capped_omega_induct inf.le_iff_sup by auto + +text \AACP Theorem 6.11\ + +lemma capped_meet_nu: + assumes "y \ u" + and "u * v \ v" + shows 
"\(\x . y * x \ v) = y\<^sup>\\<^sub>v" +proof - + have "y\<^sup>\\<^sub>v \ y\<^sup>\ * bot = y\<^sup>\\<^sub>v" + by (smt assms star_zero_below_capped_omega le_iff_sup sup_commute) + hence "\(\x . (y * x \ bot) \ v) = y\<^sup>\\<^sub>v" + by (metis assms capped_nu bot_least) + thus ?thesis + by simp +qed + +lemma capped_meet_pnu: + assumes "y \ u" + and "u * v \ v" + shows "p\(\x . y * x \ v) = y\<^sup>\\<^sub>v" +proof - + have "y\<^sup>\\<^sub>v \ y\<^sup>\ * bot = y\<^sup>\\<^sub>v" + by (smt assms star_zero_below_capped_omega le_iff_sup sup_commute) + hence "p\(\x . (y * x \ bot) \ v) = y\<^sup>\\<^sub>v" + by (metis assms capped_pnu bot_least) + thus ?thesis + by simp +qed + +text \AACP Theorem 6.12\ + +lemma capped_omega_isotone: + "y \ u \ u * v \ v \ t \ y \ t\<^sup>\\<^sub>v \ y\<^sup>\\<^sub>v" + by (metis capped_omega_induct_meet capped_omega_unfold le_iff_sup inf.sup_left_isotone mult_right_sub_dist_sup_left) + +text \AACP Theorem 6.13\ + +lemma capped_omega_simulation: + assumes "y \ u" + and "s \ u" + and "u * v \ v" + and "s * t \ y * s" + shows "s * t\<^sup>\\<^sub>v \ y\<^sup>\\<^sub>v" +proof - + have "s * t\<^sup>\\<^sub>v \ s * t * t\<^sup>\\<^sub>v \ s * v" + by (metis capped_omega_below capped_omega_unfold inf.boundedI inf.cobounded1 mult_right_isotone mult_assoc) + also have "... \ s * t * t\<^sup>\\<^sub>v \ v" + using assms(2,3) inf.order_lesseq_imp inf.sup_right_isotone mult_left_isotone by blast + also have "... 
\ y * s * t\<^sup>\\<^sub>v \ v" + using assms(4) inf.sup_left_isotone mult_left_isotone by auto + finally show ?thesis + using assms(1,3) capped_omega_induct_meet mult_assoc by auto +qed + +lemma capped_omega_slide_sub: + assumes "s \ u" + and "y \ u" + and "u * u \ u" + and "u * v \ v" + shows "s * (y * s)\<^sup>\\<^sub>v \ (s * y)\<^sup>\\<^sub>v" +proof - + have "s * y \ u" + by (meson assms(1-3) mult_isotone order_trans) + thus ?thesis + using assms(1,4) capped_omega_simulation mult_assoc by auto +qed + +text \AACP Theorem 6.14\ + +lemma capped_omega_slide: + "s \ u \ y \ u \ u * u \ u \ u * v \ v \ s * (y * s)\<^sup>\\<^sub>v = (s * y)\<^sup>\\<^sub>v" + by (smt (verit) order.antisym mult_assoc mult_right_isotone capped_omega_unfold capped_omega_slide_sub inf.sup_ge1 order_trans) + +lemma capped_omega_sub_dist: + "s \ u \ y \ u \ u * v \ v \ s\<^sup>\\<^sub>v \ (s \ y)\<^sup>\\<^sub>v" + by (simp add: capped_omega_isotone) + +text \AACP Theorem 6.15\ + +lemma capped_omega_simulation_2: + assumes "s \ u" + and "y \ u" + and "u * u \ u" + and "u * v \ v" + and "y * s \ s * y" + shows "(s * y)\<^sup>\\<^sub>v \ s\<^sup>\\<^sub>v" +proof - + have 1: "s * y \ u" + using assms(1-3) inf.order_lesseq_imp mult_isotone by blast + have 2: "s * (s * y)\<^sup>\\<^sub>v \ v" + by (meson assms(1,4) capped_omega_below order.trans mult_isotone) + have "(s * y)\<^sup>\\<^sub>v = s * (y * s)\<^sup>\\<^sub>v" + using assms(1-4) capped_omega_slide by auto + also have "... \ s * (s * y)\<^sup>\\<^sub>v" + using 1 assms(4,5) capped_omega_isotone mult_right_isotone by blast + also have "... 
= s * (s * y)\<^sup>\\<^sub>v \ v" + using 2 inf.order_iff by auto + finally show ?thesis + using assms(1,4) capped_omega_induct_meet by blast +qed + +text \AACP Theorem 6.16\ + +lemma left_plus_capped_omega: + assumes "y \ u" + and "u * u \ u" + and "u * v \ v" + shows "(y * y\<^sup>\)\<^sup>\\<^sub>v = y\<^sup>\\<^sub>v" +proof - + have 1: "y * y\<^sup>\ \ u" + by (metis assms(1,2) star_plus star_below_cap) + hence "y * y\<^sup>\ * (y * y\<^sup>\)\<^sup>\\<^sub>v \ v" + using assms(3) capped_omega_below unfold_capped_omega by auto + hence "y * y\<^sup>\ * (y * y\<^sup>\)\<^sup>\\<^sub>v = (y * y\<^sup>\)\<^sup>\\<^sub>v" + using 1 assms(3) unfold_capped_omega by blast + hence "(y * y\<^sup>\)\<^sup>\\<^sub>v \ y\<^sup>\\<^sub>v" + using 1 by (smt assms(1,3) capped_omega_simulation mult_assoc mult_semi_associative star.circ_transitive_equal star_simulation_right_equal) + thus ?thesis + using 1 by (meson assms(3) capped_omega_isotone order.antisym star.circ_mult_increasing) +qed + +text \AACP Theorem 6.17\ + +lemma capped_omega_sub_vector: + assumes "z \ v" + and "y \ u" + and "u * v \ v" + shows "y\<^sup>\\<^sub>u * z \ y\<^sup>\\<^sub>v" +proof - + have "y\<^sup>\\<^sub>u * z \ y * y\<^sup>\\<^sub>u * z \ u * z" + by (metis capped_omega_below capped_omega_unfold eq_refl inf.boundedI inf.cobounded1 mult_isotone) + also have "... 
\ y * y\<^sup>\\<^sub>u * z \ v" + by (metis assms(1,3) inf.sup_left_isotone inf_commute mult_right_isotone order_trans) + finally show ?thesis + using assms(2,3) capped_omega_induct_meet mult_assoc by auto +qed + +text \AACP Theorem 6.18\ + +lemma capped_omega_omega: + "y \ u \ u * v \ v \ (y\<^sup>\\<^sub>u)\<^sup>\\<^sub>v \ y\<^sup>\\<^sub>v" + by (metis capped_omega_below capped_omega_sub_vector unfold_capped_omega) + +end + +end + diff --git a/thys/Correctness_Algebras/Complete_Domain.thy b/thys/Correctness_Algebras/Complete_Domain.thy new file mode 100644 --- /dev/null +++ b/thys/Correctness_Algebras/Complete_Domain.thy @@ -0,0 +1,38 @@ +(* Title: Complete Domain + Author: Walter Guttmann + Maintainer: Walter Guttmann +*) + +section \Complete Domain\ + +theory Complete_Domain + +imports Relative_Domain Complete_Tests + +begin + +class complete_antidomain_semiring = relative_antidomain_semiring + complete_tests + + assumes a_dist_Sum: "ascending_chain f \ -(Sum f) = Prod (\n . -f n)" + assumes a_dist_Prod: "descending_chain f \ -(Prod f) = Sum (\n . -f n)" +begin + +lemma a_ascending_chain: + "ascending_chain f \ descending_chain (\n . -f n)" + by (simp add: a_antitone ascending_chain_def descending_chain_def) + +lemma a_descending_chain: + "descending_chain f \ ascending_chain (\n . -f n)" + by (simp add: a_antitone ord.ascending_chain_def ord.descending_chain_def) + +lemma d_dist_Sum: + "ascending_chain f \ d(Sum f) = Sum (\n . d(f n))" + by (simp add: d_def a_ascending_chain a_dist_Prod a_dist_Sum) + +lemma d_dist_Prod: + "descending_chain f \ d(Prod f) = Prod (\n . 
d(f n))" + by (simp add: d_def a_dist_Sum a_dist_Prod a_descending_chain) + +end + +end + diff --git a/thys/Correctness_Algebras/Complete_Tests.thy b/thys/Correctness_Algebras/Complete_Tests.thy new file mode 100644 --- /dev/null +++ b/thys/Correctness_Algebras/Complete_Tests.thy @@ -0,0 +1,144 @@ +(* Title: Complete Tests + Author: Walter Guttmann + Maintainer: Walter Guttmann +*) + +section \Complete Tests\ + +theory Complete_Tests + +imports Tests + +begin + +class complete_tests = tests + Sup + Inf + + assumes sup_test: "test_set A \ Sup A = --Sup A" + assumes sup_upper: "test_set A \ x \ A \ x \ Sup A" + assumes sup_least: "test_set A \ (\x\A . x \ -y) \ Sup A \ -y" +begin + +lemma Sup_isotone: + "test_set B \ A \ B \ Sup A \ Sup B" + by (metis sup_least sup_test sup_upper test_set_closed subset_eq) + +lemma mult_right_dist_sup: + assumes "test_set A" + shows "Sup A * -p = Sup { x * -p | x . x \ A }" +proof - + have 1: "test_set { x * -p | x . x \ A }" + by (simp add: assms mult_right_dist_test_set) + have 2: "Sup { x * -p | x . x \ A } \ Sup A * -p" + by (smt (verit, del_insts) assms mem_Collect_eq tests_dual.sub_sup_left_isotone sub_mult_closed sup_test sup_least sup_upper test_set_def) + have "\x\A . x \ --(--Sup { x * -p | x . x \ A } \ --p)" + proof + fix x + assume 3: "x \ A" + hence "x * -p \ --p \ Sup { x * -p | x . x \ A } \ --p" + using 1 by (smt (verit, del_insts) assms mem_Collect_eq tests_dual.sub_inf_left_isotone sub_mult_closed sup_upper test_set_def sup_test) + thus "x \ --(--Sup { x * -p | x . x \ A } \ --p)" + using 1 3 by (smt (z3) assms tests_dual.inf_closed sub_comm test_set_def sup_test sub_mult_closed tests_dual.sba_dual.shunting_right tests_dual.sba_dual.sub_sup_left_isotone tests_dual.inf_absorb tests_dual.inf_less_eq_cases_3) + qed + hence "Sup A \ --(--Sup { x * -p | x . x \ A } \ --p)" + by (simp add: assms sup_least) + hence "Sup A * -p \ Sup { x * -p | x . 
x \ A }" + using 1 by (smt (z3) assms sup_test tests_dual.sba_dual.shunting tests_dual.sub_commutative tests_dual.sub_sup_closed tests_dual.sub_sup_demorgan) + thus ?thesis + using 1 2 by (smt (z3) assms sup_test tests_dual.sba_dual.sub_sup_closed tests_dual.antisymmetric tests_dual.inf_demorgan tests_dual.inf_idempotent) +qed + +lemma mult_left_dist_sup: + assumes "test_set A" + shows "-p * Sup A = Sup { -p * x | x . x \ A }" +proof - + have 1: "Sup A * -p = Sup { x * -p | x . x \ A }" + by (simp add: assms mult_right_dist_sup) + have 2: "-p * Sup A = Sup A * -p" + by (metis assms sub_comm sup_test) + have "{ -p * x | x . x \ A } = { x * -p | x . x \ A }" + by (metis assms test_set_def tests_dual.sub_commutative) + thus ?thesis + using 1 2 by simp +qed + +definition Sum :: "(nat \ 'a) \ 'a" + where "Sum f \ Sup { f n | n::nat . True }" + +lemma Sum_test: + "test_seq t \ Sum t = --Sum t" + using Sum_def sup_test test_seq_test_set by auto + +lemma Sum_upper: + "test_seq t \ t x \ Sum t" + using Sum_def sup_upper test_seq_test_set by auto + +lemma Sum_least: + "test_seq t \ (\n . t n \ -p) \ Sum t \ -p" + using Sum_def sup_least test_seq_test_set by force + +lemma mult_right_dist_Sum: + "test_seq t \ (\n . t n * -p \ -q) \ Sum t * -p \ -q" + by (smt (verit, del_insts) CollectD Sum_def sup_least sup_test test_seq_test_set test_set_def tests_dual.sba_dual.shunting_right tests_dual.sba_dual.sub_sup_closed) + +lemma mult_left_dist_Sum: + "test_seq t \ (\n . -p * t n \ -q) \ -p * Sum t \ -q" + by (smt (verit, del_insts) Sum_def mem_Collect_eq mult_left_dist_sup sub_mult_closed sup_least test_seq_test_set test_set_def) + +lemma pSum_below_Sum: + "test_seq t \ pSum t m \ Sum t" + using Sum_test Sum_upper nat_test_def pSum_below_sum test_seq_def mult_right_dist_Sum by auto + +lemma pSum_sup: + assumes "test_seq t" + shows "pSum t m = Sup { t i | i . i \ {.. {..y\{ t i | i . i \ {.. --pSum t m" + using assms pSum_test pSum_upper by force + hence 2: "Sup { t i | i . i \ {.. 
--pSum t m" + using 1 by (simp add: sup_least) + have "pSum t m \ Sup { t i | i . i \ {.. {..x\{t i |i. i \ {.. --Sup {t i |i. i < Suc n}" + using 5 less_Suc_eq sup_upper by fastforce + hence 7: "Sup {t i |i. i \ {.. --Sup {t i |i. i < Suc n}" + using 4 by (simp add: sup_least) + have "t n \ {t i |i. i < Suc n}" + by auto + hence "t n \ Sup {t i |i. i < Suc n}" + using 5 by (simp add: sup_upper) + hence "pSum t n \ t n \ Sup {t i |i. i 'a) \ 'a" + where "Prod f \ Inf { f n | n::nat . True }" + +lemma Sum_range: + "Sum f = Sup (range f)" + by (simp add: Sum_def image_def) + +lemma Prod_range: + "Prod f = Inf (range f)" + by (simp add: Prod_def image_def) + +end + +end + diff --git a/thys/Correctness_Algebras/Domain.thy b/thys/Correctness_Algebras/Domain.thy new file mode 100644 --- /dev/null +++ b/thys/Correctness_Algebras/Domain.thy @@ -0,0 +1,364 @@ +(* Title: Domain + Author: Walter Guttmann + Maintainer: Walter Guttmann +*) + +section \Domain\ + +theory Domain + +imports Stone_Relation_Algebras.Semirings Tests + +begin + +context idempotent_left_semiring +begin + +sublocale ils: il_semiring where inf = times and sup = sup and bot = bot and less_eq = less_eq and less = less and top = 1 + apply unfold_locales + apply (simp add: sup_assoc) + apply (simp add: sup_commute) + apply simp + apply simp + apply (simp add: mult_assoc) + apply (simp add: mult_right_dist_sup) + apply simp + apply simp + apply simp + apply (simp add: mult_right_isotone) + apply (simp add: le_iff_sup) + by (simp add: less_le_not_le) + +end + +class left_zero_domain_semiring = idempotent_left_zero_semiring + dom + + assumes d_restrict: "x \ d(x) * x = d(x) * x" + assumes d_mult_d : "d(x * y) = d(x * d(y))" + assumes d_plus_one: "d(x) \ 1 = 1" + assumes d_zero : "d(bot) = bot" + assumes d_dist_sup: "d(x \ y) = d(x) \ d(y)" +begin + +text \Many lemmas in this class are taken from Georg Struth's theories.\ + +lemma d_restrict_equals: + "x = d(x) * x" + by (metis sup_commute d_plus_one d_restrict 
mult_left_one mult_right_dist_sup) + +lemma d_involutive: + "d(d(x)) = d(x)" + by (metis d_mult_d mult_left_one) + +lemma d_fixpoint: + "(\y . x = d(y)) \ x = d(x)" + using d_involutive by auto + +lemma d_type: + "\P . (\x . x = d(x) \ P(x)) \ (\x . P(d(x)))" + by (metis d_involutive) + +lemma d_mult_sub: + "d(x * y) \ d(x)" + by (metis d_dist_sup d_mult_d d_plus_one le_iff_sup mult_left_sub_dist_sup_left mult_1_right) + +lemma d_sub_one: + "x \ 1 \ x \ d(x)" + by (metis d_restrict_equals mult_right_isotone mult_1_right) + +lemma d_strict: + "d(x) = bot \ x = bot" + by (metis d_restrict_equals d_zero mult_left_zero) + +lemma d_one: + "d(1) = 1" + by (metis d_restrict_equals mult_1_right) + +lemma d_below_one: + "d(x) \ 1" + by (simp add: d_plus_one le_iff_sup) + +lemma d_isotone: + "x \ y \ d(x) \ d(y)" + by (metis d_dist_sup le_iff_sup) + +lemma d_plus_left_upper_bound: + "d(x) \ d(x \ y)" + by (simp add: d_isotone) + +lemma d_export: + "d(d(x) * y) = d(x) * d(y)" + apply (rule order.antisym) + apply (metis d_below_one d_involutive d_mult_sub d_restrict_equals d_isotone d_mult_d mult_isotone mult_left_one) + by (metis d_below_one d_sub_one coreflexive_mult_closed d_mult_d) + +lemma d_idempotent: + "d(x) * d(x) = d(x)" + by (metis d_export d_restrict_equals) + +lemma d_commutative: + "d(x) * d(y) = d(y) * d(x)" + by (metis ils.il_inf_associative order.antisym d_export d_mult_d d_mult_sub d_one d_restrict_equals mult_isotone mult_left_one) + +lemma d_least_left_preserver: + "x \ d(y) * x \ d(x) \ d(y)" + by (metis d_below_one d_involutive d_mult_sub d_restrict_equals order.eq_iff mult_left_isotone mult_left_one) + +lemma d_weak_locality: + "x * y = bot \ x * d(y) = bot" + by (metis d_mult_d d_strict) + +lemma d_sup_closed: + "d(d(x) \ d(y)) = d(x) \ d(y)" + by (simp add: d_involutive d_dist_sup) + +lemma d_mult_closed: + "d(d(x) * d(y)) = d(x) * d(y)" + using d_export d_mult_d by auto + +lemma d_mult_left_lower_bound: + "d(x) * d(y) \ d(x)" + by (metis d_export 
d_involutive d_mult_sub) + +lemma d_mult_greatest_lower_bound: + "d(x) \ d(y) * d(z) \ d(x) \ d(y) \ d(x) \ d(z)" + by (metis d_commutative d_idempotent d_mult_left_lower_bound mult_isotone order_trans) + +lemma d_mult_left_absorb_sup: + "d(x) * (d(x) \ d(y)) = d(x)" + by (metis sup_commute d_idempotent d_plus_one mult_left_dist_sup mult_1_right) + +lemma d_sup_left_absorb_mult: + "d(x) \ d(x) * d(y) = d(x)" + using d_mult_left_lower_bound sup.absorb_iff1 by auto + +lemma d_sup_left_dist_mult: + "d(x) \ d(y) * d(z) = (d(x) \ d(y)) * (d(x) \ d(z))" + by (smt sup_assoc d_commutative d_idempotent d_mult_left_absorb_sup mult_left_dist_sup mult_right_dist_sup) + +lemma d_order: + "d(x) \ d(y) \ d(x) = d(x) * d(y)" + by (metis d_mult_greatest_lower_bound d_mult_left_absorb_sup le_iff_sup order_refl) + +lemma d_mult_below: + "d(x) * y \ y" + by (metis sup_left_divisibility d_plus_one mult_left_one mult_right_dist_sup) + +lemma d_preserves_equation: + "d(y) * x \ x * d(y) \ d(y) * x = d(y) * x * d(y)" + by (simp add: d_below_one d_idempotent test_preserves_equation) + +end + +class left_zero_antidomain_semiring = idempotent_left_zero_semiring + dom + uminus + + assumes a_restrict : "-x * x = bot" + assumes a_plus_mult_d: "-(x * y) \ -(x * --y) = -(x * --y)" + assumes a_complement : "--x \ -x = 1" + assumes d_def : "d(x) = --x" +begin + +sublocale aa: a_algebra where minus = "\x y . 
-(-x \ y)" and uminus = uminus and inf = times and sup = sup and bot = bot and less_eq = less_eq and less = less and top = 1 + apply unfold_locales + apply (simp add: a_restrict) + using a_complement sup_commute apply fastforce + apply (simp add: a_plus_mult_d le_iff_sup) + by simp + +subclass left_zero_domain_semiring + apply unfold_locales + apply (simp add: d_def aa.double_complement_above) + apply (simp add: aa.a_d.d3_eq d_def) + apply (simp add: d_def) + apply (simp add: d_def) + by (simp add: d_def aa.l15) + +subclass tests + apply unfold_locales + apply (simp add: mult_assoc) + apply (simp add: aa.sba_dual.sub_commutative) + apply (simp add: aa.sba_dual.sub_complement) + using aa.sba_dual.sub_sup_closed apply simp + apply simp + apply simp + apply (simp add: aa.sba_dual.sub_inf_def) + apply (simp add: aa.less_eq_inf) + by (simp add: less_le_not_le) + +text \Many lemmas in this class are taken from Georg Struth's theories.\ + +notation + uminus ("a") + +lemma a_greatest_left_absorber: + "a(x) * y = bot \ a(x) \ a(y)" + by (simp add: aa.l10_iff) + +lemma a_mult_d: + "a(x * y) = a(x * d(y))" + by (simp add: d_def aa.sba3_complement_inf_double_complement) + +lemma a_d_closed: + "d(a(x)) = a(x)" + by (simp add: d_def) + +lemma a_plus_left_lower_bound: + "a(x \ y) \ a(x)" + by (simp add: aa.l9) + +lemma a_mult_sup: + "a(x) * (y \ x) = a(x) * y" + by (simp add: aa.sba3_inf_complement_bot semiring.distrib_left) + +lemma a_3: + "a(x) * a(y) * d(x \ y) = bot" + using d_weak_locality aa.l12 aa.sba3_inf_complement_bot by force + +lemma a_export: + "a(a(x) * y) = d(x) \ a(y)" + using a_mult_d d_def aa.sba_dual.sub_inf_def by auto + +lemma a_fixpoint: + "\x . (a(x) = x \ (\y . 
y = bot))" + by (metis aa.a_d.d_fully_strict aa.sba2_bot_unit aa.sup_idempotent aa.sup_right_zero_var) + +lemma a_strict: + "a(x) = 1 \ x = bot" + using aa.a_d.d_fully_strict one_def by fastforce + +lemma d_complement_zero: + "d(x) * a(x) = bot" + by (simp add: aa.sba3_inf_complement_bot d_def) + +lemma a_complement_zero: + "a(x) * d(x) = bot" + by (simp add: d_def) + +lemma a_shunting_zero: + "a(x) * d(y) = bot \ a(x) \ a(y)" + by (simp add: aa.less_eq_inf_bot d_def) + +lemma a_antitone: + "x \ y \ a(y) \ a(x)" + by (simp add: aa.l9) + +lemma a_mult_deMorgan: + "a(a(x) * a(y)) = d(x \ y)" + by (simp add: aa.sup_demorgan d_def) + +lemma a_mult_deMorgan_1: + "a(a(x) * a(y)) = d(x) \ d(y)" + by (simp add: a_export d_def) + +lemma a_mult_deMorgan_2: + "a(d(x) * d(y)) = a(x) \ a(y)" + by (simp add: d_def sup_def) + +lemma a_plus_deMorgan: + "a(a(x) \ a(y)) = d(x) * d(y)" + by (simp add: aa.sub_sup_demorgan d_def) + +lemma a_plus_deMorgan_1: + "a(d(x) \ d(y)) = a(x) * a(y)" + by (simp add: aa.sup_demorgan d_def) + +lemma a_mult_left_upper_bound: + "a(x) \ a(x * y)" + using aa.l5 d_def d_mult_sub by auto + +lemma d_a_closed: + "a(d(x)) = a(x)" + by (simp add: d_def) + +lemma a_export_d: + "a(d(x) * y) = a(x) \ a(y)" + using a_mult_d a_mult_deMorgan_2 by auto + +lemma a_7: + "d(x) * a(d(y) \ d(z)) = d(x) * a(y) * a(z)" + by (simp add: a_plus_deMorgan_1 mult_assoc) + +lemma d_a_shunting: + "d(x) * a(y) \ d(z) \ d(x) \ d(z) \ d(y)" + using aa.sba_dual.shunting_right d_def by auto + +lemma d_d_shunting: + "d(x) * d(y) \ d(z) \ d(x) \ d(z) \ a(y)" + using d_a_shunting d_def by auto + +lemma d_cancellation_1: + "d(x) \ d(y) \ (d(x) * a(y))" + by (metis a_d_closed aa.sba2_export aa.sup_demorgan d_def eq_refl le_supE sup_commute) + +lemma d_cancellation_2: + "(d(z) \ d(y)) * a(y) \ d(z)" + by (metis d_a_shunting d_dist_sup eq_refl) + +lemma a_sup_closed: + "d(a(x) \ a(y)) = a(x) \ a(y)" + using aa.sub_sup_closed d_def by auto + +lemma a_mult_closed: + "d(a(x) * a(y)) = a(x) * 
a(y)" + using a_d_closed aa.l12 by auto + +lemma d_a_shunting_zero: + "d(x) * a(y) = bot \ d(x) \ d(y)" + by (simp add: aa.l10_iff d_def) + +lemma d_d_shunting_zero: + "d(x) * d(y) = bot \ d(x) \ a(y)" + by (simp add: aa.l10_iff d_def) + +lemma d_compl_intro: + "d(x) \ d(y) = d(x) \ a(x) * d(y)" + by (simp add: aa.sup_complement_intro d_def) + +lemma a_compl_intro: + "a(x) \ a(y) = a(x) \ d(x) * a(y)" + by (simp add: aa.sup_complement_intro d_def) + +lemma kat_2: + "y * a(z) \ a(x) * y \ d(x) * y * a(z) = bot" + by (smt a_export a_plus_left_lower_bound le_sup_iff d_d_shunting_zero d_export d_strict le_iff_sup mult_assoc) + +lemma kat_3: + "d(x) * y * a(z) = bot \ d(x) * y = d(x) * y * d(z)" + by (metis a_export_d aa.complement_bot d_complement_zero d_def mult_1_right mult_left_dist_sup sup_bot_left) + +lemma kat_4: + "d(x) * y = d(x) * y * d(z) \ d(x) * y \ y * d(z)" + using d_mult_below mult_assoc by auto + +lemma kat_2_equiv: + "y * a(z) \ a(x) * y \ d(x) * y * a(z) = bot" + apply (rule iffI) + apply (simp add: kat_2) + by (metis aa.top_greatest a_complement sup_bot_left d_def mult_left_one mult_right_dist_sup mult_right_isotone mult_1_right) + +lemma kat_4_equiv: + "d(x) * y = d(x) * y * d(z) \ d(x) * y \ y * d(z)" + apply (rule iffI) + apply (simp add: kat_4) + apply (rule order.antisym) + apply (metis d_idempotent le_iff_sup mult_assoc mult_left_dist_sup) + by (metis d_plus_one le_iff_sup mult_left_dist_sup mult_1_right) + +lemma kat_3_equiv_opp: + "a(z) * y * d(x) = bot \ y * d(x) = d(z) * y * d(x)" + by (metis a_complement a_restrict sup_bot_left d_a_closed d_def mult_assoc mult_left_one mult_left_zero mult_right_dist_sup) + +lemma kat_4_equiv_opp: + "y * d(x) = d(z) * y * d(x) \ y * d(x) \ d(z) * y" + using kat_2_equiv kat_3_equiv_opp d_def by auto + +lemma d_restrict_iff: + "(x \ y) \ (x \ d(x) * y)" + by (metis d_mult_below d_restrict_equals mult_isotone order_lesseq_imp) + +lemma d_restrict_iff_1: + "(d(x) * y \ z) \ (d(x) * y \ d(x) * z)" + by (metis 
sup_commute d_export d_mult_left_lower_bound d_plus_one d_restrict_iff mult_left_isotone mult_left_one mult_right_sub_dist_sup_right order_trans) + +end + +end + diff --git a/thys/Correctness_Algebras/Domain_Iterings.thy b/thys/Correctness_Algebras/Domain_Iterings.thy new file mode 100644 --- /dev/null +++ b/thys/Correctness_Algebras/Domain_Iterings.thy @@ -0,0 +1,356 @@ +(* Title: Domain Iterings + Author: Walter Guttmann + Maintainer: Walter Guttmann +*) + +section \Domain Iterings\ + +theory Domain_Iterings + +imports Domain Lattice_Ordered_Semirings Omega_Algebras + +begin + +class domain_semiring_lattice = left_zero_domain_semiring + lattice_ordered_pre_left_semiring +begin + +subclass bounded_idempotent_left_zero_semiring .. + +lemma d_top: + "d(top) = 1" + by (metis sup_left_top d_dist_sup d_one d_plus_one) + +lemma mult_domain_top: + "x * d(y) * top \ d(x * y) * top" + by (smt d_mult_d d_restrict_equals mult_assoc mult_right_isotone top_greatest) + +lemma domain_meet_domain: + "d(x \ d(y) * z) \ d(y)" + by (metis d_export d_isotone d_mult_greatest_lower_bound inf.cobounded2) + +lemma meet_domain: + "x \ d(y) * z = d(y) * (x \ z)" + apply (rule order.antisym) + apply (metis domain_meet_domain d_mult_below d_restrict_equals inf_mono mult_isotone) + by (meson d_mult_below le_inf_iff mult_left_sub_dist_inf_right) + +lemma meet_intro_domain: + "x \ y = d(y) * x \ y" + by (metis d_restrict_equals inf_commute meet_domain) + +lemma meet_domain_top: + "x \ d(y) * top = d(y) * x" + by (simp add: meet_domain) + +(* +lemma "d(x) = x * top \ 1" nitpick [expect=genuine,card=3] oops +*) + +lemma d_galois: + "d(x) \ d(y) \ x \ d(y) * top" + by (metis d_export d_isotone d_mult_left_absorb_sup d_plus_one d_restrict_equals d_top mult_isotone top.extremum) + +lemma vector_meet: + "x * top \ y \ d(x) * y" + by (metis d_galois d_mult_sub inf.sup_monoid.add_commute inf.sup_right_isotone meet_domain_top) + +end + +class domain_semiring_lattice_L = domain_semiring_lattice + L + + 
assumes l1: "x * L = x * bot \ d(x) * L" + assumes l2: "d(L) * x \ x * d(L)" + assumes l3: "d(L) * top \ L \ d(L * bot) * top" + assumes l4: "L * top \ L" + assumes l5: "x * bot \ L \ (x \ L) * bot" +begin + +lemma l8: + "(x \ L) * bot \ x * bot \ L" + by (meson inf.boundedE inf.boundedI mult_right_sub_dist_inf_left zero_right_mult_decreasing) + +lemma l9: + "x * bot \ L \ d(x * bot) * L" + by (metis vector_meet vector_mult_closed zero_vector) + +lemma l10: + "L * L = L" + by (metis d_restrict_equals l1 le_iff_sup zero_right_mult_decreasing) + +lemma l11: + "d(x) * L \ x * L" + by (metis l1 sup.cobounded2) + +lemma l12: + "d(x * bot) * L \ x * bot" + by (metis sup_right_divisibility l1 mult_assoc mult_left_zero) + +lemma l13: + "d(x * bot) * L \ x" + using l12 order_trans zero_right_mult_decreasing by blast + +lemma l14: + "x * L \ x * bot \ L" + by (metis d_mult_below l1 sup_right_isotone) + +lemma l15: + "x * d(y) * L = x * bot \ d(x * y) * L" + by (metis d_commutative d_mult_d d_zero l1 mult_assoc mult_left_zero) + +lemma l16: + "x * top \ L \ x * L" + using inf.order_lesseq_imp l11 vector_meet by blast + +lemma l17: + "d(x) * L \ d(x * L) * L" + by (metis d_mult_below l11 le_infE le_infI meet_intro_domain) + +lemma l18: + "d(x) * L = d(x * L) * L" + by (simp add: order.antisym d_mult_sub l17 mult_left_isotone) + +lemma l19: + "d(x * top * bot) * L \ d(x * L) * L" + by (metis d_mult_sub l18 mult_assoc mult_left_isotone) + +lemma l20: + "x \ y \ x \ y \ L \ x \ y \ d(y * bot) * top" + apply (rule iffI) + apply (simp add: le_supI1) + by (smt sup_commute sup_inf_distrib1 l13 le_iff_sup meet_domain_top) + +lemma l21: + "d(x * bot) * L \ x * bot \ L" + by (simp add: d_mult_below l12) + +lemma l22: + "x * bot \ L = d(x * bot) * L" + using l21 order.antisym l9 by auto + +lemma l23: + "x * top \ L = d(x) * L" + apply (rule order.antisym) + apply (simp add: vector_meet) + by (metis d_mult_below inf.le_sup_iff inf_top.left_neutral l1 le_supE mult_left_sub_dist_inf_left) + 
+lemma l29: + "L * d(L) = L" + by (metis d_preserves_equation d_restrict_equals l2) + +lemma l30: + "d(L) * x \ (x \ L) \ d(L * bot) * x" + by (metis inf.sup_right_divisibility inf_left_commute inf_sup_distrib1 l3 meet_domain_top) + +lemma l31: + "d(L) * x = (x \ L) \ d(L * bot) * x" + by (smt (z3) l30 d_dist_sup le_iff_sup meet_intro_domain semiring.combine_common_factor sup_commute sup_inf_absorb zero_right_mult_decreasing) + +lemma l40: + "L * x \ L" + by (meson bot_least inf.order_trans l4 semiring.mult_left_mono top.extremum) + +lemma l41: + "L * top = L" + by (simp add: l40 order.antisym top_right_mult_increasing) + +lemma l50: + "x * bot \ L = (x \ L) * bot" + using order.antisym l5 l8 by force + +lemma l51: + "d(x * bot) * L = (x \ L) * bot" + using l22 l50 by auto + +lemma l90: + "L * top * L = L" + by (simp add: l41 l10) + +lemma l91: + assumes "x = x * top" + shows "d(L * bot) * x \ d(x * bot) * top" +proof - + have "d(L * bot) * x \ d(d(L * bot) * x) * top" + using d_galois by blast + also have "... = d(d(L * bot) * d(x)) * top" + using d_mult_d by auto + also have "... = d(d(x) * L * bot) * top" + using d_commutative d_mult_d ils.il_inf_associative by auto + also have "... \ d(x * L * bot) * top" + by (metis d_isotone l11 mult_left_isotone) + also have "... \ d(x * top * bot) * top" + by (simp add: d_isotone mult_left_isotone mult_right_isotone) + finally show ?thesis + using assms by auto +qed + +lemma l92: + assumes "x = x * top" + shows "d(L * bot) * x \ d((x \ L) * bot) * top" +proof - + have "d(L * bot) * x = d(L) * d(L * bot) * x" + using d_commutative d_mult_sub d_order by auto + also have "... \ d(L) * d(x * bot) * top" + by (metis assms order.eq_iff l91 mult_assoc mult_isotone) + also have "... = d(d(x * bot) * L) * top" + by (simp add: d_commutative d_export) + also have "... \ d((x \ L) * bot) * top" + by (simp add: l51) + finally show ?thesis + . 
+qed + +end + +class domain_itering_lattice_L = bounded_itering + domain_semiring_lattice_L +begin + +lemma mult_L_circ: + "(x * L)\<^sup>\ = 1 \ x * L" + by (metis circ_back_loop_fixpoint circ_mult l40 le_iff_sup mult_assoc) + +lemma mult_L_circ_mult_below: + "(x * L)\<^sup>\ * y \ y \ x * L" + by (smt sup_right_isotone l40 mult_L_circ mult_assoc mult_left_one mult_right_dist_sup mult_right_isotone) + +lemma circ_L: + "L\<^sup>\ = L \ 1" + by (metis sup_commute l10 mult_L_circ) + +lemma circ_d0_L: + "x\<^sup>\ * d(x * bot) * L = x\<^sup>\ * bot" + by (metis sup_bot_right circ_loop_fixpoint circ_plus_same d_zero l15 mult_assoc mult_left_zero) + +lemma d0_circ_left_unfold: + "d(x\<^sup>\ * bot) = d(x * x\<^sup>\ * bot)" + by (metis sup_commute sup_bot_left circ_loop_fixpoint mult_assoc) + +lemma d_circ_import: + "d(y) * x \ x * d(y) \ d(y) * x\<^sup>\ = d(y) * (d(y) * x)\<^sup>\" + apply (rule order.antisym) + apply (simp add: circ_import d_idempotent d_plus_one le_iff_sup) + using circ_isotone d_mult_below mult_right_isotone by auto + +end + +class domain_omega_algebra_lattice_L = bounded_left_zero_omega_algebra + domain_semiring_lattice_L +begin + +lemma mult_L_star: + "(x * L)\<^sup>\ = 1 \ x * L" + by (metis l40 le_iff_sup mult_assoc star.circ_back_loop_fixpoint star.circ_mult) + +lemma mult_L_omega: + "(x * L)\<^sup>\ \ x * L" + by (metis l40 mult_right_isotone omega_slide) + +lemma mult_L_sup_star: + "(x * L \ y)\<^sup>\ = y\<^sup>\ \ y\<^sup>\ * x * L" +proof (rule order.antisym) + have "(x * L \ y) * (y\<^sup>\ \ y\<^sup>\ * x * L) = x * L * (y\<^sup>\ \ y\<^sup>\ * x * L) \ y * (y\<^sup>\ \ y\<^sup>\ * x * L)" + by (simp add: mult_right_dist_sup) + also have "... \ x * L \ y * (y\<^sup>\ \ y\<^sup>\ * x * L)" + by (metis sup_left_isotone l40 mult_assoc mult_right_isotone) + also have "... \ x * L \ y * y\<^sup>\ \ y\<^sup>\ * x * L" + by (smt sup_assoc sup_commute sup_ge2 mult_assoc mult_left_dist_sup star.circ_loop_fixpoint) + also have "... 
\ x * L \ y\<^sup>\ \ y\<^sup>\ * x * L" + by (meson order_refl star.left_plus_below_circ sup_mono) + also have "... = y\<^sup>\ \ y\<^sup>\ * x * L" + by (metis sup_assoc sup_commute mult_assoc star.circ_loop_fixpoint star.circ_reflexive star.circ_sup_one_right_unfold star_involutive) + finally have "1 \ (x * L \ y) * (y\<^sup>\ \ y\<^sup>\ * x * L) \ y\<^sup>\ \ y\<^sup>\ * x * L" + by (meson le_supI le_supI1 star.circ_reflexive) + thus "(x * L \ y)\<^sup>\ \ y\<^sup>\ \ y\<^sup>\ * x * L" + using star_left_induct by fastforce +next + show "y\<^sup>\ \ y\<^sup>\ * x * L \ (x * L \ y)\<^sup>\" + by (metis sup_commute le_sup_iff mult_assoc star.circ_increasing star.circ_mult_upper_bound star.circ_sub_dist) +qed + +lemma mult_L_sup_omega: + "(x * L \ y)\<^sup>\ \ y\<^sup>\ \ y\<^sup>\ * x * L" +proof - + have 1: "(y\<^sup>\ * x * L)\<^sup>\ \ y\<^sup>\ \ y\<^sup>\ * x * L" + by (simp add: le_supI2 mult_L_omega) + have "(y\<^sup>\ * x * L)\<^sup>\ * y\<^sup>\ \ y\<^sup>\ \ y\<^sup>\ * x * L" + by (metis sup_right_isotone l40 mult_assoc mult_right_isotone star_left_induct) + thus ?thesis + using 1 by (simp add: ils.il_inf_associative omega_decompose sup_monoid.add_commute) +qed + +end + +sublocale domain_omega_algebra_lattice_L < dL_star: itering where circ = star .. + +sublocale domain_omega_algebra_lattice_L < dL_star: domain_itering_lattice_L where circ = star .. 
+ +context domain_omega_algebra_lattice_L +begin + +lemma d0_star_below_d0_omega: + "d(x\<^sup>\ * bot) \ d(x\<^sup>\ * bot)" + by (simp add: d_isotone star_bot_below_omega_bot) + +lemma d0_below_d0_omega: + "d(x * bot) \ d(x\<^sup>\ * bot)" + by (metis d0_star_below_d0_omega d_isotone mult_left_isotone order_trans star.circ_increasing) + +lemma star_L_split: + assumes "y \ z" + and "x * z * L \ x * bot \ z * L" + shows "x\<^sup>\ * y * L \ x\<^sup>\ * bot \ z * L" +proof - + have "x * (x\<^sup>\ * bot \ z * L) \ x\<^sup>\ * bot \ x * z * L" + by (metis sup_bot_right order.eq_iff mult_assoc mult_left_dist_sup star.circ_loop_fixpoint) + also have "... \ x\<^sup>\ * bot \ x * bot \ z * L" + using assms(2) semiring.add_left_mono sup_monoid.add_assoc by auto + also have "... = x\<^sup>\ * bot \ z * L" + using mult_isotone star.circ_increasing sup.absorb_iff1 by force + finally have "y * L \ x * (x\<^sup>\ * bot \ z * L) \ x\<^sup>\ * bot \ z * L" + by (simp add: assms(1) le_supI1 mult_left_isotone sup_monoid.add_commute) + thus ?thesis + by (simp add: star_left_induct mult.assoc) +qed + +lemma star_L_split_same: + "x * y * L \ x * bot \ y * L \ x\<^sup>\ * y * L = x\<^sup>\ * bot \ y * L" + apply (rule order.antisym) + using star_L_split apply blast + by (metis bot_least ils.il_inf_associative le_supI mult_isotone mult_left_one order_refl star.circ_reflexive) + +lemma star_d_L_split_equal: + "d(x * y) \ d(y) \ x\<^sup>\ * d(y) * L = x\<^sup>\ * bot \ d(y) * L" + by (metis sup_right_isotone l15 le_iff_sup mult_right_sub_dist_sup_left star_L_split_same) + +lemma d0_omega_mult: + "d(x\<^sup>\ * y * bot) = d(x\<^sup>\ * bot)" + apply (rule order.antisym) + apply (simp add: d_isotone mult_isotone omega_sub_vector) + by (metis d_isotone mult_assoc mult_right_isotone bot_least) + +lemma d_omega_export: + "d(y) * x \ x * d(y) \ d(y) * x\<^sup>\ = (d(y) * x)\<^sup>\" + apply (rule order.antisym) + apply (simp add: d_preserves_equation omega_simulation) + by (smt le_iff_sup 
mult_left_dist_sup omega_simulation_2 omega_slide) + +lemma d_omega_import: + "d(y) * x \ x * d(y) \ d(y) * x\<^sup>\ = d(y) * (d(y) * x)\<^sup>\" + using d_idempotent omega_import order.refl by auto + +lemma star_d_omega_top: + "x\<^sup>\ * d(x\<^sup>\) * top = x\<^sup>\ * bot \ d(x\<^sup>\) * top" + apply (rule order.antisym) + apply (metis le_supI2 mult_domain_top star_mult_omega) + by (metis ils.il_inf_associative le_supI mult_left_one mult_left_sub_dist_sup_right mult_right_sub_dist_sup_left star.circ_right_unfold_1 sup_monoid.add_0_right) + +lemma omega_meet_L: + "x\<^sup>\ \ L = d(x\<^sup>\) * L" + by (metis l23 omega_vector) + +(* +lemma d_star_mult: "d(x * y) \ d(y) \ d(x\<^sup>\ * y) = d(x\<^sup>\ * bot) \ d(y)" oops +lemma d0_split_omega_omega: "x\<^sup>\ \ x\<^sup>\ * bot \ d(x\<^sup>\ \ L) * top" nitpick [expect=genuine,card=2] oops +*) + +end + +end + diff --git a/thys/Correctness_Algebras/Domain_Recursion.thy b/thys/Correctness_Algebras/Domain_Recursion.thy new file mode 100644 --- /dev/null +++ b/thys/Correctness_Algebras/Domain_Recursion.thy @@ -0,0 +1,583 @@ +(* Title: Domain Recursion + Author: Walter Guttmann + Maintainer: Walter Guttmann +*) + +section \Domain Recursion\ + +theory Domain_Recursion + +imports Domain_Iterings Approximation + +begin + +class domain_semiring_lattice_apx = domain_semiring_lattice_L + apx + + assumes apx_def: "x \ y \ x \ y \ L \ d(L) * y \ x \ d(x * bot) * top" +begin + +lemma apx_transitive: + assumes "x \ y" + and "y \ z" + shows "x \ z" +proof - + have 1: "x \ z \ L" + by (smt assms sup_assoc sup_commute apx_def le_iff_sup) + have "d(d(L) * y * bot) * top \ d((x \ d(x * bot) * top) * bot) * top" + by (metis assms(1) apx_def d_isotone mult_left_isotone) + also have "... \ d(x * bot) * top" + by (metis le_sup_iff d_galois mult_left_isotone mult_right_dist_sup order_refl zero_right_mult_decreasing) + finally have 2: "d(d(L) * y * bot) * top \ d(x * bot) * top" + . 
+ have "d(L) * z = d(L) * (d(L) * z)" + by (simp add: d_idempotent ils.il_inf_associative) + also have "... \ d(L) * y \ d(d(L) * y * bot) * top" + by (metis assms(2) apx_def d_export mult_assoc mult_left_dist_sup mult_right_isotone) + also have "... \ x \ d(x * bot) * top" + using 2 by (meson assms(1) apx_def le_supI2 sup_least) + finally show ?thesis + using 1 by (simp add: apx_def) +qed + +lemma apx_meet_L: + assumes "y \ x" + shows "x \ L \ y \ L" +proof - + have "x \ L = d(L) * x \ L" + using meet_intro_domain by auto + also have "... \ (y \ d(y * bot) * top) \ L" + using assms apx_def inf.sup_left_isotone by blast + also have "... \ y" + by (simp add: inf.sup_monoid.add_commute inf_sup_distrib1 l13 meet_domain_top) + finally show ?thesis + by simp +qed + +lemma sup_apx_left_isotone: + assumes "x \ y" + shows "x \ z \ y \ z" +proof - + have 1: "x \ z \ y \ z \ L" + by (smt assms sup_assoc sup_commute sup_left_isotone apx_def) + have "d(L) * (y \ z) = d(L) * y \ d(L) * z" + by (simp add: mult_left_dist_sup) + also have "... \ d(L) * y \ z" + by (simp add: d_mult_below le_supI1 sup_commute) + also have "... \ x \ d(x * bot) * top \ z" + using assms apx_def sup_left_isotone by blast + also have "... 
\ x \ z \ d((x \ z) * bot) * top" + by (simp add: d_dist_sup le_iff_sup semiring.distrib_right sup.left_commute sup_monoid.add_assoc) + finally show ?thesis + using 1 by (simp add: apx_def) +qed + +subclass apx_biorder + apply unfold_locales + apply (metis le_sup_iff sup_ge1 apx_def d_plus_one mult_left_one mult_right_dist_sup) + apply (meson apx_meet_L order.antisym apx_def relative_equality sup_same_context) + using apx_transitive by blast + +lemma mult_apx_left_isotone: + assumes "x \ y" + shows "x * z \ y * z" +proof - + have "x * z \ y * z \ L * z" + by (metis assms apx_def mult_left_isotone mult_right_dist_sup) + hence 1: "x * z \ y * z \ L" + using l40 order_lesseq_imp semiring.add_left_mono by blast + have "d(L) * y * z \ x * z \ d(x * bot) * top * z" + by (metis assms apx_def mult_left_isotone mult_right_dist_sup) + also have "... \ x * z \ d(x * z * bot) * top" + by (metis sup_right_isotone d_isotone mult_assoc mult_isotone mult_right_isotone top_greatest bot_least) + finally show ?thesis + using 1 by (simp add: apx_def mult_assoc) +qed + +lemma mult_apx_right_isotone: + assumes "x \ y" + shows "z * x \ z * y" +proof - + have "z * x \ z * y \ z * L" + by (metis assms apx_def mult_left_dist_sup mult_right_isotone) + also have "... \ z * y \ z * bot \ L" + using l14 semiring.add_left_mono sup_monoid.add_assoc by auto + finally have 1: "z * x \ z * y \ L" + using mult_right_isotone sup.order_iff by auto + have "d(L) * z * y \ z * d(L) * y" + by (simp add: l2 mult_left_isotone) + also have "... \ z * (x \ d(x * bot) * top)" + by (metis assms apx_def mult_assoc mult_right_isotone) + also have "... = z * x \ z * d(x * bot) * top" + by (simp add: mult_left_dist_sup mult_assoc) + also have "... 
\ z * x \ d(z * x * bot) * top" + by (metis sup_right_isotone mult_assoc mult_domain_top) + finally show ?thesis + using 1 by (simp add: apx_def mult_assoc) +qed + +subclass apx_semiring + apply unfold_locales + apply (metis sup_ge2 apx_def l3 mult_right_isotone order_trans top_greatest) + apply (simp add: sup_apx_left_isotone) + apply (simp add: mult_apx_left_isotone) + by (simp add: mult_apx_right_isotone) + +lemma meet_L_apx_isotone: + "x \ y \ x \ L \ y \ L" + by (smt (z3) inf.cobounded2 sup.coboundedI1 sup_absorb sup_commute apx_def apx_meet_L d_restrict_equals l20 inf_commute meet_domain) + +definition kappa_apx_meet :: "('a \ 'a) \ bool" + where "kappa_apx_meet f \ apx.has_least_fixpoint f \ has_apx_meet (\ f) (\ f) \ \ f = \ f \ \ f" + +definition kappa_mu_nu :: "('a \ 'a) \ bool" + where "kappa_mu_nu f \ apx.has_least_fixpoint f \ \ f = \ f \ (\ f \ L)" + +definition nu_below_mu_nu :: "('a \ 'a) \ bool" + where "nu_below_mu_nu f \ d(L) * \ f \ \ f \ (\ f \ L) \ d(\ f * bot) * top" + +definition nu_below_mu_nu_2 :: "('a \ 'a) \ bool" + where "nu_below_mu_nu_2 f \ d(L) * \ f \ \ f \ (\ f \ L) \ d((\ f \ (\ f \ L)) * bot) * top" + +definition mu_nu_apx_nu :: "('a \ 'a) \ bool" + where "mu_nu_apx_nu f \ \ f \ (\ f \ L) \ \ f" + +definition mu_nu_apx_meet :: "('a \ 'a) \ bool" + where "mu_nu_apx_meet f \ has_apx_meet (\ f) (\ f) \ \ f \ \ f = \ f \ (\ f \ L)" + +definition apx_meet_below_nu :: "('a \ 'a) \ bool" + where "apx_meet_below_nu f \ has_apx_meet (\ f) (\ f) \ \ f \ \ f \ \ f" + +lemma mu_below_l: + "\ f \ \ f \ (\ f \ L)" + by simp + +lemma l_below_nu: + "has_least_fixpoint f \ has_greatest_fixpoint f \ \ f \ (\ f \ L) \ \ f" + by (simp add: mu_below_nu) + +lemma n_l_nu: + "has_least_fixpoint f \ has_greatest_fixpoint f \ (\ f \ (\ f \ L)) \ L = \ f \ L" + by (meson l_below_nu inf.sup_same_context inf_le1 order_trans sup.cobounded2) + +lemma l_apx_mu: + "\ f \ (\ f \ L) \ \ f" + by (simp add: apx_def d_mult_below le_supI1 sup_inf_distrib1) + +lemma 
nu_below_mu_nu_nu_below_mu_nu_2: + assumes "nu_below_mu_nu f" + shows "nu_below_mu_nu_2 f" +proof - + have "d(L) * \ f = d(L) * (d(L) * \ f)" + by (simp add: d_idempotent ils.il_inf_associative) + also have "... \ d(L) * (\ f \ (\ f \ L) \ d(\ f * bot) * top)" + using assms mult_isotone nu_below_mu_nu_def by blast + also have "... = d(L) * (\ f \ (\ f \ L)) \ d(L) * d(\ f * bot) * top" + by (simp add: ils.il_inf_associative mult_left_dist_sup) + also have "... \ \ f \ (\ f \ L) \ d(L) * d(\ f * bot) * top" + using d_mult_below sup_left_isotone by auto + also have "... = \ f \ (\ f \ L) \ d(d(\ f * bot) * L) * top" + by (simp add: d_commutative d_export) + also have "... = \ f \ (\ f \ L) \ d((\ f \ L) * bot) * top" + using l51 by auto + also have "... \ \ f \ (\ f \ L) \ d((\ f \ (\ f \ L)) * bot) * top" + by (meson d_isotone inf.eq_refl mult_isotone semiring.add_left_mono sup.cobounded2) + finally show ?thesis + using nu_below_mu_nu_2_def by auto +qed + +lemma nu_below_mu_nu_2_nu_below_mu_nu: + assumes "has_least_fixpoint f" + and "has_greatest_fixpoint f" + and "nu_below_mu_nu_2 f" + shows "nu_below_mu_nu f" +proof - + have "d(L) * \ f \ \ f \ (\ f \ L) \ d((\ f \ (\ f \ L)) * bot) * top" + using assms(3) nu_below_mu_nu_2_def by blast + also have "... 
\ \ f \ (\ f \ L) \ d(\ f * bot) * top" + by (metis assms(1,2) d_isotone inf.sup_monoid.add_commute inf.sup_right_divisibility le_supI le_supI2 mu_below_nu mult_left_isotone sup_left_divisibility) + finally show ?thesis + by (simp add: nu_below_mu_nu_def) +qed + +lemma nu_below_mu_nu_equivalent: + "has_least_fixpoint f \ has_greatest_fixpoint f \ (nu_below_mu_nu f \ nu_below_mu_nu_2 f)" + using nu_below_mu_nu_2_nu_below_mu_nu nu_below_mu_nu_nu_below_mu_nu_2 by blast + +lemma nu_below_mu_nu_2_mu_nu_apx_nu: + assumes "has_least_fixpoint f" + and "has_greatest_fixpoint f" + and "nu_below_mu_nu_2 f" + shows "mu_nu_apx_nu f" +proof - + have "\ f \ (\ f \ L) \ \ f \ L" + using assms(1,2) l_below_nu le_supI1 by blast + thus ?thesis + using assms(3) apx_def mu_nu_apx_nu_def nu_below_mu_nu_2_def by blast +qed + +lemma mu_nu_apx_nu_mu_nu_apx_meet: + assumes "mu_nu_apx_nu f" + shows "mu_nu_apx_meet f" +proof - + let ?l = "\ f \ (\ f \ L)" + have "is_apx_meet (\ f) (\ f) ?l" + apply (unfold is_apx_meet_def, intro conjI) + apply (simp add: l_apx_mu) + using assms mu_nu_apx_nu_def apply blast + by (metis apx_meet_L le_supI2 sup.order_iff sup_apx_left_isotone sup_inf_absorb) + thus ?thesis + by (smt apx_meet_char mu_nu_apx_meet_def) +qed + +lemma mu_nu_apx_meet_apx_meet_below_nu: + "has_least_fixpoint f \ has_greatest_fixpoint f \ mu_nu_apx_meet f \ apx_meet_below_nu f" + using apx_meet_below_nu_def l_below_nu mu_nu_apx_meet_def by auto + +lemma apx_meet_below_nu_nu_below_mu_nu_2: + assumes "apx_meet_below_nu f" + shows "nu_below_mu_nu_2 f" +proof - + let ?l = "\ f \ (\ f \ L)" + have "\m . 
m \ \ f \ m \ \ f \ m \ \ f \ d(L) * \ f \ ?l \ d(?l * bot) * top" + proof + fix m + show "m \ \ f \ m \ \ f \ m \ \ f \ d(L) * \ f \ ?l \ d(?l * bot) * top" + proof + assume 1: "m \ \ f \ m \ \ f \ m \ \ f" + hence "m \ ?l" + by (metis apx_def ils.il_associative sup.orderE sup.orderI sup_inf_distrib1 sup_inf_distrib2) + hence "m \ d(m * bot) * top \ ?l \ d(?l * bot) * top" + by (meson d_isotone order.trans le_supI le_supI2 mult_left_isotone sup.cobounded1) + thus "d(L) * \ f \ ?l \ d(?l * bot) * top" + using 1 apx_def order_lesseq_imp by blast + qed + qed + thus ?thesis + by (smt (verit) assms apx_meet_below_nu_def apx_meet_same apx_meet_unique is_apx_meet_def nu_below_mu_nu_2_def) +qed + +lemma has_apx_least_fixpoint_kappa_apx_meet: + assumes "has_least_fixpoint f" + and "has_greatest_fixpoint f" + and "apx.has_least_fixpoint f" + shows "kappa_apx_meet f" +proof - + have 1: "\w . w \ \ f \ w \ \ f \ d(L) * \ f \ w \ d(w * bot) * top" + by (metis assms(2,3) apx_def mult_right_isotone order_trans kappa_below_nu) + have "\w . w \ \ f \ w \ \ f \ w \ \ f \ L" + by (metis assms(1,3) sup_left_isotone apx_def mu_below_kappa order_trans) + hence "\w . 
w \ \ f \ w \ \ f \ w \ \ f" + using 1 apx_def by blast + hence "is_apx_meet (\ f) (\ f) (\ f)" + using assms apx_meet_char is_apx_meet_def kappa_apx_below_mu kappa_apx_below_nu kappa_apx_meet_def by presburger + thus ?thesis + by (simp add: assms(3) kappa_apx_meet_def apx_meet_char) +qed + +lemma kappa_apx_meet_apx_meet_below_nu: + "has_greatest_fixpoint f \ kappa_apx_meet f \ apx_meet_below_nu f" + using apx_meet_below_nu_def kappa_apx_meet_def kappa_below_nu by force + +lemma apx_meet_below_nu_kappa_mu_nu: + assumes "has_least_fixpoint f" + and "has_greatest_fixpoint f" + and "isotone f" + and "apx.isotone f" + and "apx_meet_below_nu f" + shows "kappa_mu_nu f" +proof - + let ?l = "\ f \ (\ f \ L)" + let ?m = "\ f \ \ f" + have 1: "?m = ?l" + by (metis assms(1,2,5) apx_meet_below_nu_nu_below_mu_nu_2 mu_nu_apx_meet_def mu_nu_apx_nu_mu_nu_apx_meet nu_below_mu_nu_2_mu_nu_apx_nu) + have 2: "?l \ f(?l) \ L" + proof - + have "?l \ \ f \ L" + using sup_right_isotone by auto + also have "... = f(\ f) \ L" + by (simp add: assms(1) mu_unfold) + also have "... \ f(?l) \ L" + by (metis assms(3) sup_left_isotone sup_ge1 isotone_def) + finally show ?thesis + . + qed + have "d(L) * f(?l) \ ?l \ d(?l * bot) * top" + proof - + have "d(L) * f(?l) \ d(L) * f(\ f)" + by (metis assms(1-3) l_below_nu mult_right_isotone ord.isotone_def) + also have "... = d(L) * \ f" + by (metis assms(2) nu_unfold) + also have "... \ ?l \ d(?l * bot) * top" + using apx_meet_below_nu_nu_below_mu_nu_2 assms(5) nu_below_mu_nu_2_def by blast + finally show ?thesis + . 
+ qed + hence 3: "?l \ f(?l)" + using 2 by (simp add: apx_def) + have 4: "f(?l) \ \ f" + proof - + have "?l \ \ f" + by (simp add: l_apx_mu) + thus ?thesis + by (metis assms(1,4) mu_unfold ord.isotone_def) + qed + have 5: "f(?l) \ \ f" + proof - + have "?l \ \ f" + by (meson apx_meet_below_nu_nu_below_mu_nu_2 assms(1,2,5) l_below_nu apx_def le_supI1 nu_below_mu_nu_2_def) + thus ?thesis + by (metis assms(2,4) nu_unfold ord.isotone_def) + qed + hence "f(?l) \ ?l" + using 1 4 apx_meet_below_nu_def assms(5) apx_greatest_lower_bound by fastforce + hence 6: "f(?l) = ?l" + using 3 apx.order.antisym by blast + have "\y . f(y) = y \ ?l \ y" + proof + fix y + show "f(y) = y \ ?l \ y" + proof + assume 7: "f(y) = y" + hence 8: "?l \ y \ L" + using assms(1) inf.cobounded2 is_least_fixpoint_def least_fixpoint semiring.add_mono by blast + have "y \ \ f" + using 7 assms(2) greatest_fixpoint is_greatest_fixpoint_def by auto + hence "d(L) * y \ ?l \ d(?l * bot) * top" + using 3 5 by (smt (z3) apx.order.trans apx_def semiring.distrib_left sup.absorb_iff2 sup_assoc) + thus "?l \ y" + using 8 by (simp add: apx_def) + qed + qed + thus ?thesis + using 1 6 by (smt (verit) kappa_mu_nu_def apx.is_least_fixpoint_def apx.least_fixpoint_char) +qed + +lemma kappa_mu_nu_has_apx_least_fixpoint: + "kappa_mu_nu f \ apx.has_least_fixpoint f" + by (simp add: kappa_mu_nu_def) + +lemma nu_below_mu_nu_kappa_mu_nu: + "has_least_fixpoint f \ has_greatest_fixpoint f \ isotone f \ apx.isotone f \ nu_below_mu_nu f \ kappa_mu_nu f" + using apx_meet_below_nu_kappa_mu_nu mu_nu_apx_meet_apx_meet_below_nu mu_nu_apx_nu_mu_nu_apx_meet nu_below_mu_nu_2_mu_nu_apx_nu nu_below_mu_nu_nu_below_mu_nu_2 by blast + +lemma kappa_mu_nu_nu_below_mu_nu: + "has_least_fixpoint f \ has_greatest_fixpoint f \ kappa_mu_nu f \ nu_below_mu_nu f" + by (simp add: apx_meet_below_nu_nu_below_mu_nu_2 has_apx_least_fixpoint_kappa_apx_meet kappa_apx_meet_apx_meet_below_nu kappa_mu_nu_def nu_below_mu_nu_2_nu_below_mu_nu) + +definition 
kappa_mu_nu_L :: "('a \<Rightarrow> 'a) \<Rightarrow> bool" + where "kappa_mu_nu_L f \<equiv> apx.has_least_fixpoint f \<and> \<kappa> f = \<mu> f \<squnion> d(\<nu> f * bot) * L" + +definition nu_below_mu_nu_L :: "('a \<Rightarrow> 'a) \<Rightarrow> bool" + where "nu_below_mu_nu_L f \<equiv> d(L) * \<nu> f \<le> \<mu> f \<squnion> d(\<nu> f * bot) * top" + +definition mu_nu_apx_nu_L :: "('a \<Rightarrow> 'a) \<Rightarrow> bool" + where "mu_nu_apx_nu_L f \<equiv> \<mu> f \<squnion> d(\<nu> f * bot) * L \<sqsubseteq> \<nu> f" + +definition mu_nu_apx_meet_L :: "('a \<Rightarrow> 'a) \<Rightarrow> bool" + where "mu_nu_apx_meet_L f \<equiv> has_apx_meet (\<mu> f) (\<nu> f) \<and> \<mu> f \<triangle> \<nu> f = \<mu> f \<squnion> d(\<nu> f * bot) * L" + +lemma n_below_l: + "x \<squnion> d(y * bot) * L \<le> x \<squnion> (y \<sqinter> L)" + using d_mult_below l13 sup_right_isotone by auto + +lemma n_equal_l: + assumes "nu_below_mu_nu_L f" + shows "\<mu> f \<squnion> d(\<nu> f * bot) * L = \<mu> f \<squnion> (\<nu> f \<sqinter> L)" +proof - + have "\<nu> f \<sqinter> L \<le> (\<mu> f \<squnion> d(\<nu> f * bot) * top) \<sqinter> L" + using assms l31 nu_below_mu_nu_L_def by force + also have "... \<le> \<mu> f \<squnion> d(\<nu> f * bot) * L" + using distrib(4) inf.sup_monoid.add_commute meet_domain_top sup_left_isotone by force + finally have "\<mu> f \<squnion> (\<nu> f \<sqinter> L) \<le> \<mu> f \<squnion> d(\<nu> f * bot) * L" + by auto + thus ?thesis + by (meson order.antisym n_below_l) +qed + +lemma nu_below_mu_nu_L_nu_below_mu_nu: + "nu_below_mu_nu_L f \<Longrightarrow> nu_below_mu_nu f" + using order_lesseq_imp sup.cobounded1 sup_left_isotone nu_below_mu_nu_L_def nu_below_mu_nu_def by blast + +lemma nu_below_mu_nu_L_kappa_mu_nu_L: + "has_least_fixpoint f \<Longrightarrow> has_greatest_fixpoint f \<Longrightarrow> isotone f \<Longrightarrow> apx.isotone f \<Longrightarrow> nu_below_mu_nu_L f \<Longrightarrow> kappa_mu_nu_L f" + using kappa_mu_nu_L_def kappa_mu_nu_def n_equal_l nu_below_mu_nu_L_nu_below_mu_nu nu_below_mu_nu_kappa_mu_nu by auto + +lemma nu_below_mu_nu_L_mu_nu_apx_nu_L: + "has_least_fixpoint f \<Longrightarrow> has_greatest_fixpoint f \<Longrightarrow> nu_below_mu_nu_L f \<Longrightarrow> mu_nu_apx_nu_L f" + using mu_nu_apx_nu_L_def mu_nu_apx_nu_def n_equal_l nu_below_mu_nu_2_mu_nu_apx_nu nu_below_mu_nu_L_nu_below_mu_nu nu_below_mu_nu_nu_below_mu_nu_2 by auto + +lemma nu_below_mu_nu_L_mu_nu_apx_meet_L: + "has_least_fixpoint f \<Longrightarrow> has_greatest_fixpoint f \<Longrightarrow> nu_below_mu_nu_L f \<Longrightarrow> mu_nu_apx_meet_L f" + using mu_nu_apx_meet_L_def 
mu_nu_apx_meet_def mu_nu_apx_nu_mu_nu_apx_meet n_equal_l nu_below_mu_nu_2_mu_nu_apx_nu nu_below_mu_nu_L_nu_below_mu_nu nu_below_mu_nu_nu_below_mu_nu_2 by auto + +lemma mu_nu_apx_nu_L_nu_below_mu_nu_L: + assumes "has_least_fixpoint f" + and "has_greatest_fixpoint f" + and "mu_nu_apx_nu_L f" + shows "nu_below_mu_nu_L f" +proof - + let ?n = "\ f \ d(\ f * bot) * L" + let ?l = "\ f \ (\ f \ L)" + have "d(L) * \ f \ ?n \ d(?n * bot) * top" + using assms(3) apx_def mu_nu_apx_nu_L_def by blast + also have "... \ ?n \ d(?l * bot) * top" + using d_isotone mult_left_isotone semiring.add_left_mono n_below_l by auto + also have "... \ ?n \ d(\ f * bot) * top" + by (meson assms(1,2) l_below_nu d_isotone mult_left_isotone sup_right_isotone) + finally show ?thesis + by (metis sup_assoc sup_right_top mult_left_dist_sup nu_below_mu_nu_L_def) +qed + +lemma kappa_mu_nu_L_mu_nu_apx_nu_L: + "has_greatest_fixpoint f \ kappa_mu_nu_L f \ mu_nu_apx_nu_L f" + using kappa_mu_nu_L_def kappa_apx_below_nu mu_nu_apx_nu_L_def by force + +lemma mu_nu_apx_meet_L_mu_nu_apx_nu_L: + "mu_nu_apx_meet_L f \ mu_nu_apx_nu_L f" + using apx_greatest_lower_bound mu_nu_apx_meet_L_def mu_nu_apx_nu_L_def by fastforce + +lemma kappa_mu_nu_L_nu_below_mu_nu_L: + "has_least_fixpoint f \ has_greatest_fixpoint f \ kappa_mu_nu_L f \ nu_below_mu_nu_L f" + using kappa_mu_nu_L_mu_nu_apx_nu_L mu_nu_apx_nu_L_nu_below_mu_nu_L by auto + +end + +class itering_apx = domain_itering_lattice_L + domain_semiring_lattice_apx +begin + +lemma circ_apx_isotone: + assumes "x \ y" + shows "x\<^sup>\ \ y\<^sup>\" +proof - + have 1: "x \ y \ L \ d(L) * y \ x \ d(x * bot) * top" + using assms apx_def by auto + have "d(L) * y\<^sup>\ \ (d(L) * y)\<^sup>\" + by (metis d_circ_import d_mult_below l2) + also have "... \ x\<^sup>\ * (d(x * bot) * top * x\<^sup>\)\<^sup>\" + using 1 by (metis circ_sup_1 circ_isotone) + also have "... 
= x\<^sup>\ \ x\<^sup>\ * d(x * bot) * top" + by (metis circ_left_top mult_assoc mult_left_dist_sup mult_1_right mult_top_circ) + also have "... \ x\<^sup>\ \ d(x\<^sup>\ * x * bot) * top" + by (metis sup_right_isotone mult_assoc mult_domain_top) + finally have 2: "d(L) * y\<^sup>\ \ x\<^sup>\ \ d(x\<^sup>\ * bot) * top" + using circ_plus_same d0_circ_left_unfold by auto + have "x\<^sup>\ \ y\<^sup>\ * L\<^sup>\" + using 1 by (metis circ_sup_1 circ_back_loop_fixpoint circ_isotone l40 le_iff_sup mult_assoc) + also have "... = y\<^sup>\ \ y\<^sup>\ * L" + by (simp add: circ_L mult_left_dist_sup sup_commute) + also have "... \ y\<^sup>\ \ y\<^sup>\ * bot \ L" + using l14 semiring.add_left_mono sup_monoid.add_assoc by auto + finally have "x\<^sup>\ \ y\<^sup>\ \ L" + using sup.absorb_iff1 zero_right_mult_decreasing by auto + thus ?thesis + using 2 by (simp add: apx_def) +qed + +end + +class omega_algebra_apx = domain_omega_algebra_lattice_L + domain_semiring_lattice_apx + +sublocale omega_algebra_apx < star: itering_apx where circ = star .. + +context omega_algebra_apx +begin + +lemma omega_apx_isotone: + assumes "x \ y" + shows "x\<^sup>\ \ y\<^sup>\" +proof - + have 1: "x \ y \ L \ d(L) * y \ x \ d(x * bot) * top" + using assms apx_def by auto + have "d(L) * y\<^sup>\ = (d(L) * y)\<^sup>\" + by (simp add: d_omega_export l2) + also have "... \ (x \ d(x * bot) * top)\<^sup>\" + using 1 by (simp add: omega_isotone) + also have "... = (x\<^sup>\ * d(x * bot) * top)\<^sup>\ \ (x\<^sup>\ * d(x * bot) * top)\<^sup>\ * x\<^sup>\" + by (simp add: ils.il_inf_associative omega_decompose) + also have "... \ x\<^sup>\ * d(x * bot) * top \ (x\<^sup>\ * d(x * bot) * top)\<^sup>\ * x\<^sup>\" + using mult_top_omega sup_left_isotone by blast + also have "... = x\<^sup>\ * d(x * bot) * top \ (1 \ x\<^sup>\ * d(x * bot) * top * (x\<^sup>\ * d(x * bot) * top)\<^sup>\) * x\<^sup>\" + by (simp add: star_left_unfold_equal) + also have "... 
\ x\<^sup>\ \ x\<^sup>\ * d(x * bot) * top" + by (smt (verit, ccfv_threshold) sup_mono le_sup_iff mult_assoc mult_left_one mult_right_dist_sup mult_right_isotone order_refl top_greatest) + also have "... \ x\<^sup>\ \ d(x\<^sup>\ * x * bot) * top" + by (metis sup_right_isotone mult_assoc mult_domain_top) + also have "... \ x\<^sup>\ \ d(x\<^sup>\ * bot) * top" + by (simp add: dL_star.d0_circ_left_unfold star_plus) + finally have 2: "d(L) * y\<^sup>\ \ x\<^sup>\ \ d(x\<^sup>\ * bot) * top" + by (meson sup_right_isotone d0_star_below_d0_omega mult_left_isotone order_trans) + have "x\<^sup>\ \ (y \ L)\<^sup>\" + using 1 by (simp add: omega_isotone) + also have "... = (y\<^sup>\ * L)\<^sup>\ \ (y\<^sup>\ * L)\<^sup>\ * y\<^sup>\" + by (simp add: omega_decompose) + also have "... = y\<^sup>\ * L * (y\<^sup>\ * L)\<^sup>\ \ (y\<^sup>\ * L)\<^sup>\ * y\<^sup>\" + using omega_unfold by auto + also have "... \ y\<^sup>\ * L \ (y\<^sup>\ * L)\<^sup>\ * y\<^sup>\" + using mult_L_omega omega_unfold sup_left_isotone by auto + also have "... = y\<^sup>\ * L \ (1 \ y\<^sup>\ * L * (y\<^sup>\ * L)\<^sup>\) * y\<^sup>\" + by (simp add: star_left_unfold_equal) + also have "... \ y\<^sup>\ * L \ y\<^sup>\" + by (simp add: dL_star.mult_L_circ_mult_below star_left_unfold_equal sup_commute) + also have "... 
\ y\<^sup>\ * bot \ L \ y\<^sup>\" + by (simp add: l14 le_supI1) + finally have "x\<^sup>\ \ y\<^sup>\ \ L" + using star_bot_below_omega sup.left_commute sup.order_iff sup_commute by auto + thus ?thesis + using 2 by (simp add: apx_def) +qed + +lemma combined_apx_isotone: + "x \ y \ (x\<^sup>\ \ L) \ x\<^sup>\ * z \ (y\<^sup>\ \ L) \ y\<^sup>\ * z" + using meet_L_apx_isotone mult_apx_left_isotone star.circ_apx_isotone sup_apx_isotone omega_apx_isotone by auto + +lemma d_split_nu_mu: + "d(L) * (y\<^sup>\ \ y\<^sup>\ * z) \ y\<^sup>\ * z \ ((y\<^sup>\ \ y\<^sup>\ * z) \ L) \ d((y\<^sup>\ \ y\<^sup>\ * z) * bot) * top" +proof - + have "d(L) * y\<^sup>\ \ (y\<^sup>\ \ L) \ d(y\<^sup>\ * bot) * top" + using l31 l91 omega_vector sup_right_isotone by auto + hence "d(L) * (y\<^sup>\ \ y\<^sup>\ * z) \ y\<^sup>\ * z \ (y\<^sup>\ \ L) \ d(y\<^sup>\ * bot) * top" + by (smt sup_assoc sup_commute sup_mono d_mult_below mult_left_dist_sup) + also have "... \ y\<^sup>\ * z \ ((y\<^sup>\ \ y\<^sup>\ * z) \ L) \ d(y\<^sup>\ * bot) * top" + by (simp add: le_supI1 le_supI2) + also have "... \ y\<^sup>\ * z \ ((y\<^sup>\ \ y\<^sup>\ * z) \ L) \ d((y\<^sup>\ \ y\<^sup>\ * z) * bot) * top" + by (meson d_isotone mult_left_isotone sup.cobounded1 sup_right_isotone) + finally show ?thesis + . +qed + +lemma loop_exists: + "d(L) * \ (\x . y * x \ z) \ \ (\x . y * x \ z) \ (\ (\x . y * x \ z) \ L) \ d(\ (\x . y * x \ z) * bot) * top" + by (simp add: d_split_nu_mu omega_loop_nu star_loop_mu) + +lemma loop_apx_least_fixpoint: + "apx.is_least_fixpoint (\x . y * x \ z) (\ (\x . y * x \ z) \ (\ (\x . y * x \ z) \ L))" + using apx.least_fixpoint_char affine_apx_isotone loop_exists affine_has_greatest_fixpoint affine_has_least_fixpoint affine_isotone nu_below_mu_nu_def nu_below_mu_nu_kappa_mu_nu kappa_mu_nu_def by auto + +lemma loop_has_apx_least_fixpoint: + "apx.has_least_fixpoint (\x . y * x \ z)" + by (metis apx.has_least_fixpoint_def loop_apx_least_fixpoint) + +lemma loop_semantics: + "\ (\x . 
y * x \<squnion> z) = \<mu> (\<lambda>x . y * x \<squnion> z) \<squnion> (\<nu> (\<lambda>x . y * x \<squnion> z) \<sqinter> L)" + using apx.least_fixpoint_char loop_apx_least_fixpoint by auto + +lemma loop_semantics_kappa_mu_nu: + "\<kappa> (\<lambda>x . y * x \<squnion> z) = (y\<^sup>\<omega> \<sqinter> L) \<squnion> y\<^sup>\<star> * z" +proof - + have "\<kappa> (\<lambda>x . y * x \<squnion> z) = y\<^sup>\<star> * z \<squnion> ((y\<^sup>\<omega> \<squnion> y\<^sup>\<star> * z) \<sqinter> L)" + by (metis loop_semantics omega_loop_nu star_loop_mu) + thus ?thesis + by (metis sup.absorb2 sup_commute sup_ge2 sup_inf_distrib1) +qed + +lemma loop_semantics_kappa_mu_nu_domain: + "\<kappa> (\<lambda>x . y * x \<squnion> z) = d(y\<^sup>\<omega>) * L \<squnion> y\<^sup>\<star> * z" + by (simp add: omega_meet_L loop_semantics_kappa_mu_nu) + +lemma loop_semantics_apx_isotone: + "w \<sqsubseteq> y \<Longrightarrow> \<kappa> (\<lambda>x . w * x \<squnion> z) \<sqsubseteq> \<kappa> (\<lambda>x . y * x \<squnion> z)" + by (metis loop_semantics_kappa_mu_nu combined_apx_isotone) + +end + +end + diff --git a/thys/Correctness_Algebras/Extended_Designs.thy b/thys/Correctness_Algebras/Extended_Designs.thy new file mode 100644 --- /dev/null +++ b/thys/Correctness_Algebras/Extended_Designs.thy @@ -0,0 +1,289 @@ +(* Title: Extended Designs + Author: Walter Guttmann + Maintainer: Walter Guttmann +*) + +section \<open>Extended Designs\<close> + +theory Extended_Designs + +imports Omega_Algebras Domain + +begin + +class domain_semiring_L_below = left_zero_domain_semiring + L + + assumes L_left_zero_below: "L * x \<le> L" + assumes mult_L_split: "x * L = x * bot \<squnion> d(x) * L" +begin + +lemma d_zero_mult_L: + "d(x * bot) * L \<le> x" + by (metis le_sup_iff mult_L_split mult_assoc mult_left_zero zero_right_mult_decreasing) + +lemma mult_L: + "x * L \<le> x * bot \<squnion> L" + by (metis sup_right_isotone d_mult_below mult_L_split) + +lemma d_mult_L: + "d(x) * L \<le> x * L" + by (metis sup_right_divisibility mult_L_split) + +lemma d_L_split: + "x * d(y) * L = x * bot \<squnion> d(x * y) * L" + by (metis d_commutative d_mult_d d_zero mult_L_split mult_assoc mult_left_zero) + +lemma d_mult_mult_L: + "d(x * y) * L \<le> x * d(y) * L" + using d_L_split by auto + +lemma L_L: + "L * L = L" + by (metis d_restrict_equals le_iff_sup mult_L_split zero_right_mult_decreasing) 
+ +end + +class antidomain_semiring_L = left_zero_antidomain_semiring + L + + assumes d_zero_mult_L: "d(x * bot) * L \<le> x" + assumes d_L_zero : "d(L * bot) = 1" + assumes mult_L : "x * L \<le> x * bot \<squnion> L" +begin + +lemma L_left_zero: + "L * x = L" + by (metis order.antisym d_L_zero d_zero_mult_L mult_assoc mult_left_one mult_left_zero zero_right_mult_decreasing) + +subclass domain_semiring_L_below + apply unfold_locales + apply (simp add: L_left_zero) + apply (rule order.antisym) + apply (smt d_restrict_equals le_iff_sup mult_L mult_assoc mult_left_dist_sup) + by (metis le_sup_iff d_L_zero d_mult_d d_zero_mult_L mult_assoc mult_right_isotone mult_1_right bot_least) + +end + +class ed_below = bounded_left_zero_omega_algebra + domain_semiring_L_below + Omega + + assumes Omega_def: "x\<^sup>\<Omega> = d(x\<^sup>\<omega>) * L \<squnion> x\<^sup>\<star>" +begin + +lemma Omega_isotone: + "x \<le> y \<Longrightarrow> x\<^sup>\<Omega> \<le> y\<^sup>\<Omega>" + by (metis Omega_def sup_mono d_isotone mult_left_isotone omega_isotone star.circ_isotone) + +lemma star_below_Omega: + "x\<^sup>\<star> \<le> x\<^sup>\<Omega>" + using Omega_def by auto + +lemma one_below_Omega: + "1 \<le> x\<^sup>\<Omega>" + using order_trans star.circ_reflexive star_below_Omega by blast + +lemma L_left_zero_star: + "L * x\<^sup>\<star> = L" + by (meson L_left_zero_below order.antisym star.circ_back_loop_prefixpoint sup.boundedE) + +lemma L_left_zero_Omega: + "L * x\<^sup>\<Omega> = L" + using L_left_zero_star L_left_zero_below Omega_def mult_left_dist_sup sup.order_iff sup_monoid.add_commute by auto + +lemma mult_L_star: + "(x * L)\<^sup>\<star> = 1 \<squnion> x * L" + by (metis L_left_zero_star mult_assoc star.circ_left_unfold) + +lemma mult_L_omega_below: + "(x * L)\<^sup>\<omega> \<le> x * L" + by (metis L_left_zero_below mult_right_isotone omega_slide) + +lemma mult_L_sup_star: + "(x * L \<squnion> y)\<^sup>\<star> = y\<^sup>\<star> \<squnion> y\<^sup>\<star> * x * L" + by (metis L_left_zero_star sup_commute mult_assoc star.circ_unfold_sum) + +lemma mult_L_sup_omega_below: + "(x * L \<squnion> y)\<^sup>\<omega> \<le> y\<^sup>\<omega> \<squnion> y\<^sup>\<star> * x * L" +proof - + have "(x * L \<squnion> y)\<^sup>\<omega> 
= (y\<^sup>\ * x * L)\<^sup>\ \ (y\<^sup>\ * x * L)\<^sup>\ * y\<^sup>\" + by (simp add: ils.il_inf_associative omega_decompose sup_commute) + also have "... \ y\<^sup>\ * x * L \ (y\<^sup>\ * x * L)\<^sup>\ * y\<^sup>\" + using sup_left_isotone mult_L_omega_below by auto + also have "... = y\<^sup>\ * x * L \ y\<^sup>\ * x * L * y\<^sup>\ \ y\<^sup>\" + by (smt L_left_zero_star sup_assoc sup_commute mult_assoc star.circ_loop_fixpoint) + also have "... \ y\<^sup>\ \ y\<^sup>\ * x * L" + by (metis L_left_zero_star sup_commute eq_refl mult_assoc star.circ_back_loop_fixpoint) + finally show ?thesis + . +qed + +lemma mult_L_sup_circ: + "(x * L \ y)\<^sup>\ = d(y\<^sup>\) * L \ y\<^sup>\ \ y\<^sup>\ * x * L" +proof - + have "(x * L \ y)\<^sup>\ = d((x * L \ y)\<^sup>\) * L \ (x * L \ y)\<^sup>\" + by (simp add: Omega_def) + also have "... \ d(y\<^sup>\ \ y\<^sup>\ * x * L) * L \ (x * L \ y)\<^sup>\" + by (metis sup_left_isotone d_isotone mult_L_sup_omega_below mult_left_isotone) + also have "... = d(y\<^sup>\) * L \ d(y\<^sup>\ * x * L) * L \ (x * L \ y)\<^sup>\" + by (simp add: d_dist_sup mult_right_dist_sup) + also have "... \ d(y\<^sup>\) * L \ y\<^sup>\ * x * L * L \ (x * L \ y)\<^sup>\" + by (meson d_mult_L order.refl sup.mono) + also have "... = d(y\<^sup>\) * L \ y\<^sup>\ \ y\<^sup>\ * x * L" + by (smt L_L sup_assoc sup_commute le_iff_sup mult_L_sup_star mult_assoc order_refl) + finally have 1: "(x * L \ y)\<^sup>\ \ d(y\<^sup>\) * L \ y\<^sup>\ \ y\<^sup>\ * x * L" + . 
+ have 2: "d(y\<^sup>\) * L \ (x * L \ y)\<^sup>\" + using Omega_isotone Omega_def by force + have "y\<^sup>\ \ y\<^sup>\ * x * L \ (x * L \ y)\<^sup>\" + by (metis Omega_def sup_ge2 mult_L_sup_star) + hence "d(y\<^sup>\) * L \ y\<^sup>\ \ y\<^sup>\ * x * L \ (x * L \ y)\<^sup>\" + using 2 by simp + thus ?thesis + using 1 by (simp add: order.antisym) +qed + +lemma circ_sup_d: + "(x\<^sup>\ * y)\<^sup>\ * x\<^sup>\ = d((x\<^sup>\ * y)\<^sup>\) * L \ ((x\<^sup>\ * y)\<^sup>\ * x\<^sup>\ \ (x\<^sup>\ * y)\<^sup>\ * d(x\<^sup>\) * L)" +proof - + have "(x\<^sup>\ * y)\<^sup>\ * x\<^sup>\ = ((d(x\<^sup>\) * L \ x\<^sup>\) * y)\<^sup>\ * x\<^sup>\" + by (simp add: Omega_def) + also have "... = (d(x\<^sup>\) * L * y \ x\<^sup>\ * y)\<^sup>\ * x\<^sup>\" + by (simp add: mult_right_dist_sup) + also have "... \ (d(x\<^sup>\) * L \ x\<^sup>\ * y)\<^sup>\ * x\<^sup>\" + by (metis L_left_zero_below Omega_isotone sup_left_isotone mult_assoc mult_left_isotone mult_right_isotone) + also have "... = (d((x\<^sup>\ * y)\<^sup>\) * L \ (x\<^sup>\ * y)\<^sup>\ \ (x\<^sup>\ * y)\<^sup>\ * d(x\<^sup>\) * L) * x\<^sup>\" + by (simp add: mult_L_sup_circ) + also have "... = d((x\<^sup>\ * y)\<^sup>\) * L * x\<^sup>\ \ (x\<^sup>\ * y)\<^sup>\ * x\<^sup>\ \ (x\<^sup>\ * y)\<^sup>\ * d(x\<^sup>\) * L * x\<^sup>\" + using mult_right_dist_sup by auto + also have "... = d((x\<^sup>\ * y)\<^sup>\) * L \ (x\<^sup>\ * y)\<^sup>\ * x\<^sup>\ \ (x\<^sup>\ * y)\<^sup>\ * d(x\<^sup>\) * L" + by (simp add: L_left_zero_Omega mult.assoc) + also have "... = d((x\<^sup>\ * y)\<^sup>\) * L \ ((x\<^sup>\ * y)\<^sup>\ * x\<^sup>\ \ (x\<^sup>\ * y)\<^sup>\ * d(x\<^sup>\) * L)" + by (simp add: Omega_def ils.il_inf_associative semiring.distrib_left sup_left_commute sup_monoid.add_commute) + finally have 1: "(x\<^sup>\ * y)\<^sup>\ * x\<^sup>\ \ d((x\<^sup>\ * y)\<^sup>\) * L \ ((x\<^sup>\ * y)\<^sup>\ * x\<^sup>\ \ (x\<^sup>\ * y)\<^sup>\ * d(x\<^sup>\) * L)" + . 
+ have "d((x\<^sup>\ * y)\<^sup>\) * L \ (x\<^sup>\ * y)\<^sup>\" + using Omega_isotone Omega_def mult_left_isotone by auto + also have "... \ (x\<^sup>\ * y)\<^sup>\ * x\<^sup>\" + by (metis mult_right_isotone mult_1_right one_below_Omega) + finally have 2: "d((x\<^sup>\ * y)\<^sup>\) * L \ (x\<^sup>\ * y)\<^sup>\ * x\<^sup>\" + . + have 3: "(x\<^sup>\ * y)\<^sup>\ * x\<^sup>\ \ (x\<^sup>\ * y)\<^sup>\ * x\<^sup>\" + by (meson Omega_isotone order.trans mult_left_isotone mult_right_isotone star_below_Omega) + have "(x\<^sup>\ * y)\<^sup>\ * d(x\<^sup>\) * L \ (x\<^sup>\ * y)\<^sup>\ * x\<^sup>\" + by (metis Omega_def sup_commute mult_assoc mult_left_sub_dist_sup_right) + also have "... \ (x\<^sup>\ * y)\<^sup>\ * x\<^sup>\" + using Omega_isotone Omega_def mult_left_isotone by force + finally have "d((x\<^sup>\ * y)\<^sup>\) * L \ ((x\<^sup>\ * y)\<^sup>\ * x\<^sup>\ \ (x\<^sup>\ * y)\<^sup>\ * d(x\<^sup>\) * L) \ (x\<^sup>\ * y)\<^sup>\ * x\<^sup>\" + using 2 3 by (simp add: sup_assoc) + thus ?thesis + using 1 by (simp add: order.antisym) +qed + +(* +lemma mult_L_omega: "(x * L)\<^sup>\ = x * L" nitpick [expect=genuine,card=5] oops +lemma mult_L_sup_omega: "(x * L \ y)\<^sup>\ = y\<^sup>\ \ y\<^sup>\ * x * L" nitpick [expect=genuine,card=5] oops +lemma d_Omega_circ_simulate_right_plus: "z * x \ y * y\<^sup>\ * z \ w \ z * x\<^sup>\ \ y\<^sup>\ * (z \ w * x\<^sup>\)" nitpick [expect=genuine,card=4] oops +lemma d_Omega_circ_simulate_left_plus: "x * z \ z * y\<^sup>\ \ w \ x\<^sup>\ * z \ (z \ x\<^sup>\ * w) * y\<^sup>\" nitpick [expect=genuine,card=3] oops +*) + +end + +class ed = ed_below + + assumes L_left_zero: "L * x = L" +begin + +lemma mult_L_omega: + "(x * L)\<^sup>\ = x * L" + by (metis L_left_zero omega_slide) + +lemma mult_L_sup_omega: + "(x * L \ y)\<^sup>\ = y\<^sup>\ \ y\<^sup>\ * x * L" + by (metis L_left_zero ils.il_inf_associative mult_bot_add_omega sup_commute) + +lemma d_Omega_circ_simulate_right_plus: + assumes "z * x \ y * y\<^sup>\ * z \ w" + 
shows "z * x\<^sup>\ \ y\<^sup>\ * (z \ w * x\<^sup>\)" +proof - + have "z * x \ y * d(y\<^sup>\) * L * z \ y * y\<^sup>\ * z \ w" + using assms Omega_def ils.il_inf_associative mult_right_dist_sup semiring.distrib_left by auto + also have "... \ y * d(y\<^sup>\) * L \ y * y\<^sup>\ * z \ w" + by (metis L_left_zero_below sup_commute sup_right_isotone mult_assoc mult_right_isotone) + also have "... = y * bot \ d(y * y\<^sup>\) * L \ y * y\<^sup>\ * z \ w" + by (simp add: d_L_split) + also have "... = d(y\<^sup>\) * L \ y * y\<^sup>\ * z \ w" + by (smt sup_assoc sup_commute sup_bot_left mult_assoc mult_left_dist_sup omega_unfold) + finally have 1: "z * x \ d(y\<^sup>\) * L \ y * y\<^sup>\ * z \ w" + . + have "(d(y\<^sup>\) * L \ y\<^sup>\ * z \ y\<^sup>\ * w * d(x\<^sup>\) * L \ y\<^sup>\ * w * x\<^sup>\) * x = d(y\<^sup>\) * L * x \ y\<^sup>\ * z * x \ y\<^sup>\ * w * d(x\<^sup>\) * L * x \ y\<^sup>\ * w * x\<^sup>\ * x" + using mult_right_dist_sup by fastforce + also have "... \ d(y\<^sup>\) * L \ y\<^sup>\ * z * x \ y\<^sup>\ * w * d(x\<^sup>\) * L * x \ y\<^sup>\ * w * x\<^sup>\ * x" + by (metis L_left_zero_below sup_left_isotone mult_assoc mult_right_isotone) + also have "... \ d(y\<^sup>\) * L \ y\<^sup>\ * z * x \ y\<^sup>\ * w * d(x\<^sup>\) * L \ y\<^sup>\ * w * x\<^sup>\ * x" + by (metis L_left_zero_below sup_commute sup_left_isotone mult_assoc mult_right_isotone) + also have "... \ d(y\<^sup>\) * L \ y\<^sup>\ * z * x \ y\<^sup>\ * w * d(x\<^sup>\) * L \ y\<^sup>\ * w * x\<^sup>\" + by (meson star.circ_back_loop_prefixpoint sup.boundedE sup_right_isotone) + also have "... \ d(y\<^sup>\) * L \ y\<^sup>\ * (d(y\<^sup>\) * L \ y * y\<^sup>\ * z \ w) \ y\<^sup>\ * w * d(x\<^sup>\) * L \ y\<^sup>\ * w * x\<^sup>\" + using 1 by (smt sup_left_isotone sup_right_isotone le_iff_sup mult_assoc mult_left_dist_sup) + also have "... 
= d(y\<^sup>\) * L \ y\<^sup>\ * y * y\<^sup>\ * z \ y\<^sup>\ * w * d(x\<^sup>\) * L \ y\<^sup>\ * w * x\<^sup>\" + by (smt sup_assoc sup_commute sup_idem mult_assoc mult_left_dist_sup d_L_split star.circ_back_loop_fixpoint star_mult_omega) + also have "... \ d(y\<^sup>\) * L \ y\<^sup>\ * z \ y\<^sup>\ * w * d(x\<^sup>\) * L \ y\<^sup>\ * w * x\<^sup>\" + using mult_isotone order_refl semiring.add_right_mono star.circ_mult_upper_bound star.right_plus_below_circ sup_right_isotone by auto + finally have 2: "z * x\<^sup>\ \ d(y\<^sup>\) * L \ y\<^sup>\ * z \ y\<^sup>\ * w * d(x\<^sup>\) * L \ y\<^sup>\ * w * x\<^sup>\" + by (smt le_sup_iff sup_ge1 star.circ_loop_fixpoint star_right_induct) + have "z * x * x\<^sup>\ \ y * y\<^sup>\ * z * x\<^sup>\ \ d(y\<^sup>\) * L * x\<^sup>\ \ w * x\<^sup>\" + using 1 by (metis sup_commute mult_left_isotone mult_right_dist_sup) + also have "... \ y * y\<^sup>\ * z * x\<^sup>\ \ d(y\<^sup>\) * L \ w * x\<^sup>\" + by (metis L_left_zero eq_refl ils.il_inf_associative) + finally have "z * x\<^sup>\ \ y\<^sup>\ \ y\<^sup>\ * d(y\<^sup>\) * L \ y\<^sup>\ * w * x\<^sup>\" + by (smt sup_assoc sup_commute left_plus_omega mult_assoc mult_left_dist_sup omega_induct omega_unfold star.left_plus_circ) + hence "z * x\<^sup>\ \ y\<^sup>\ \ y\<^sup>\ * w * x\<^sup>\" + by (metis sup_commute d_mult_L le_iff_sup mult_assoc mult_right_isotone omega_sub_vector order_trans star_mult_omega) + hence "d(z * x\<^sup>\) * L \ d(y\<^sup>\) * L \ y\<^sup>\ * w * d(x\<^sup>\) * L" + by (smt sup_assoc sup_commute d_L_split d_dist_sup le_iff_sup mult_right_dist_sup) + hence "z * d(x\<^sup>\) * L \ z * bot \ d(y\<^sup>\) * L \ y\<^sup>\ * w * d(x\<^sup>\) * L" + using d_L_split sup_assoc sup_right_isotone by force + also have "... 
\ y\<^sup>\ * z \ d(y\<^sup>\) * L \ y\<^sup>\ * w * d(x\<^sup>\) * L" + by (smt sup_commute sup_left_isotone sup_ge1 order_trans star.circ_loop_fixpoint zero_right_mult_decreasing) + finally have "z * d(x\<^sup>\) * L \ d(y\<^sup>\) * L \ y\<^sup>\ * z \ y\<^sup>\ * w * d(x\<^sup>\) * L \ y\<^sup>\ * w * x\<^sup>\" + by (simp add: le_supI2 sup_commute) + thus ?thesis + using 2 by (smt L_left_zero Omega_def sup_assoc le_iff_sup mult_assoc mult_left_dist_sup mult_right_dist_sup) +qed + +lemma d_Omega_circ_simulate_left_plus: + assumes "x * z \ z * y\<^sup>\ \ w" + shows "x\<^sup>\ * z \ (z \ x\<^sup>\ * w) * y\<^sup>\" +proof - + have "x * (z * d(y\<^sup>\) * L \ z * y\<^sup>\ \ d(x\<^sup>\) * L \ x\<^sup>\ * w * d(y\<^sup>\) * L \ x\<^sup>\ * w * y\<^sup>\) = x * z * d(y\<^sup>\) * L \ x * z * y\<^sup>\ \ d(x\<^sup>\) * L \ x * x\<^sup>\ * w * d(y\<^sup>\) * L \ x * x\<^sup>\ * w * y\<^sup>\" + by (smt sup_assoc sup_commute mult_assoc mult_left_dist_sup d_L_split omega_unfold) + also have "... \ (z * d(y\<^sup>\) * L \ z * y\<^sup>\ \ w) * d(y\<^sup>\) * L \ (z * d(y\<^sup>\) * L \ z * y\<^sup>\ \ w) * y\<^sup>\ \ d(x\<^sup>\) * L \ x\<^sup>\ * w * d(y\<^sup>\) * L \ x\<^sup>\ * w * y\<^sup>\" + by (smt assms Omega_def sup_assoc sup_ge2 le_iff_sup mult_assoc mult_left_dist_sup mult_right_dist_sup star.circ_loop_fixpoint) + also have "... = z * d(y\<^sup>\) * L \ z * y\<^sup>\ * d(y\<^sup>\) * L \ w * d(y\<^sup>\) * L \ z * y\<^sup>\ \ w * y\<^sup>\ \ d(x\<^sup>\) * L \ x\<^sup>\ * w * d(y\<^sup>\) * L \ x\<^sup>\ * w * y\<^sup>\" + by (smt L_left_zero sup_assoc sup_commute sup_idem mult_assoc mult_right_dist_sup star.circ_transitive_equal) + also have "... 
= z * d(y\<^sup>\) * L \ w * d(y\<^sup>\) * L \ z * y\<^sup>\ \ w * y\<^sup>\ \ d(x\<^sup>\) * L \ x\<^sup>\ * w * d(y\<^sup>\) * L \ x\<^sup>\ * w * y\<^sup>\" + by (smt sup_assoc sup_commute sup_idem le_iff_sup mult_assoc d_L_split star_mult_omega zero_right_mult_decreasing) + finally have "x * (z * d(y\<^sup>\) * L \ z * y\<^sup>\ \ d(x\<^sup>\) * L \ x\<^sup>\ * w * d(y\<^sup>\) * L \ x\<^sup>\ * w * y\<^sup>\) \ z * d(y\<^sup>\) * L \ z * y\<^sup>\ \ d(x\<^sup>\) * L \ x\<^sup>\ * w * d(y\<^sup>\) * L \ x\<^sup>\ * w * y\<^sup>\" + by (smt sup_assoc sup_commute sup_idem mult_assoc star.circ_loop_fixpoint) + thus ?thesis + by (smt (verit, del_insts) L_left_zero Omega_def sup_assoc le_sup_iff sup_ge1 mult_assoc mult_left_dist_sup mult_right_dist_sup star.circ_back_loop_fixpoint star_left_induct) +qed + +end + +text \Theorem 2.5 and Theorem 50.4\ + +sublocale ed < ed_omega: itering where circ = Omega + apply unfold_locales + apply (smt sup_assoc sup_commute sup_bot_left circ_sup_d Omega_def mult_left_dist_sup mult_right_dist_sup d_L_split d_dist_sup omega_decompose star.circ_sup_1 star.circ_slide) + apply (smt L_left_zero sup_assoc sup_commute sup_bot_left Omega_def mult_assoc mult_left_dist_sup mult_right_dist_sup d_L_split omega_slide star.circ_mult) + using d_Omega_circ_simulate_right_plus apply blast + by (simp add: d_Omega_circ_simulate_left_plus) + +sublocale ed < ed_star: itering where circ = star .. 
+ +class ed_2 = ed_below + antidomain_semiring_L + Omega +begin + +subclass ed + apply unfold_locales + by (rule L_left_zero) + +end + +end + diff --git a/thys/Correctness_Algebras/General_Refinement_Algebras.thy b/thys/Correctness_Algebras/General_Refinement_Algebras.thy new file mode 100644 --- /dev/null +++ b/thys/Correctness_Algebras/General_Refinement_Algebras.thy @@ -0,0 +1,311 @@ +(* Title: General Refinement Algebras + Author: Walter Guttmann + Maintainer: Walter Guttmann +*) + +section \General Refinement Algebras\ + +theory General_Refinement_Algebras + +imports Omega_Algebras + +begin + +class general_refinement_algebra = left_kleene_algebra + Omega + + assumes Omega_unfold: "y\<^sup>\ \ 1 \ y * y\<^sup>\" + assumes Omega_induct: "x \ z \ y * x \ x \ y\<^sup>\ * z" +begin + +lemma Omega_unfold_equal: + "y\<^sup>\ = 1 \ y * y\<^sup>\" + by (smt Omega_induct Omega_unfold sup_right_isotone order.antisym mult_right_isotone mult_1_right) + +lemma Omega_sup_1: + "(x \ y)\<^sup>\ = x\<^sup>\ * (y * x\<^sup>\)\<^sup>\" + apply (rule order.antisym) + apply (smt Omega_induct Omega_unfold_equal sup_assoc sup_commute sup_right_isotone mult_assoc mult_right_dist_sup mult_right_isotone mult_1_right order_refl) + by (smt Omega_induct Omega_unfold_equal sup_assoc sup_commute mult_assoc mult_left_one mult_right_dist_sup mult_1_right order_refl) + +lemma Omega_left_slide: + "(x * y)\<^sup>\ * x \ x * (y * x)\<^sup>\" +proof - + have "1 \ y * (x * y)\<^sup>\ * x \ 1 \ y * x * (1 \ (y * (x * y)\<^sup>\) * x)" + by (smt Omega_unfold_equal sup_right_isotone mult_assoc mult_left_one mult_left_sub_dist_sup mult_right_dist_sup mult_right_isotone mult_1_right) + thus ?thesis + by (smt Omega_induct Omega_unfold_equal le_sup_iff mult_assoc mult_left_one mult_right_dist_sup mult_right_isotone mult_1_right) +qed + +end + +text \Theorem 50.3\ + +sublocale general_refinement_algebra < Omega: left_conway_semiring where circ = Omega + apply unfold_locales + using Omega_unfold_equal apply 
simp + apply (simp add: Omega_left_slide) + by (simp add: Omega_sup_1) + +context general_refinement_algebra +begin + +lemma star_below_Omega: + "x\<^sup>\ \ x\<^sup>\" + by (metis Omega_induct mult_1_right order_refl star.circ_left_unfold) + +lemma star_mult_Omega: + "x\<^sup>\ = x\<^sup>\ * x\<^sup>\" + by (metis Omega.left_plus_below_circ sup_commute sup_ge1 order.eq_iff star.circ_loop_fixpoint star_left_induct_mult_iff) + +lemma Omega_one_greatest: + "x \ 1\<^sup>\" + by (metis Omega_induct sup_bot_left mult_left_one order_refl order_trans zero_right_mult_decreasing) + +lemma greatest_left_zero: + "1\<^sup>\ * x = 1\<^sup>\" + by (simp add: Omega_one_greatest Omega_induct order.antisym) + +(* +lemma circ_right_unfold: "1 \ x\<^sup>\ * x = x\<^sup>\" nitpick [expect=genuine,card=8] oops +lemma circ_slide: "(x * y)\<^sup>\ * x = x * (y * x)\<^sup>\" nitpick [expect=genuine,card=6] oops +lemma circ_simulate: "z * x \ y * z \ z * x\<^sup>\ \ y\<^sup>\ * z" nitpick [expect=genuine,card=6] oops +lemma circ_simulate_right: "z * x \ y * z \ w \ z * x\<^sup>\ \ y\<^sup>\ * (z \ w * x\<^sup>\)" nitpick [expect=genuine,card=6] oops +lemma circ_simulate_right_1: "z * x \ y * z \ z * x\<^sup>\ \ y\<^sup>\ * z" nitpick [expect=genuine,card=6] oops +lemma circ_simulate_right_plus: "z * x \ y * y\<^sup>\ * z \ w \ z * x\<^sup>\ \ y\<^sup>\ * (z \ w * x\<^sup>\)" nitpick [expect=genuine,card=6] oops +lemma circ_simulate_right_plus_1: "z * x \ y * y\<^sup>\ * z \ z * x\<^sup>\ \ y\<^sup>\ * z" nitpick [expect=genuine,card=6] oops +lemma circ_simulate_left_1: "x * z \ z * y \ x\<^sup>\ * z \ z * y\<^sup>\ \ x\<^sup>\ * bot" oops (* holds in LKA, counterexample exists in GRA *) +lemma circ_simulate_left_plus_1: "x * z \ z * y\<^sup>\ \ x\<^sup>\ * z \ z * y\<^sup>\ \ x\<^sup>\ * bot" oops (* holds in LKA, counterexample exists in GRA *) +lemma circ_simulate_absorb: "y * x \ x \ y\<^sup>\ * x \ x \ y\<^sup>\ * bot" nitpick [expect=genuine,card=8] oops (* holds in LKA, 
counterexample exists in GRA *) +*) + +end + +class bounded_general_refinement_algebra = general_refinement_algebra + bounded_left_kleene_algebra +begin + +lemma Omega_one: + "1\<^sup>\ = top" + by (simp add: Omega_one_greatest order.antisym) + +lemma top_left_zero: + "top * x = top" + using Omega_one greatest_left_zero by blast + +end + +sublocale bounded_general_refinement_algebra < Omega: bounded_left_conway_semiring where circ = Omega .. + +class left_demonic_refinement_algebra = general_refinement_algebra + + assumes Omega_isolate: "y\<^sup>\ \ y\<^sup>\ * bot \ y\<^sup>\" +begin + +lemma Omega_isolate_equal: + "y\<^sup>\ = y\<^sup>\ * bot \ y\<^sup>\" + using Omega_isolate order.antisym le_sup_iff star_below_Omega zero_right_mult_decreasing by auto + +(* +lemma Omega_sum_unfold_1: "(x \ y)\<^sup>\ = y\<^sup>\ \ y\<^sup>\ * x * (x \ y)\<^sup>\" oops +lemma Omega_sup_3: "(x \ y)\<^sup>\ = (x\<^sup>\ * y)\<^sup>\ * x\<^sup>\" oops +*) + +end + +class bounded_left_demonic_refinement_algebra = left_demonic_refinement_algebra + bounded_left_kleene_algebra +begin + +(* +lemma Omega_mult: "(x * y)\<^sup>\ = 1 \ x * (y * x)\<^sup>\ * y" oops +lemma Omega_sup: "(x \ y)\<^sup>\ = (x\<^sup>\ * y)\<^sup>\ * x\<^sup>\" oops +lemma Omega_simulate: "z * x \ y * z \ z * x\<^sup>\ \ y\<^sup>\ * z" nitpick [expect=genuine,card=6] oops +lemma Omega_separate_2: "y * x \ x * (x \ y) \ (x \ y)\<^sup>\ = x\<^sup>\ * y\<^sup>\" oops +lemma Omega_circ_simulate_right_plus: "z * x \ y * y\<^sup>\ * z \ w \ z * x\<^sup>\ \ y\<^sup>\ * (z \ w * x\<^sup>\)" nitpick [expect=genuine,card=6] oops +lemma Omega_circ_simulate_left_plus: "x * z \ z * y\<^sup>\ \ w \ x\<^sup>\ * z \ (z \ x\<^sup>\ * w) * y\<^sup>\" oops +*) + +end + +sublocale bounded_left_demonic_refinement_algebra < Omega: bounded_left_conway_semiring where circ = Omega .. 
+ +class demonic_refinement_algebra = left_zero_kleene_algebra + left_demonic_refinement_algebra +begin + +lemma Omega_mult: + "(x * y)\<^sup>\ = 1 \ x * (y * x)\<^sup>\ * y" + by (smt (verit, del_insts) Omega.circ_left_slide Omega_induct Omega_unfold_equal order.eq_iff mult_assoc mult_left_dist_sup mult_1_right) + +lemma Omega_sup: + "(x \ y)\<^sup>\ = (x\<^sup>\ * y)\<^sup>\ * x\<^sup>\" + by (smt Omega_sup_1 Omega_mult mult_assoc mult_left_dist_sup mult_left_one mult_right_dist_sup mult_1_right) + +lemma Omega_simulate: + "z * x \ y * z \ z * x\<^sup>\ \ y\<^sup>\ * z" + by (smt Omega_induct Omega_unfold_equal sup_right_isotone mult_assoc mult_left_dist_sup mult_left_isotone mult_1_right) + +end + +text \Theorem 2.4\ + +sublocale demonic_refinement_algebra < Omega1: itering_1 where circ = Omega + apply unfold_locales + apply (simp add: Omega_simulate mult_assoc) + by (simp add: Omega_simulate) + +sublocale demonic_refinement_algebra < Omega1: left_zero_conway_semiring_1 where circ = Omega .. 
+ +context demonic_refinement_algebra +begin + +lemma Omega_sum_unfold_1: + "(x \ y)\<^sup>\ = y\<^sup>\ \ y\<^sup>\ * x * (x \ y)\<^sup>\" + by (smt Omega1.circ_sup_9 Omega.circ_loop_fixpoint Omega_isolate_equal sup_assoc sup_commute mult_assoc mult_left_zero mult_right_dist_sup) + +lemma Omega_sup_3: + "(x \ y)\<^sup>\ = (x\<^sup>\ * y)\<^sup>\ * x\<^sup>\" + apply (rule order.antisym) + apply (metis Omega_sum_unfold_1 Omega_induct eq_refl sup_commute) + by (simp add: Omega.circ_isotone Omega_sup mult_left_isotone star_below_Omega) + +lemma Omega_separate_2: + "y * x \ x * (x \ y) \ (x \ y)\<^sup>\ = x\<^sup>\ * y\<^sup>\" + apply (rule order.antisym) + apply (smt (verit, del_insts) Omega_induct Omega_sum_unfold_1 sup_right_isotone mult_assoc mult_left_isotone star_mult_Omega star_simulation_left) + by (simp add: Omega.circ_sub_dist_3) + +lemma Omega_circ_simulate_right_plus: + assumes "z * x \ y * y\<^sup>\ * z \ w" + shows "z * x\<^sup>\ \ y\<^sup>\ * (z \ w * x\<^sup>\)" +proof - + have "z * x\<^sup>\ = z \ z * x * x\<^sup>\" + using Omega1.circ_back_loop_fixpoint Omega1.circ_plus_same sup_commute mult_assoc by auto + also have "... \ y * y\<^sup>\ * z * x\<^sup>\ \ z \ w * x\<^sup>\" + by (smt assms sup_assoc sup_commute sup_right_isotone le_iff_sup mult_right_dist_sup) + finally have "z * x\<^sup>\ \ (y * y\<^sup>\)\<^sup>\ * (z \ w * x\<^sup>\)" + by (smt Omega_induct sup_assoc sup_commute mult_assoc) + thus ?thesis + by (simp add: Omega.left_plus_circ) +qed + +lemma Omega_circ_simulate_left_plus: + assumes "x * z \ z * y\<^sup>\ \ w" + shows "x\<^sup>\ * z \ (z \ x\<^sup>\ * w) * y\<^sup>\" +proof - + have "x * ((z \ x\<^sup>\ * w) * y\<^sup>\) \ (z * y\<^sup>\ \ w \ x * x\<^sup>\ * w) * y\<^sup>\" + by (smt assms mult_assoc mult_left_dist_sup sup_left_isotone mult_left_isotone) + also have "... 
\ z * y\<^sup>\ * y\<^sup>\ \ w * y\<^sup>\ \ x\<^sup>\ * w * y\<^sup>\" + by (smt Omega.left_plus_below_circ sup_right_isotone mult_left_isotone mult_right_dist_sup) + finally have 1: "x * ((z \ x\<^sup>\ * w) * y\<^sup>\) \ (z \ x\<^sup>\ * w) * y\<^sup>\" + by (metis Omega.circ_transitive_equal mult_assoc Omega.circ_reflexive sup_assoc le_iff_sup mult_left_one mult_right_dist_sup) + have "x\<^sup>\ * z = x\<^sup>\ * bot \ x\<^sup>\ * z" + by (metis Omega_isolate_equal mult_assoc mult_left_zero mult_right_dist_sup) + also have "... \ x\<^sup>\ * w * y\<^sup>\ \ x\<^sup>\ * (z \ x\<^sup>\ * w) * y\<^sup>\" + by (metis Omega1.circ_back_loop_fixpoint bot_least idempotent_bot_closed le_supI2 mult_isotone mult_left_sub_dist_sup_left semiring.add_mono zero_right_mult_decreasing mult_assoc) + also have "... \ (z \ x\<^sup>\ * w) * y\<^sup>\" + using 1 by (metis le_supI mult_right_sub_dist_sup_right star_left_induct_mult mult_assoc) + finally show ?thesis + . +qed + +lemma Omega_circ_simulate_right: + assumes "z * x \ y * z \ w" + shows "z * x\<^sup>\ \ y\<^sup>\ * (z \ w * x\<^sup>\)" +proof - + have "y * z \ w \ y * y\<^sup>\ * z \ w" + using Omega.circ_mult_increasing mult_left_isotone sup_left_isotone by auto + thus ?thesis + using Omega_circ_simulate_right_plus assms order.trans by blast +qed + +end + +sublocale demonic_refinement_algebra < Omega: itering where circ = Omega + apply unfold_locales + apply (simp add: Omega_sup) + using Omega_mult apply blast + apply (simp add: Omega_circ_simulate_right_plus) + using Omega_circ_simulate_left_plus by auto + +class bounded_demonic_refinement_algebra = demonic_refinement_algebra + bounded_left_zero_kleene_algebra +begin + +lemma Omega_one: + "1\<^sup>\ = top" + by (simp add: Omega_one_greatest order.antisym) + +lemma top_left_zero: + "top * x = top" + using Omega_one greatest_left_zero by auto + +end + +sublocale bounded_demonic_refinement_algebra < Omega: bounded_itering where circ = Omega .. 
+ +class general_refinement_algebra_omega = left_omega_algebra + Omega + + assumes omega_left_zero: "x\<^sup>\ \ x\<^sup>\ * y" + assumes Omega_def: "x\<^sup>\ = x\<^sup>\ \ x\<^sup>\" +begin + +lemma omega_left_zero_equal: + "x\<^sup>\ * y = x\<^sup>\" + by (simp add: order.antisym omega_left_zero omega_sub_vector) + +subclass left_demonic_refinement_algebra + apply unfold_locales + apply (metis Omega_def sup_commute eq_refl mult_1_right omega_loop_fixpoint) + apply (metis Omega_def mult_right_dist_sup omega_induct omega_left_zero_equal) + by (metis Omega_def mult_right_sub_dist_sup_right sup_commute sup_right_isotone omega_left_zero_equal) + +end + +class left_demonic_refinement_algebra_omega = bounded_left_omega_algebra + Omega + + assumes top_left_zero: "top * x = top" + assumes Omega_def: "x\<^sup>\ = x\<^sup>\ \ x\<^sup>\" +begin + +subclass general_refinement_algebra_omega + apply unfold_locales + apply (metis mult_assoc omega_vector order_refl top_left_zero) + by (rule Omega_def) + +end + +class demonic_refinement_algebra_omega = left_demonic_refinement_algebra_omega + bounded_left_zero_omega_algebra +begin + +lemma Omega_mult: + "(x * y)\<^sup>\ = 1 \ x * (y * x)\<^sup>\ * y" + by (metis Omega_def comb1.circ_mult_1 omega_left_zero_equal omega_translate) + +lemma Omega_sup: + "(x \ y)\<^sup>\ = (x\<^sup>\ * y)\<^sup>\ * x\<^sup>\" +proof - + have "(x\<^sup>\ * y)\<^sup>\ * x\<^sup>\ = (x\<^sup>\ * y)\<^sup>\ * x\<^sup>\ \ (x\<^sup>\ * y)\<^sup>\ \ (x\<^sup>\ * y)\<^sup>\ * x\<^sup>\\<^sup>\ * x\<^sup>\" + by (smt sup_commute Omega_def mult_assoc mult_right_dist_sup mult_bot_add_omega omega_left_zero_equal star.circ_sup_1) + thus ?thesis + using Omega_def Omega_sup_1 comb2.circ_slide_1 omega_left_zero_equal by auto +qed + +lemma Omega_simulate: + "z * x \ y * z \ z * x\<^sup>\ \ y\<^sup>\ * z" + using Omega_def comb2.circ_simulate omega_left_zero_equal by auto + +subclass demonic_refinement_algebra .. 
+ +end + +(* +text hold in GRA and LKA +lemma circ_circ_mult: "1\<^sup>\ * x\<^sup>\ = x\<^sup>\\<^sup>\" oops +lemma sub_mult_one_circ: "x * 1\<^sup>\ \ 1\<^sup>\ * x" oops +lemma circ_circ_mult_1: "x\<^sup>\ * 1\<^sup>\ = x\<^sup>\\<^sup>\" oops +lemma "y * x \ x \ y\<^sup>\ * x \ 1\<^sup>\ * x" oops + +text unknown +lemma circ_simulate_2: "y * x\<^sup>\ \ x\<^sup>\ * y\<^sup>\ \ y\<^sup>\ * x\<^sup>\ \ x\<^sup>\ * y\<^sup>\" oops (* holds in LKA *) +lemma circ_simulate_3: "y * x\<^sup>\ \ x\<^sup>\ \ y\<^sup>\ * x\<^sup>\ \ x\<^sup>\ * y\<^sup>\" oops (* holds in LKA *) +lemma circ_separate_mult_1: "y * x \ x * y \ (x * y)\<^sup>\ \ x\<^sup>\ * y\<^sup>\" oops +lemma "x\<^sup>\ = (x * x)\<^sup>\ * (x \ 1)" oops +lemma "y\<^sup>\ * x\<^sup>\ \ x\<^sup>\ * y\<^sup>\ \ (x \ y)\<^sup>\ = x\<^sup>\ * y\<^sup>\" oops +lemma "y * x \ (1 \ x) * y\<^sup>\ \ (x \ y)\<^sup>\ = x\<^sup>\ * y\<^sup>\" oops +*) + +end + diff --git a/thys/Correctness_Algebras/Hoare.thy b/thys/Correctness_Algebras/Hoare.thy new file mode 100644 --- /dev/null +++ b/thys/Correctness_Algebras/Hoare.thy @@ -0,0 +1,1185 @@ +(* Title: Hoare Calculus + Author: Walter Guttmann + Maintainer: Walter Guttmann +*) + +section \Hoare Calculus\ + +theory Hoare + +imports Complete_Tests Preconditions + +begin + +class ite = + fixes ite :: "'a \ 'a \ 'a \ 'a" ("_ \ _ \ _" [58,58,58] 57) + +class hoare_triple = + fixes hoare_triple :: "'a \ 'a \ 'a \ bool" ("_ \ _ \ _" [54,54,54] 53) + +class ifthenelse = precondition + ite + + assumes ite_pre: "x\-p\y\-q = -p*(x\-q) \ --p*(y\-q)" +begin + +text \Theorem 40.2\ + +lemma ite_pre_then: + "-p*(x\-p\y\-q) = -p*(x\-q)" +proof - + have "-p*(x\-p\y\-q) = -p*(x\-q) \ bot*(y\-q)" + by (smt (z3) ite_pre pre_closed tests_dual.sba_dual.sup_right_unit tests_dual.sub_commutative tests_dual.sup_left_zero tests_dual.sup_right_dist_inf tests_dual.top_double_complement tests_dual.wnf_lemma_1) + thus ?thesis + by (metis pre_closed tests_dual.sba_dual.sup_right_unit 
tests_dual.sub_sup_closed tests_dual.sup_left_zero) +qed + +text \Theorem 40.3\ + +lemma ite_pre_else: + "--p*(x\-p\y\-q) = --p*(y\-q)" +proof - + have "--p*(x\-p\y\-q) = bot*(x\-q) \ --p*(y\-q)" + by (smt (z3) ite_pre pre_closed tests_dual.sub_commutative tests_dual.sub_inf_left_zero tests_dual.sup_left_zero tests_dual.sup_right_dist_inf tests_dual.top_double_complement tests_dual.wnf_lemma_3) + thus ?thesis + by (metis pre_closed tests_dual.sba_dual.sub_sup_demorgan tests_dual.sub_inf_left_zero tests_dual.sup_left_zero) +qed + +lemma ite_import_mult_then: + "-p*-q \ x\-r \ -p*-q \ x\-p\y\-r" + by (smt ite_pre_then leq_def pre_closed sub_assoc sub_comm sub_mult_closed) + +lemma ite_import_mult_else: + "--p*-q \ y\-r \ --p*-q \ x\-p\y\-r" + by (smt ite_pre_else leq_def pre_closed sub_assoc sub_comm sub_mult_closed) + +text \Theorem 40.1\ + +lemma ite_import_mult: + "-p*-q \ x\-r \ --p*-q \ y\-r \ -q \ x\-p\y\-r" + by (smt (verit) ite_import_mult_else ite_import_mult_then pre_closed tests_dual.sba_dual.inf_less_eq_cases) + +end + +class whiledo = ifthenelse + while + + assumes while_pre: "-p\x\-q = -p*(x\-p\x\-q) \ --p*-q" + assumes while_post: "-p\x\-q = -p\x\--p*-q" +begin + +text \Theorem 40.4\ + +lemma while_pre_then: + "-p*(-p\x\-q) = -p*(x\-p\x\-q)" + by (smt pre_closed tests_dual.sub_commutative while_pre tests_dual.wnf_lemma_1) + +text \Theorem 40.5\ + +lemma while_pre_else: + "--p*(-p\x\-q) = --p*-q" + by (smt pre_closed tests_dual.sub_commutative while_pre tests_dual.wnf_lemma_3) + +text \Theorem 40.6\ + +lemma while_pre_sub_1: + "-p\x\-q \ x*(-p\x)\-p\1\-q" + by (smt (z3) ite_import_mult pre_closed pre_one_increasing pre_seq tests_dual.sba_dual.transitive tests_dual.sub_sup_closed tests_dual.upper_bound_right while_pre_else while_pre_then) + +text \Theorem 40.7\ + +lemma while_pre_sub_2: + "-p\x\-q \ x\-p\1\-p\x\-q" + by (smt (z3) ite_import_mult pre_closed pre_one_increasing tests_dual.sba_dual.transitive tests_dual.sub_sup_closed 
tests_dual.upper_bound_right while_pre_then) + +text \Theorem 40.8\ + +lemma while_pre_compl: + "--p \ -p\x\--p" + by (metis pre_closed tests_dual.sup_idempotent tests_dual.upper_bound_right while_pre_else) + +lemma while_pre_compl_one: + "--p \ -p\x\1" + by (metis tests_dual.sba_dual.top_double_complement while_post tests_dual.sup_right_unit while_pre_compl) + +text \Theorem 40.10\ + +lemma while_export_equiv: + "-q \ -p\x\1 \ -p*-q \ -p\x\1" + by (smt pre_closed tests_dual.sba_dual.shunting tests_dual.sba_dual.sub_less_eq_def tests_dual.sba_dual.top_double_complement while_pre_compl_one) + +lemma nat_test_pre: + assumes "nat_test t s" + and "-q \ s" + and "\n . t n*-p*-q \ x\pSum t n*-q" + shows "-q \ -p\x\--p*-q" +proof - + have 1: "-q*--p \ -p\x\--p*-q" + by (metis pre_closed tests_dual.sub_commutative while_post tests_dual.upper_bound_right while_pre_else) + have "\n . t n*-p*-q \ -p\x\--p*-q" + proof + fix n + show "t n*-p*-q \ -p\x\--p*-q" + proof (induct n rule: nat_less_induct) + fix n + have 2: "t n = --(t n)" + using assms(1) nat_test_def by auto + assume "\m -p\x\--p*-q" + hence "\m t m*--p*-q \ -p\x\--p*-q" + using 1 by (smt (verit, del_insts) assms(1) tests_dual.greatest_lower_bound leq_def nat_test_def pre_closed tests_dual.sub_associative tests_dual.sub_commutative sub_mult_closed) + hence "\m -p\x\--p*-q" + by (smt (verit, del_insts) assms(1) tests_dual.sup_right_unit tests_dual.sup_left_dist_inf tests_dual.sup_right_dist_inf nat_test_def tests_dual.inf_complement sub_mult_closed) + hence "pSum t n*-q \ -p\x\--p*-q" + by (smt assms(1) pSum_below_nat pre_closed sub_mult_closed) + hence "t n*-p*-q*(-p\x\--p*-q) = t n*-p*-q" + using 2 by (smt assms(1,3) leq_def pSum_test_nat pre_closed pre_sub_distr sub_assoc sub_comm sub_mult_closed transitive while_pre_then) + thus "t n*-p*-q \ -p\x\--p*-q" + using 2 by (smt (z3) pre_closed tests_dual.sub_sup_closed tests_dual.upper_bound_right) + qed + qed + hence "-q*-p \ -p\x\--p*-q" + by (smt (verit, del_insts) 
assms(1,2) leq_def nat_test_def pre_closed tests_dual.sub_associative tests_dual.sub_commutative sub_mult_closed) + thus ?thesis + using 1 by (smt (z3) pre_closed tests_dual.sba_dual.inf_less_eq_cases tests_dual.sub_commutative tests_dual.sub_sup_closed) +qed + +lemma nat_test_pre_1: + assumes "nat_test t s" + and "-r \ s" + and "-r \ -q" + and "\n . t n*-p*-q \ x\pSum t n*-q" + shows "-r \ -p\x\--p*-q" +proof - + let ?qs = "-q*s" + have 1: "-r \ ?qs" + by (metis assms(1-3) nat_test_def tests_dual.least_upper_bound) + have "\n . t n*-p*?qs \ x\pSum t n*?qs" + proof + fix n + have 2: "pSum t n \ s" + by (simp add: assms(1) pSum_below_sum) + have "t n = t n * s" + by (metis assms(1) nat_test_def tests_dual.sba_dual.less_eq_inf) + hence "t n*-p*?qs = t n*-p*-q" + by (smt (verit, ccfv_threshold) assms(1) nat_test_def tests_dual.sub_sup_closed tests_dual.sub_associative tests_dual.sub_commutative) + also have "t n*-p*-q \ x\pSum t n*-q" + by (simp add: assms(4)) + also have "x\pSum t n*-q = x\pSum t n*?qs" + using 2 by (smt (verit, ccfv_SIG) assms(1) leq_def nat_test_def pSum_test_nat tests_dual.sub_associative tests_dual.sub_commutative) + finally show "t n*-p*?qs \ x\pSum t n*?qs" + . + qed + hence 3: "?qs \ -p\x\--p*?qs" + by (smt (verit, ccfv_threshold) assms(1) tests_dual.upper_bound_left tests_dual.upper_bound_right nat_test_def nat_test_pre pSum_test_nat pre_closed tests_dual.sub_associative sub_mult_closed transitive) + have "-p\x\--p*?qs \ -p\x\--p*-q" + by (metis assms(1) nat_test_def pre_lower_bound_left tests_dual.sub_sup_closed while_post) + thus ?thesis + using 1 3 by (smt (verit, del_insts) leq_def tests_dual.sub_associative assms(1) nat_test_def pre_closed sub_mult_closed) +qed + +lemma nat_test_pre_2: + assumes "nat_test t s" + and "-r \ s" + and "\n . 
t n*-p \ x\pSum t n" + shows "-r \ -p\x\1" +proof - + have 1: "-r \ -p\x\--p*s" + by (smt (verit, ccfv_threshold) assms leq_def nat_test_def nat_test_pre_1 pSum_below_sum pSum_test_nat tests_dual.sub_associative tests_dual.sub_commutative) + have "-p\x\--p*s \ -p\x\1" + by (metis assms(1) nat_test_def pre_below_pre_one while_post) + thus ?thesis + using 1 by (smt (verit) assms(1) nat_test_def pre_closed tests_dual.sba_dual.top_double_complement while_post tests_dual.transitive) +qed + +lemma nat_test_pre_3: + assumes "nat_test t s" + and "-q \ s" + and "\n . t n*-p*-q \ x\pSum t n*-q" + shows "-q \ -p\x\1" +proof - + have "-p\x\--p*-q \ -p\x\1" + by (metis pre_below_pre_one sub_mult_closed) + thus ?thesis + by (smt (verit, ccfv_threshold) assms pre_closed tests_dual.sba_dual.top_double_complement tests_dual.sba_dual.transitive tests_dual.sub_sup_closed nat_test_pre) +qed + +definition aL :: "'a" + where "aL \ 1\1\1" + +lemma aL_test: + "aL = --aL" + by (metis aL_def one_def pre_closed) + +end + +class atoms = tests + + fixes Atomic_program :: "'a set" + fixes Atomic_test :: "'a set" + assumes one_atomic_program: "1 \ Atomic_program" + assumes zero_atomic_test: "bot \ Atomic_test" + assumes atomic_test_test: "p \ Atomic_test \ p = --p" + +class while_program = whiledo + atoms + power +begin + +inductive_set Test_expression :: "'a set" + where atom_test: "p \ Atomic_test \ p \ Test_expression" + | neg_test: "p \ Test_expression \ -p \ Test_expression" + | conj_test: "p \ Test_expression \ q \ Test_expression \ p*q \ Test_expression" + +lemma test_expression_test: + "p \ Test_expression \ p = --p" + apply (induct rule: Test_expression.induct) + apply (simp add: atomic_test_test) + apply simp + by (metis tests_dual.sub_sup_closed) + +lemma disj_test: + "p \ Test_expression \ q \ Test_expression \ p\q \ Test_expression" + by (smt conj_test neg_test tests_dual.sub_inf_def test_expression_test) + +lemma zero_test_expression: + "bot \ Test_expression" + by (simp add: 
Test_expression.atom_test zero_atomic_test) + +lemma one_test_expression: + "1 \ Test_expression" + using Test_expression.simps tests_dual.sba_dual.one_def zero_test_expression by blast + +lemma pSum_test_expression: + "(\n . t n \ Test_expression) \ pSum t m \ Test_expression" + apply (induct m) + apply (simp add: zero_test_expression) + by (simp add: disj_test) + +inductive_set While_program :: "'a set" + where atom_prog: "x \ Atomic_program \ x \ While_program" + | seq_prog: "x \ While_program \ y \ While_program \ x*y \ While_program" + | cond_prog: "p \ Test_expression \ x \ While_program \ y \ While_program \ x\p\y \ While_program" + | while_prog: "p \ Test_expression \ x \ While_program \ p\x \ While_program" + +lemma one_while_program: + "1 \ While_program" + by (simp add: While_program.atom_prog one_atomic_program) + +lemma power_while_program: + "x \ While_program \ x^m \ While_program" + apply (induct m) + apply (simp add: one_while_program) + by (simp add: While_program.seq_prog) + +inductive_set Pre_expression :: "'a set" + where test_pre: "p \ Test_expression \ p \ Pre_expression" + | neg_pre: "p \ Pre_expression \ -p \ Pre_expression" + | conj_pre: "p \ Pre_expression \ q \ Pre_expression \ p*q \ Pre_expression" + | pre_pre: "p \ Pre_expression \ x \ While_program \ x\p \ Pre_expression" + +lemma pre_expression_test: + "p \ Pre_expression \ p = --p" + apply (induct rule: Pre_expression.induct) + apply (simp add: test_expression_test) + apply simp + apply (metis sub_mult_closed) + by (metis pre_closed) + +lemma disj_pre: + "p \ Pre_expression \ q \ Pre_expression \ p\q \ Pre_expression" + by (smt conj_pre neg_pre tests_dual.sub_inf_def pre_expression_test) + +lemma zero_pre_expression: + "bot \ Pre_expression" + by (simp add: Pre_expression.test_pre zero_test_expression) + +lemma one_pre_expression: + "1 \ Pre_expression" + by (simp add: Pre_expression.test_pre one_test_expression) + +lemma pSum_pre_expression: + "(\n . 
t n \ Pre_expression) \ pSum t m \ Pre_expression" + apply (induct m) + apply (simp add: zero_pre_expression) + by (simp add: disj_pre) + +lemma aL_pre_expression: + "aL \ Pre_expression" + by (simp add: Pre_expression.pre_pre While_program.while_prog aL_def one_pre_expression one_test_expression one_while_program) + +end + +class hoare_calculus = while_program + complete_tests +begin + +definition tfun :: "'a \ 'a \ 'a \ 'a \ 'a" + where "tfun p x q r \ p \ (x\q*r)" + +lemma tfun_test: + "p = --p \ q = --q \ r = --r \ tfun p x q r = --tfun p x q r" + by (smt tfun_def sub_mult_closed pre_closed tests_dual.inf_closed) + +lemma tfun_pre_expression: + "x \ While_program \ p \ Pre_expression \ q \ Pre_expression \ r \ Pre_expression \ tfun p x q r \ Pre_expression" + by (simp add: Pre_expression.conj_pre Pre_expression.pre_pre disj_pre tfun_def) + +lemma tfun_iso: + "p = --p \ q = --q \ r = --r \ s = --s \ r \ s \ tfun p x q r \ tfun p x q s" + by (smt tfun_def tests_dual.sub_sup_right_isotone pre_iso sub_mult_closed tests_dual.sub_inf_right_isotone pre_closed) + +definition tseq :: "'a \ 'a \ 'a \ 'a \ nat \ 'a" + where "tseq p x q r m \ (tfun p x q ^ m) r" + +lemma tseq_test: + "p = --p \ q = --q \ r = --r \ tseq p x q r m = --tseq p x q r m" + apply (induct m) + apply (smt tseq_def tfun_test power_zero_id id_def) + by (metis tseq_def tfun_test power_succ_unfold_ext) + +lemma tseq_test_seq: + "p = --p \ q = --q \ r = --r \ test_seq (tseq p x q r)" + using test_seq_def tseq_test by auto + +lemma tseq_pre_expression: + "x \ While_program \ p \ Pre_expression \ q \ Pre_expression \ r \ Pre_expression \ tseq p x q r m \ Pre_expression" + apply (induct m) + apply (smt tseq_def id_def power_zero_id) + by (smt tseq_def power_succ_unfold_ext tfun_pre_expression) + +definition tsum :: "'a \ 'a \ 'a \ 'a \ 'a" + where "tsum p x q r \ Sum (tseq p x q r)" + +lemma tsum_test: + "p = --p \ q = --q \ r = --r \ tsum p x q r = --tsum p x q r" + using Sum_test tseq_test_seq tsum_def 
by auto + +lemma t_fun_test: + "q = --q \ tfun (-p) x (p\x\q) (-p\(x\(p\x\q)*aL)) = --tfun (-p) x (p\x\q) (-p\(x\(p\x\q)*aL))" + by (metis aL_test pre_closed tests_dual.sba_dual.double_negation tfun_def tfun_test) + +lemma t_fun_pre_expression: + "x \ While_program \ p \ Test_expression \ q \ Pre_expression \ tfun (-p) x (p\x\q) (-p\(x\(p\x\q)*aL)) \ Pre_expression" + by (simp add: Pre_expression.conj_pre Pre_expression.neg_pre Pre_expression.pre_pre Pre_expression.test_pre While_program.while_prog aL_pre_expression disj_pre tfun_pre_expression) + +lemma t_seq_test: + "q = --q \ tseq (-p) x (p\x\q) (-p\(x\(p\x\q)*aL)) m = --tseq (-p) x (p\x\q) (-p\(x\(p\x\q)*aL)) m" + by (metis aL_test pre_closed tests_dual.sba_dual.double_negation tfun_def tfun_test tseq_test) + +lemma t_seq_test_seq: + "q = --q \ test_seq (tseq (-p) x (p\x\q) (-p\(x\(p\x\q)*aL)))" + using test_seq_def t_seq_test by auto + +lemma t_seq_pre_expression: + "x \ While_program \ p \ Test_expression \ q \ Pre_expression \ tseq (-p) x (p\x\q) (-p\(x\(p\x\q)*aL)) m \ Pre_expression" + using Pre_expression.pre_pre Pre_expression.test_pre Test_expression.neg_test While_program.while_prog aL_pre_expression tfun_def tfun_pre_expression tseq_pre_expression by auto + +lemma t_sum_test: + "q = --q \ tsum (-p) x (p\x\q) (-p\(x\(p\x\q)*aL)) = --tsum (-p) x (p\x\q) (-p\(x\(p\x\q)*aL))" + using Sum_test t_seq_test_seq tsum_def by auto + +definition tfun2 :: "'a \ 'a \ 'a \ 'a \ 'a \ 'a" + where "tfun2 p q x r s \ p \ q*(x\r*s)" + +lemma tfun2_test: + "p = --p \ q = --q \ r = --r \ s = --s \ tfun2 p q x r s = --tfun2 p q x r s" + by (smt tfun2_def sub_mult_closed pre_closed tests_dual.inf_closed) + +lemma tfun2_pre_expression: + "x \ While_program \ p \ Pre_expression \ q \ Pre_expression \ r \ Pre_expression \ s \ Pre_expression \ tfun2 p q x r s \ Pre_expression" + by (simp add: Pre_expression.conj_pre Pre_expression.pre_pre disj_pre tfun2_def) + +lemma tfun2_iso: + "p = --p \ q = --q \ r = --r \ s1 = --s1 \ s2 = 
--s2 \ s1 \ s2 \ tfun2 p q x r s1 \ tfun2 p q x r s2" + by (smt tfun2_def tests_dual.sub_inf_right_isotone pre_iso sub_mult_closed tests_dual.sub_sup_right_isotone pre_closed) + +definition tseq2 :: "'a \ 'a \ 'a \ 'a \ 'a \ nat \ 'a" + where "tseq2 p q x r s m \ (tfun2 p q x r ^ m) s" + +lemma tseq2_test: + "p = --p \ q = --q \ r = --r \ s = --s \ tseq2 p q x r s m = --tseq2 p q x r s m" + apply (induct m) + apply (smt tseq2_def power_zero_id id_def) + by (smt tseq2_def tfun2_test power_succ_unfold_ext) + +lemma tseq2_test_seq: + "p = --p \ q = --q \ r = --r \ s = --s \ test_seq (tseq2 p q x r s)" + using test_seq_def tseq2_test by force + +lemma tseq2_pre_expression: + "x \ While_program \ p \ Pre_expression \ q \ Pre_expression \ r \ Pre_expression \ s \ Pre_expression \ tseq2 p q x r s m \ Pre_expression" + apply (induct m) + apply (smt tseq2_def id_def power_zero_id) + by (smt tseq2_def power_succ_unfold_ext tfun2_pre_expression) + +definition tsum2 :: "'a \ 'a \ 'a \ 'a \ 'a \ 'a" + where "tsum2 p q x r s \ Sum (tseq2 p q x r s)" + +lemma tsum2_test: + "p = --p \ q = --q \ r = --r \ s = --s \ tsum2 p q x r s = --tsum2 p q x r s" + using Sum_test tseq2_test_seq tsum2_def by force + +lemma t_fun2_test: + "p = --p \ q = --q \ tfun2 (-p*q) p x (p\x\q) (-p*q\p*(x\(p\x\q)*aL)) = --tfun2 (-p*q) p x (p\x\q) (-p*q\p*(x\(p\x\q)*aL))" + by (smt (z3) aL_test pre_closed tests_dual.sub_sup_closed tfun2_def tfun2_test) + +lemma t_fun2_pre_expression: + "x \ While_program \ p \ Test_expression \ q \ Pre_expression \ tfun2 (-p*q) p x (p\x\q) (-p*q\p*(x\(p\x\q)*aL)) \ Pre_expression" + by (simp add: Pre_expression.conj_pre Pre_expression.neg_pre Pre_expression.pre_pre Pre_expression.test_pre While_program.while_prog aL_pre_expression disj_pre tfun2_pre_expression) + +lemma t_seq2_test: + "p = --p \ q = --q \ tseq2 (-p*q) p x (p\x\q) (-p*q\p*(x\(p\x\q)*aL)) m = --tseq2 (-p*q) p x (p\x\q) (-p*q\p*(x\(p\x\q)*aL)) m" + by (smt (z3) aL_test pre_closed tests_dual.sub_sup_closed 
tfun2_def tfun2_test tseq2_test) + +lemma t_seq2_test_seq: + "p = --p \ q = --q \ test_seq (tseq2 (-p*q) p x (p\x\q) (-p*q\p*(x\(p\x\q)*aL)))" + using test_seq_def t_seq2_test by auto + +lemma t_seq2_pre_expression: + "x \ While_program \ p \ Test_expression \ q \ Pre_expression \ tseq2 (-p*q) p x (p\x\q) (-p*q\p*(x\(p\x\q)*aL)) m \ Pre_expression" + by (simp add: Pre_expression.conj_pre Pre_expression.neg_pre Pre_expression.pre_pre Pre_expression.test_pre While_program.while_prog aL_pre_expression disj_pre tseq2_pre_expression) + +lemma t_sum2_test: + "p = --p \ q = --q \ tsum2 (-p*q) p x (p\x\q) (-p*q\p*(x\(p\x\q)*aL)) = --tsum2 (-p*q) p x (p\x\q) (-p*q\p*(x\(p\x\q)*aL))" + using Sum_test t_seq2_test_seq tsum2_def by auto + +lemma t_seq2_below_t_seq: + assumes "p \ Test_expression" + and "q \ Pre_expression" + shows "tseq2 (-p*q) p x (p\x\q) (-p*q\p*(x\(p\x\q)*aL)) m \ tseq (-p) x (p\x\q) (-p\(x\(p\x\q)*aL)) m" +proof - + let ?t2 = "tseq2 (-p*q) p x (p\x\q) (-p*q\p*(x\(p\x\q)*aL))" + let ?t = "tseq (-p) x (p\x\q) (-p\(x\(p\x\q)*aL))" + show "?thesis" + proof (induct m) + case 0 + show "?t2 0 \ ?t 0" + by (smt assms aL_test id_def tests_dual.upper_bound_left tests_dual.upper_bound_right tests_dual.inf_isotone power_zero_id pre_closed pre_expression_test sub_mult_closed test_pre tseq2_def tseq_def) + next + fix m + assume "?t2 m \ ?t m" + hence 1: "?t2 (Suc m) \ tfun2 (- p * q) p x (p \ x \ q) (?t m)" + by (smt assms power_succ_unfold_ext pre_closed pre_expression_test sub_mult_closed t_seq2_test t_seq_test test_pre tfun2_iso tseq2_def) + have "... 
\ ?t (Suc m)" + by (smt assms tests_dual.upper_bound_left tests_dual.upper_bound_right tests_dual.inf_isotone power_succ_unfold_ext pre_closed pre_expression_test sub_mult_closed t_seq_test test_pre tfun2_def tfun_def tseq_def) + thus "?t2 (Suc m) \ ?t (Suc m)" + using 1 by (smt (verit, del_insts) assms pre_closed pre_expression_test test_expression_test tests_dual.sba_dual.transitive tests_dual.sub_sup_closed t_seq2_test t_seq_test tfun2_test) + qed +qed + +lemma t_seq2_below_t_sum: + "p \ Test_expression \ q \ Pre_expression \ x \ While_program \ tseq2 (-p*q) p x (p\x\q) (-p*q\p*(x\(p\x\q)*aL)) m \ tsum (-p) x (p\x\q) (-p\(x\(p\x\q)*aL))" + by (smt (verit, del_insts) Sum_upper pre_expression_test t_seq2_below_t_seq t_seq2_test t_seq_test t_sum_test test_pre test_seq_def tsum_def leq_def tests_dual.sub_associative) + +lemma t_sum2_below_t_sum: + "p \ Test_expression \ q \ Pre_expression \ x \ While_program \ tsum2 (-p*q) p x (p\x\q) (-p*q\p*(x\(p\x\q)*aL)) \ tsum (-p) x (p\x\q) (-p\(x\(p\x\q)*aL))" + by (smt Sum_least pre_expression_test t_seq2_below_t_sum t_seq2_test t_sum_test test_pre test_seq_def tsum2_def) + +lemma t_seq2_below_w: + "p \ Test_expression \ q \ Pre_expression \ x \ While_program \ tseq2 (-p*q) p x (p\x\q) (-p*q\p*(x\(p\x\q)*aL)) m \ p\x\q" + apply (cases m) + apply (smt aL_test id_def tests_dual.upper_bound_left tests_dual.sub_sup_right_isotone tests_dual.inf_commutative tests_dual.sub_inf_right_isotone power_zero_id pre_closed pre_expression_test pre_iso sub_mult_closed test_pre tseq2_def while_pre) + by (smt tseq2_def power_succ_unfold_ext tests_dual.upper_bound_left tests_dual.sub_sup_right_isotone tests_dual.inf_commutative tests_dual.sub_inf_right_isotone pre_closed pre_expression_test pre_iso sub_mult_closed t_seq2_test test_pre tseq2_def while_pre tfun2_def) + +lemma t_sum2_below_w: + "p \ Test_expression \ q \ Pre_expression \ x \ While_program \ tsum2 (-p*q) p x (p\x\q) (-p*q\p*(x\(p\x\q)*aL)) \ p\x\q" + by (smt Sum_least pre_closed 
pre_expression_test t_seq2_below_w t_seq2_test_seq test_pre tsum2_def) + +lemma t_sum2_w: + assumes "aL = 1" + and "p \ Test_expression" + and "q \ Pre_expression" + and "x \ While_program" + shows "tsum2 (-p*q) p x (p\x\q) (-p*q\p*(x\(p\x\q)*aL)) = p\x\q" +proof - + let ?w = "p\x\q" + let ?s = "-p*q\p*(x\?w*aL)" + have "?w = tseq2 (-p*q) p x ?w ?s 0" + by (smt assms(1-3) tests_dual.sup_right_unit id_def tests_dual.inf_commutative power_zero_id pre_closed pre_expression_test sub_mult_closed test_expression_test tseq2_def while_pre) + hence "?w \ tsum2 (-p*q) p x ?w ?s" + by (smt assms(2,3) Sum_upper pre_expression_test t_seq2_test_seq test_pre tsum2_def) + thus ?thesis + by (smt assms(2-4) tests_dual.antisymmetric pre_closed pre_expression_test t_sum2_test t_sum2_below_w test_pre) +qed + +inductive derived_hoare_triple :: "'a \ 'a \ 'a \ bool" ("_ \ _ \ _" [54,54,54] 53) + where atom_trip: "p \ Pre_expression \ x \ Atomic_program \ x\p\x\p" + | seq_trip: "p\x\q \ q\y\r \ p\x*y\r" + | cond_trip: "p \ Test_expression \ q \ Pre_expression \ p*q\x\r \ -p*q\y\r \ q\x\p\y\r" + | while_trip: "p \ Test_expression \ q \ Pre_expression \ test_seq t \ q \ Sum t \ t 0*p*q\x\aL*q \ (\n>0 . 
t n*p*q\x\pSum t n*q) \ q\p\x\-p*q" + | cons_trip: "p \ Pre_expression \ s \ Pre_expression \ p \ q \ q\x\r \ r \ s \ p\x\s" + +lemma derived_type: + "p\x\q \ p \ Pre_expression \ q \ Pre_expression \ x \ While_program" + apply (induct rule: derived_hoare_triple.induct) + apply (simp add: Pre_expression.pre_pre While_program.atom_prog) + using While_program.seq_prog apply blast + using While_program.cond_prog apply blast + using Pre_expression.conj_pre Pre_expression.neg_pre Pre_expression.test_pre While_program.while_prog apply simp + by blast + +lemma cons_pre_trip: + "p \ Pre_expression \ q\y\r \ p*q\y\r" + by (metis cons_trip derived_type Pre_expression.conj_pre pre_expression_test tests_dual.sba_dual.reflexive tests_dual.upper_bound_right) + +lemma cons_post_trip: + "q \ Pre_expression \ r \ Pre_expression \ p\y\q*r \ p\y\r" + by (metis cons_trip derived_type pre_expression_test tests_dual.sba_dual.reflexive tests_dual.upper_bound_right) + +definition valid_hoare_triple :: "'a \ 'a \ 'a \ bool" ("_ \ _ \ _" [54,54,54] 53) + where "p\x\q \ (p \ Pre_expression \ q \ Pre_expression \ x \ While_program \ p \ x\q)" + +end + +class hoare_calculus_sound = hoare_calculus + + assumes while_soundness: "-p*-q \ x\-q \ aL*-q \ -p\x\-q" +begin + +lemma while_soundness_0: + "-p*-q \ x\-q \ -q*aL \ -p\x\--p*-q" + by (smt while_soundness aL_test sub_comm while_post) + +lemma while_soundness_1: + assumes "test_seq t" + and "-q \ Sum t" + and "t 0*-p*-q \ x\aL*-q" + and "\n>0 . t n*-p*-q \ x\pSum t n*-q" + shows "-q \ -p\x\--p*-q" +proof - + have "\n . 
t n*-p*-q \ x\-q" + proof + fix n + show "t n*-p*-q \ x\-q" + proof (cases n) + case 0 + thus ?thesis + by (smt (z3) assms(1) assms(3) aL_test leq_def pre_closed pre_lower_bound_right test_seq_def tests_dual.sub_associative tests_dual.sub_sup_closed) + next + case (Suc m) + hence 1: "t n*-p*-q \ x\pSum t n*-q" + using assms(4) by blast + have "x\pSum t n*-q \ x\-q" + by (metis assms(1) pSum_test pre_lower_bound_right) + thus ?thesis + using 1 by (smt (verit, del_insts) assms(1) pSum_test pre_closed sub_mult_closed test_seq_def leq_def tests_dual.sub_associative) + qed + qed + hence 2: "-p*-q \ x\-q" + by (smt assms(1,2) Sum_test leq_def mult_right_dist_Sum pre_closed sub_assoc sub_comm sub_mult_closed test_seq_def) + have "\n . t n*-q \ -p\x\--p*-q \ pSum t n*-q \ -p\x\--p*-q" + proof + fix n + show "t n*-q \ -p\x\--p*-q \ pSum t n*-q \ -p\x\--p*-q" + proof (induct n rule: nat_less_induct) + fix n + assume 3: "\m -p\x\--p*-q \ pSum t m*-q \ -p\x\--p*-q" + have 4: "pSum t n*-q \ -p\x\--p*-q" + proof (cases n) + case 0 + thus ?thesis + by (metis pSum.simps(1) pre_closed sub_mult_closed tests_dual.top_greatest tests_dual.sba_dual.less_eq_inf tests_dual.top_double_complement) + next + case (Suc m) + hence "pSum t n*-q = (pSum t m \ t m)*-q" + by simp + also have "... = pSum t m*-q \ t m*-q" + by (metis (full_types) assms(1) pSum_test test_seq_def tests_dual.sup_right_dist_inf) + also have "... \ -p\x\--p*-q" + proof - + have "pSum t m*-q = --(pSum t m*-q) \ t m*-q = --(t m*-q) \ -p\x\--p*-q = --(-p\x\--p*-q)" + apply (intro conjI) + apply (metis assms(1) pSum_test tests_dual.sub_sup_closed) + apply (metis assms(1) test_seq_def tests_dual.sub_sup_closed) + by (metis pre_closed tests_dual.sub_sup_closed) + thus ?thesis + using 3 by (smt (z3) lessI Suc tests_dual.greatest_lower_bound sub_mult_closed) + qed + finally show ?thesis + . 
+ qed + hence 5: "x\pSum t n*-q \ x\-p\x\--p*-q" + by (smt assms pSum_test pre_closed pre_iso sub_mult_closed) + have 6: "-p*(t n*-q) \ -p*(-p\x\--p*-q)" + proof (cases n) + case 0 + thus ?thesis + using 2 by (smt assms(1,3) aL_test leq_def tests_dual.sup_idempotent tests_dual.sub_sup_right_isotone pre_closed pre_lower_bound_left sub_assoc sub_comm sub_mult_closed test_seq_def transitive while_pre_then while_soundness_0) + next + case (Suc m) + hence "-p*(t n*-q) \ x\pSum t n*-q" + by (smt assms(1,4) test_seq_def tests_dual.sub_associative tests_dual.sub_commutative zero_less_Suc) + hence "-p*(t n*-q) \ x\-p\x\--p*-q" + using 5 by (smt assms(1) tests_dual.least_upper_bound pSum_test pre_closed sub_mult_closed test_seq_def leq_def) + hence "-p*(t n*-q) \ -p*(x\-p\x\--p*-q)" + by (smt assms(1) tests_dual.upper_bound_left pre_closed sub_mult_closed test_seq_def leq_def tests_dual.sub_associative) + thus ?thesis + using while_post while_pre_then by auto + qed + have "--p*(t n*-q) \ --p*(-p\x\--p*-q)" + by (smt assms(1) leq_def tests_dual.upper_bound_right sub_assoc sub_comm sub_mult_closed test_seq_def while_pre_else) + thus "t n*-q \ -p\x\--p*-q \ pSum t n*-q \ -p\x\--p*-q" + using 4 6 by (smt assms(1) tests_dual.sup_less_eq_cases_2 pre_closed sub_mult_closed test_seq_def) + qed + qed + thus ?thesis + by (smt assms(1,2) Sum_test leq_def mult_right_dist_Sum pre_closed sub_comm sub_mult_closed) +qed + +lemma while_soundness_2: + assumes "test_seq t" + and "-r \ Sum t" + and "\n . t n*-p \ x\pSum t n" + shows "-r \ -p\x\1" +proof - + have 1: "\n>0 . 
t n*-p*Sum t \ x\pSum t n*Sum t" + by (smt (z3) assms(1,3) Sum_test Sum_upper leq_def pSum_below_Sum pSum_test test_seq_def tests_dual.sub_associative tests_dual.sub_commutative) + have 2: "t 0*-p*Sum t \ x\bot" + by (smt assms(1,3) Sum_test Sum_upper leq_def sub_assoc sub_comm test_seq_def pSum.simps(1)) + have "x\bot \ x\aL*Sum t" + by (smt assms(1) Sum_test aL_test pre_iso sub_mult_closed tests_dual.top_double_complement tests_dual.top_greatest) + hence "t 0*-p*Sum t \ x\aL*Sum t" + using 2 by (smt (z3) assms(1) Sum_test aL_test leq_def pSum.simps(1) pSum_test pre_closed test_seq_def tests_dual.sub_associative tests_dual.sub_sup_closed) + hence 3: "Sum t \ -p\x\--p*Sum t" + using 1 by (smt (verit, del_insts) assms(1) Sum_test tests_dual.sba_dual.one_def tests_dual.sup_right_unit tests_dual.upper_bound_left while_soundness_1) + have "-p\x\--p*Sum t \ -p\x\1" + by (metis assms(1) Sum_test pre_below_pre_one tests_dual.sub_sup_closed) + hence "Sum t \ -p\x\1" + using 3 by (smt (z3) assms(1) Sum_test pre_closed tests_dual.sba_dual.one_def while_post tests_dual.transitive) + thus ?thesis + by (smt (z3) assms(1,2) Sum_test pre_closed tests_dual.sba_dual.one_def tests_dual.transitive) +qed + +theorem soundness: + "p\x\q \ p\x\q" + apply (induct rule: derived_hoare_triple.induct) + apply (metis Pre_expression.pre_pre While_program.atom_prog pre_expression_test tests_dual.sba_dual.reflexive valid_hoare_triple_def) + apply (metis valid_hoare_triple_def pre_expression_test pre_compose While_program.seq_prog) + apply (metis valid_hoare_triple_def ite_import_mult pre_expression_test cond_prog test_pre) + apply (smt (verit, del_insts) Pre_expression.conj_pre Pre_expression.neg_pre Pre_expression.test_pre While_program.while_prog pre_expression_test valid_hoare_triple_def while_soundness_1) + by (metis pre_expression_test pre_iso pre_pre tests_dual.sba_dual.transitive valid_hoare_triple_def) + +end + +class hoare_calculus_pre_complete = hoare_calculus + + assumes aL_pre_import: 
"(x\-q)*aL \ x\-q*aL" + assumes pre_right_dist_Sum: "x \ While_program \ ascending_chain t \ test_seq t \ x\Sum t = Sum (\n . x\t n)" +begin + +lemma aL_pre_import_equal: + "(x\-q)*aL = (x\-q*aL)*aL" +proof - + have 1: "(x\-q)*aL \ (x\-q*aL)*aL" + by (smt (z3) aL_pre_import aL_test pre_closed tests_dual.sub_sup_closed tests_dual.least_upper_bound tests_dual.upper_bound_right) + have "(x\-q*aL)*aL \ (x\-q)*aL" + by (smt (verit, ccfv_threshold) aL_test pre_closed pre_lower_bound_left tests_dual.sba_dual.inf_isotone tests_dual.sba_dual.reflexive tests_dual.sub_sup_closed) + thus ?thesis + using 1 by (smt (z3) tests_dual.antisymmetric aL_test pre_closed tests_dual.sub_sup_closed) +qed + +lemma aL_pre_below_t_seq2: + assumes "p \ Test_expression" + and "q \ Pre_expression" + shows "(p\x\q)*aL \ tseq2 (-p*q) p x (p\x\q) (-p*q\p*(x\(p\x\q)*aL)) 0" +proof (unfold tseq2_def power_zero_id id_def while_pre) + have "(p\x\q)*aL = (p*(x\p\x\q) \ -p*q)*aL" + by (metis assms while_pre test_pre pre_expression_test) + also have "... = p*(x\p\x\q)*aL \ -p*q*aL" + by (smt (z3) assms aL_test tests_dual.sup_right_dist_inf pre_closed pre_expression_test sub_mult_closed test_pre) + also have "... = p*((x\p\x\q)*aL) \ -p*q*aL" + by (smt assms aL_test pre_closed pre_expression_test test_pre sub_assoc) + also have "... \ p*(x\(p\x\q)*aL) \ -p*q" + proof - + have 1: "(x\p\x\q)*aL \ x\(p\x\q)*aL" + by (metis assms(2) pre_closed pre_expression_test aL_pre_import) + have "-p*q*aL \ -p*q" + by (metis assms(2) aL_test pre_expression_test tests_dual.sub_sup_closed tests_dual.upper_bound_left) + thus ?thesis + using 1 by (smt assms aL_test pre_closed pre_expression_test test_pre tests_dual.sub_sup_closed tests_dual.sub_sup_right_isotone tests_dual.inf_isotone) + qed + also have "... = -p*q \ p*(x\(p\x\q)*aL)" + by (smt assms aL_test tests_dual.inf_commutative pre_closed pre_expression_test test_pre tests_dual.sub_sup_closed) + finally show "(p\x\q)*aL \ -p*q \ p*(x\(p\x\q)*aL)" + . 
+qed + +lemma t_seq2_ascending: + assumes "p \ Test_expression" + and "q \ Pre_expression" + and "x \ While_program" + shows "tseq2 (-p*q) p x (p\x\q) (-p*q\p*(x\(p\x\q)*aL)) m \ tseq2 (-p*q) p x (p\x\q) (-p*q\p*(x\(p\x\q)*aL)) (Suc m)" +proof (induct m) + let ?w = "p\x\q" + let ?r = "-p*q\p*(x\?w*aL)" + case 0 + have 1: "?w*aL = --(?w*aL)" + by (simp add: assms Pre_expression.conj_pre Pre_expression.pre_pre While_program.while_prog aL_pre_expression pre_expression_test) + have 2: "?r = --?r" + by (simp add: assms Pre_expression.conj_pre Pre_expression.neg_pre Pre_expression.pre_pre Pre_expression.test_pre While_program.while_prog aL_pre_expression disj_pre pre_expression_test) + have "?w*aL \ ?r" + by (metis aL_pre_below_t_seq2 assms(1,2) id_def tseq2_def power_zero_id) + hence "?w*aL \ ?w*?r" + using 1 2 by (smt (verit, ccfv_threshold) assms Pre_expression.pre_pre While_program.while_prog aL_test pre_expression_test tests_dual.sub_associative tests_dual.sub_sup_right_isotone tests_dual.sba_dual.less_eq_inf tests_dual.sba_dual.reflexive) + hence "x\?w*aL \ x\(?w*?r)" + by (smt (verit, ccfv_threshold) assms Pre_expression.conj_pre Pre_expression.neg_pre Pre_expression.pre_pre While_program.while_prog aL_pre_expression disj_pre pre_expression_test pre_iso test_pre) + hence "p*(x\?w*aL) \ p*(x\(?w*?r))" + by (smt (z3) assms Pre_expression.conj_pre Pre_expression.neg_pre Pre_expression.pre_pre While_program.while_prog aL_pre_expression disj_pre pre_expression_test test_pre tests_dual.sub_sup_right_isotone) + hence "?r \ -p*q\p*(x\(?w*?r))" + by (smt (verit, del_insts) assms Pre_expression.conj_pre Pre_expression.neg_pre Pre_expression.pre_pre While_program.while_prog aL_pre_expression disj_pre pre_expression_test test_pre tests_dual.sba_dual.reflexive tests_dual.inf_isotone) + thus ?case + by (unfold tseq2_def power_zero_id power_succ_unfold_ext id_def tfun2_def) +next + let ?w = "p\x\q" + let ?r = "-p*q\p*(x\?w*aL)" + let ?t = "tseq2 (-p*q) p x ?w ?r" + case (Suc m) 
+ hence "?w*?t m \ ?w*?t (Suc m)" + by (smt (z3) assms(1,2) pre_closed pre_expression_test t_seq2_test test_expression_test tests_dual.sub_sup_right_isotone) + hence "x\?w*?t m \ x\?w*?t (Suc m)" + by (smt (z3) assms Pre_expression.conj_pre Pre_expression.neg_pre Pre_expression.pre_pre While_program.while_prog aL_pre_expression disj_pre pre_expression_test pre_iso test_pre tseq2_pre_expression) + hence "p*(x\?w*?t m) \ p*(x\?w*?t (Suc m))" + by (smt (z3) assms Pre_expression.conj_pre Pre_expression.neg_pre Pre_expression.pre_pre While_program.while_prog aL_pre_expression disj_pre pre_expression_test test_pre tests_dual.sub_sup_right_isotone tseq2_pre_expression) + hence "-p*q\p*(x\?w*?t m) \ -p*q\p*(x\?w*?t (Suc m))" + by (smt (z3) assms Pre_expression.conj_pre Pre_expression.neg_pre Pre_expression.pre_pre While_program.while_prog aL_pre_expression disj_pre pre_expression_test test_pre tests_dual.sba_dual.reflexive tests_dual.inf_isotone tseq2_pre_expression) + thus ?case + by (smt tseq2_def power_succ_unfold_ext tfun2_def) +qed + +lemma t_seq2_ascending_chain: + "p \ Test_expression \ q \ Pre_expression \ x \ While_program \ ascending_chain (tseq2 (-p*q) p x (p\x\q) (-p*q\p*(x\(p\x\q)*aL)))" + by (simp add: ord.ascending_chain_def t_seq2_ascending) + +end + +class hoare_calculus_complete = hoare_calculus_pre_complete + + assumes while_completeness: "-p*(x\-q) \ -q \ -p\x\-q \ -q\aL" +begin + +lemma while_completeness_var: + assumes "-p*(x\-q)\-r \ -q" + shows "-p\x\-r \ -q\aL" +proof - + have 1: "-p\x\-q \ -q\aL" + by (smt assms pre_closed tests_dual.sub_sup_closed tests_dual.greatest_lower_bound while_completeness) + have "-p\x\-r \ -p\x\-q" + by (smt assms pre_closed tests_dual.sub_sup_closed tests_dual.greatest_lower_bound pre_iso) + thus ?thesis + using 1 by (smt (z3) aL_test pre_closed tests_dual.sba_dual.sub_sup_closed tests_dual.sba_dual.transitive) +qed + +lemma while_completeness_sum: + assumes "p \ Test_expression" + and "q \ Pre_expression" + and "x \ 
While_program" + shows "p\x\q \ tsum (-p) x (p\x\q) (-p\(x\(p\x\q)*aL))" +proof - + let ?w = "p\x\q" + let ?r = "-p*q\p*(x\?w*aL)" + let ?t = "tseq2 (-p*q) p x ?w ?r" + let ?ts = "tsum2 (-p*q) p x ?w ?r" + have 1: "?w = --?w" + by (metis assms(2) pre_expression_test pre_closed) + have 2: "?r = --?r" + by (simp add: assms Pre_expression.conj_pre Pre_expression.neg_pre Pre_expression.pre_pre Pre_expression.test_pre While_program.while_prog aL_pre_expression disj_pre pre_expression_test) + have 3: "?ts = --?ts" + by (meson assms(1) assms(2) pre_expression_test t_sum2_test test_expression_test) + have 4: "test_seq ?t" + by (simp add: assms(1) assms(2) pre_expression_test t_seq2_test_seq test_expression_test) + have "-p*q \ ?r" + by (smt (z3) assms(1,2) aL_test pre_closed pre_expression_test sub_mult_closed test_pre tests_dual.lower_bound_left) + hence 5: "-p*q \ ?ts" + using 1 2 3 by (smt assms Sum_upper id_def tests_dual.sba_dual.transitive power_zero_id pre_expression_test sub_mult_closed test_pre tseq2_def tseq2_test_seq tsum2_def) + have "\n . 
p*(x\?t n) \ ?ts" + proof (rule allI, unfold tsum2_def) + fix n + have 6: "p*(x\?t n) \ ?t (Suc n)" + using 4 by (smt assms leq_def power_succ_unfold_ext pre_closed pre_expression_test tests_dual.sub_commutative sub_mult_closed t_seq2_below_w test_pre test_seq_def tfun2_def tseq2_def tests_dual.lower_bound_right) + have "?t (Suc n) \ Sum ?t" + using 4 Sum_upper by auto + thus "p*(x\?t n) \ Sum ?t" + using 3 4 6 by (smt assms(1) pre_closed pre_expression_test sub_mult_closed test_pre test_seq_def tests_dual.transitive tsum2_def) + qed + hence "p*(x\?ts) \ ?ts" + using 3 4 by (smt assms mult_left_dist_Sum pre_closed pre_right_dist_Sum t_seq2_ascending_chain test_expression_test test_seq_def tsum2_def) + hence "p*(x\?ts)\-p*q \ ?ts" + using 3 5 by (smt assms(1,2) tests_dual.greatest_lower_bound pre_closed pre_expression_test sub_mult_closed test_pre) + hence "?w \ ?ts\aL" + using 1 3 by (smt assms(1,2) pre_expression_test while_post sub_mult_closed t_sum2_below_t_sum t_sum_test test_pre transitive while_completeness_var) + hence "?w = ?w*(?ts\aL)" + using 1 3 by (smt aL_test tests_dual.sba_dual.less_eq_inf tests_dual.sba_dual.sub_sup_closed) + also have "... = ?w*?ts\?w*aL" + using 1 3 by (smt aL_test tests_dual.sup_left_dist_inf) + also have "... \ ?ts\?t 0" + using 1 3 4 by (smt (z3) assms(1,2) aL_pre_below_t_seq2 tests_dual.upper_bound_right aL_test test_seq_def tests_dual.sub_sup_closed tests_dual.inf_isotone) + also have "... = ?ts" + using 3 4 by (smt Sum_upper tsum2_def test_seq_def tests_dual.less_eq_inf) + finally have "?w \ ?ts" + . + thus ?thesis + using 1 3 by (metis assms t_sum2_below_t_sum t_sum2_below_w tests_dual.antisymmetric) +qed + +lemma while_complete: + assumes "p \ Test_expression" + and "q \ Pre_expression" + and "x \ While_program" + and "\r\Pre_expression . 
x\r\x\r" + shows "p\x\q\p\x\q" +proof - + let ?w = "p\x\q" + let ?t = "tseq (-p) x ?w (-p\(x\?w*aL))" + have 1: "?w \ Pre_expression" + by (simp add: assms(1-3) Pre_expression.pre_pre While_program.while_prog) + have 2: "test_seq ?t" + by (simp add: assms(2) pre_expression_test t_seq_test_seq) + hence 3: "?w \ Sum ?t" + using assms(1-3) tsum_def while_completeness_sum by auto + have 4: "p = --p" + by (simp add: assms(1) test_expression_test) + have "x\?w*aL = --(x\?w*aL)" + using 1 by (simp add: assms(3) Pre_expression.conj_pre Pre_expression.pre_pre aL_pre_expression pre_expression_test) + hence 5: "(-p\(x\?w*aL))*p = (x\?w*aL)*p" + using 4 by (metis tests_dual.sba_dual.inf_complement_intro) + have "x\aL*?w\x\aL*?w" + using 1 by (simp add: assms(4) Pre_expression.conj_pre aL_pre_expression) + hence "x\?w*aL\x\aL*?w" + using 1 by (metis aL_test pre_expression_test sub_comm) + hence "(x\?w*aL)*p*?w\x\aL*?w" + using 1 by (smt (z3) assms(1) Pre_expression.conj_pre Pre_expression.test_pre derived_hoare_triple.cons_trip derived_type pre_expression_test sub_assoc tests_dual.sba_dual.reflexive tests_dual.upper_bound_left) + hence "(-p\(x\?w*aL))*p*?w\x\aL*?w" + using 5 by simp + hence 6: "?t 0*p*?w\x\aL*?w" + by (unfold tseq_def power_zero_id id_def) + have "\n>0 . 
?t n*p*?w\x\pSum ?t n*?w" + proof (rule allI, rule impI) + fix n + assume "0<(n::nat)" + from this obtain m where 7: "n = Suc m" + by (auto dest: less_imp_Suc_add) + hence "?t m*?w \ pSum ?t n*?w" + using 1 2 by (smt pSum.simps(2) pSum_test pre_expression_test test_seq_def tests_dual.lower_bound_right tests_dual.sba_dual.inf_isotone tests_dual.sba_dual.reflexive) + thus "?t n*p*?w\x\pSum ?t n*?w" + using 1 7 by (smt assms conj_pre cons_trip tests_dual.upper_bound_left tests_dual.sba_dual.inf_complement_intro pSum_pre_expression power_succ_unfold_ext pre_closed pre_expression_test sub_assoc sub_comm t_seq_pre_expression test_pre tfun_def tseq_def) + qed + hence "?w\p\x\-p*?w" + using 1 2 3 6 assms while_trip by auto + hence "?w\p\x\-p*q" + using 4 by (metis assms(2) while_pre_else pre_expression_test while_pre_else) + thus ?thesis + using assms(1,2) Pre_expression.neg_pre Pre_expression.test_pre cons_post_trip by blast +qed + +lemma pre_completeness: + "x \ While_program \ q \ Pre_expression \ x\q\x\q" + apply (induct arbitrary: q rule: While_program.induct) + apply (simp add: derived_hoare_triple.atom_trip) + apply (metis pre_pre pre_seq seq_trip pre_expression_test) + apply (smt cond_prog cond_trip cons_pre_trip ite_pre_else ite_pre_then neg_pre pre_pre pre_expression_test test_pre) + by (simp add: while_complete) + +theorem completeness: + "p\x\q \ p\x\q" + by (metis valid_hoare_triple_def pre_completeness tests_dual.reflexive pre_expression_test cons_trip) + +end + +class hoare_calculus_sound_complete = hoare_calculus_sound + hoare_calculus_complete +begin + +text \Theorem 41\ + +theorem soundness_completeness: + "p\x\q \ p\x\q" + using completeness soundness by blast + +end + +class hoare_rules = whiledo + complete_tests + hoare_triple + + assumes rule_pre: "x\-q\x\-q" + assumes rule_seq: "-p\x\-q \ -q\y\-r \ -p\x*y\-r" + assumes rule_cond: "-p*-q\x\-r \ --p*-q\y\-r \ -q\x\-p\y\-r" + assumes rule_while: "test_seq t \ -q \ Sum t \ t 0*-p*-q\x\aL*-q \ (\n>0 . 
t n*-p*-q\x\pSum t n*-q) \ -q\-p\x\--p*-q" + assumes rule_cons: "-p \ -q \ -q\x\-r \ -r \ -s \ -p\x\-s" + assumes rule_disj: "-p\x\-r \ -q\x\-s \ -p\-q\x\-r\-s" +begin + +lemma rule_cons_pre: + "-p \ -q \ -q\x\-r \ -p\x\-r" + using rule_cons tests_dual.sba_dual.reflexive by blast + +lemma rule_cons_pre_mult: + "-q\x\-r \ -p*-q\x\-r" + by (metis tests_dual.sub_sup_closed rule_cons_pre tests_dual.upper_bound_right) + +lemma rule_cons_pre_plus: + "-p\-q\x\-r \ -p\x\-r" + by (metis tests_dual.sba_dual.sub_sup_closed tests_dual.sba_dual.upper_bound_left rule_cons_pre) + +lemma rule_cons_post: + "-q\x\-r \ -r \ -s \ -q\x\-s" + using rule_cons tests_dual.sba_dual.reflexive by blast + +lemma rule_cons_post_mult: + "-q\x\-r*-s \ -q\x\-s" + by (metis rule_cons_post tests_dual.upper_bound_left sub_comm sub_mult_closed) + +lemma rule_cons_post_plus: + "-q\x\-r \ -q\x\-r\-s" + by (metis tests_dual.sba_dual.sub_sup_closed tests_dual.sba_dual.upper_bound_left rule_cons_post) + +lemma rule_disj_pre: + "-p\x\-r \ -q\x\-r \ -p\-q\x\-r" + by (metis rule_disj tests_dual.sba_dual.sup_idempotent) + +end + +class hoare_calculus_valid = hoare_calculus_sound_complete + hoare_triple + + assumes hoare_triple_valid: "-p\x\-q \ -p \ x\-q" +begin + +lemma valid_hoare_triple_same: + "p \ Pre_expression \ q \ Pre_expression \ x \ While_program \ p\x\q = p\x\q" + by (metis valid_hoare_triple_def hoare_triple_valid pre_expression_test) + +lemma derived_hoare_triple_same: + "p \ Pre_expression \ q \ Pre_expression \ x \ While_program \ p\x\q = p\x\q" + by (simp add: soundness_completeness valid_hoare_triple_same) + +lemma valid_rule_disj: + assumes "-p\x\-r" + and "-q\x\-s" + shows "-p\-q\x\-r\-s" +proof - + have "x\-r \ x\-r\-s \ x\-s \ x\-r\-s" + by (metis pre_iso tests_dual.sba_dual.sub_sup_closed tests_dual.sba_dual.upper_bound_left tests_dual.sba_dual.upper_bound_right) + thus ?thesis + by (smt assms hoare_triple_valid tests_dual.greatest_lower_bound tests_dual.sba_dual.sub_sup_closed 
pre_closed tests_dual.transitive) +qed + +subclass hoare_rules + apply unfold_locales + apply (metis hoare_triple_valid pre_closed tests_dual.sba_dual.reflexive) + apply (meson hoare_triple_valid pre_compose) + apply (smt hoare_triple_valid ite_import_mult sub_mult_closed) + apply (smt (verit, del_insts) hoare_triple_valid aL_test pSum_test sba_dual.sub_sup_closed sub_mult_closed test_seq_def while_soundness_1) + apply (smt hoare_triple_valid pre_iso tests_dual.transitive pre_closed) + by (simp add: valid_rule_disj) + +lemma nat_test_rule_while: + "nat_test t s \ -q \ s \ (\n . t n*-p*-q\x\pSum t n*-q) \ -q\-p\x\--p*-q" + by (smt (verit, ccfv_threshold) hoare_triple_valid nat_test_def nat_test_pre pSum_test_nat sub_mult_closed) + +lemma test_seq_rule_while: + "test_seq t \ -q \ Sum t \ t 0*-p*-q\x\aL*-q \ (\n>0 . t n*-p*-q\x\pSum t n*-q) \ -q\-p\x\--p*-q" + by (smt (verit, del_insts) hoare_triple_valid aL_test pSum_test sub_mult_closed test_seq_def while_soundness_1) + +lemma rule_bot: + "bot\x\-p" + by (metis hoare_triple_valid pre_closed tests_dual.top_double_complement tests_dual.top_greatest) + +lemma rule_skip: + "-p\1\-p" + by (simp add: hoare_triple_valid pre_one_increasing) + +lemma rule_example_4: + assumes "test_seq t" + and "Sum t = 1" + and "t 0*-p1*-p3 = bot" + and "-p1\z1\-p1*-p2" + and "\n>0 . t n*-p1*-p2*-p3\z2\pSum t n*-p1*-p2" + shows "-p1\z1*(-p3\z2)\-p2*--p3" +proof - + have "t 0*-p3*(-p1*-p2) = bot" + by (smt (verit, ccfv_threshold) assms(1,3) sub_assoc sub_comm sub_mult_closed test_seq_def tests_dual.sup_right_zero) + hence 1: "t 0*-p3*(-p1*-p2)\z2\aL*(-p1*-p2)" + by (metis aL_test sub_mult_closed rule_bot) + have "\n>0 . 
t n*-p3*(-p1*-p2)\z2\pSum t n*(-p1*-p2)" + by (smt assms(1,5) lower_bound_left pSum_test rule_cons_pre sub_assoc sub_comm sub_mult_closed test_seq_def) + hence "-p1*-p2\-p3\z2\--p3*(-p1*-p2)" + using 1 by (smt (verit, del_insts) assms(1,2) tests_dual.sub_bot_least rule_while sub_mult_closed) + thus ?thesis + by (smt assms(4) tests_dual.upper_bound_left rule_cons_post rule_seq sub_assoc sub_comm sub_mult_closed) +qed + +end + +class hoare_calculus_pc_2 = hoare_calculus_sound + hoare_calculus_pre_complete + + assumes aL_one: "aL = 1" +begin + +subclass hoare_calculus_sound_complete + apply unfold_locales + by (simp add: aL_one pre_below_one) + +lemma while_soundness_pc: + assumes "-p*-q \ x\-q" + shows "-q \ -p\x\--p*-q" +proof - + let ?t = "\x . 1" + have 1: "test_seq ?t" + by (simp add: test_seq_def) + hence 2: "-q \ Sum ?t" + by (metis Sum_test Sum_upper tests_dual.sba_dual.one_def tests_dual.antisymmetric tests_dual.sub_bot_least) + have 3: "?t 0*-p*-q \ x\aL*-q" + using 1 by (simp add: assms aL_one) + have "\n>0 . 
?t n*-p*-q \ x\pSum ?t n*-q" + using 1 by (metis assms pSum_test pSum_upper tests_dual.sba_dual.one_def tests_dual.antisymmetric tests_dual.sub_bot_least tests_dual.sup_left_unit) + thus ?thesis + using 1 2 3 aL_one while_soundness_0 by auto +qed + +end + +class hoare_calculus_pc = hoare_calculus_sound + hoare_calculus_pre_complete + + assumes pre_one_one: "x\1 = 1" +begin + +subclass hoare_calculus_pc_2 + apply unfold_locales + by (simp add: aL_def pre_one_one) + +end + +class hoare_calculus_pc_valid = hoare_calculus_pc + hoare_calculus_valid +begin + +lemma rule_while_pc: + "-p*-q\x\-q \ -q\-p\x\--p*-q" + by (metis hoare_triple_valid sub_mult_closed while_soundness_pc) + +lemma rule_alternation: + "-p\x\-q \ -q\y\-p \ -p\-r\x*y\--r*-p" + by (meson rule_cons_pre_mult rule_seq rule_while_pc) + +lemma rule_alternation_context: + "-p\v\-p \ -p\w\-q \ -q\x\-q \ -q\y\-p \ -p\z\-p \ -p\-r\v*w*x*y*z\--r*-p" + by (meson rule_cons_pre_mult rule_seq rule_while_pc) + +lemma rule_example_3: + assumes "-p*-q\x\--p*-q" + and "--p*-r\x\-p*-r" + and "-p*-r\y\-p*-q" + and "--p*-q\z\--p*-r" + shows "-p*-q\--p*-r\-s\x*(y\-p\z)\--s*(-p*-q\--p*-r)" +proof - + have t1: "-p*-q\--p*-r\x\--p*-q\-p*-r" + by (smt assms(1,2) rule_disj sub_mult_closed) + have "-p*-r\y\-p*-q\--p*-r" + by (smt assms(3) rule_cons_post_plus sub_mult_closed) + hence t2: "-p*(--p*-q\-p*-r)\y\-p*-q\--p*-r" + by (smt (z3) tests_dual.sba_dual.less_eq_inf tests_dual.sba_dual.reflexive tests_dual.sba_dual.sub_sup_closed tests_dual.sub_associative tests_dual.sub_sup_closed tests_dual.upper_bound_left tests_dual.wnf_lemma_3) + have "--p*-q\z\-p*-q\--p*-r" + by (smt assms(4) tests_dual.inf_commutative rule_cons_post_plus sub_mult_closed) + hence "--p*(--p*-q\-p*-r)\z\-p*-q\--p*-r" + by (smt (z3) tests_dual.sba_dual.one_def tests_dual.sba_dual.sup_absorb tests_dual.sba_dual.sup_complement_intro tests_dual.sba_dual.sup_right_unit tests_dual.sub_sup_closed tests_dual.sup_complement_intro tests_dual.sup_left_dist_inf 
tests_dual.sup_right_unit tests_dual.top_double_complement) + hence "--p*-q\-p*-r\y\-p\z\-p*-q\--p*-r" + using t2 by (smt tests_dual.inf_closed rule_cond sub_mult_closed) + hence "-s*(-p*-q\--p*-r)\x*(y\-p\z)\-p*-q\--p*-r" + using t1 by (smt tests_dual.inf_closed rule_cons_pre_mult rule_seq sub_mult_closed) + thus ?thesis + by (smt tests_dual.inf_closed rule_while_pc sub_mult_closed) +qed + +end + +class hoare_calculus_tc = hoare_calculus + precondition_test_test + precondition_distr_mult + + assumes while_bnd: "p \ Test_expression \ q \ Pre_expression \ x \ While_program \ p\x\q \ Sum (\n . (p*x)^n\bot)" +begin + +lemma + assumes "p \ Test_expression" + and "q \ Pre_expression" + and "x \ While_program" + shows "p\x\q \ tsum (-p) x (p\x\q) (-p\(x\(p\x\q)*aL))" +proof - + let ?w = "p\x\q" + let ?s = "-p\(x\?w*aL)" + let ?t = "tseq (-p) x ?w ?s" + let ?b = "\n . (p*x)^n\bot" + have 2: "test_seq ?t" + by (simp add: assms(2) pre_expression_test t_seq_test_seq) + have 3: "test_seq ?b" + using pre_closed test_seq_def tests_dual.sba_dual.complement_top by blast + have 4: "?w = --?w" + by (metis assms(2) pre_expression_test pre_closed) + have "?w \ Sum ?b" + using assms while_bnd by blast + hence 5: "?w = Sum ?b*?w" + using 3 4 by (smt Sum_test leq_def sub_comm) + have "\n . 
?b n*?w \ ?t n" + proof + fix n + show "?b n*?w \ ?t n" + proof (induct n) + show "?b 0*?w \ ?t 0" + using 2 4 by (metis power.power_0 pre_one test_seq_def tests_dual.sup_left_zero tests_dual.top_double_complement tests_dual.top_greatest) + next + fix n + assume 6: "?b n*?w \ ?t n" + have "-p \ ?t (Suc n)" + apply (unfold tseq_def power_succ_unfold_ext) + by (smt assms(2) pre_expression_test t_seq_test pre_closed sub_mult_closed tfun_def tseq_def tests_dual.lower_bound_left) + hence 7: "-p*?b (Suc n)*?w \ ?t (Suc n)" + using 2 3 4 by (smt tests_dual.upper_bound_left sub_mult_closed test_seq_def tests_dual.transitive) + have 8: "p*?b (Suc n)*?w \ x\?w*(?b n*?w)" + by (smt assms(1,2) tests_dual.upper_bound_right tests_dual.sup_idempotent power_Suc pre_closed pre_distr_mult pre_expression_test pre_import_composition sub_assoc sub_comm sub_mult_closed test_expression_test while_pre_then tests_dual.top_double_complement) + have 9: "... \ x\?w*?t n" + using 2 3 4 6 by (smt tests_dual.sub_sup_right_isotone pre_iso sub_mult_closed test_seq_def) + have "... \ ?t (Suc n)" + using 2 4 by (smt power_succ_unfold_ext pre_closed sub_mult_closed test_seq_def tfun_def tseq_def tests_dual.lower_bound_right) + hence "p*?b (Suc n)*?w \ ?t (Suc n)" + using 2 3 4 8 9 by (smt assms(1) pre_closed sub_mult_closed test_expression_test test_seq_def tests_dual.transitive) + thus "?b (Suc n)*?w \ ?t (Suc n)" + using 2 3 4 7 by (smt assms(1) tests_dual.sup_less_eq_cases sub_assoc sub_mult_closed test_expression_test test_seq_def) + qed + qed + hence "Sum ?b*?w \ tsum (-p) x ?w ?s" + using 3 4 by (smt assms(2) Sum_upper mult_right_dist_Sum pre_expression_test sub_mult_closed t_seq_test t_sum_test test_seq_def tests_dual.transitive tsum_def) + thus ?thesis + using 5 by auto +qed + +end + +class complete_pre = complete_tests + precondition + power +begin + +definition bnd :: "'a \ 'a" + where "bnd x \ Sup { x^n\bot | n::nat . True }" + +lemma bnd_test_set: + "test_set { x^n\bot | n::nat . 
True }" + by (smt (verit, del_insts) CollectD pre_closed test_set_def tests_dual.top_double_complement) + +lemma bnd_test: + "bnd x = --bnd x" + using bnd_def bnd_test_set sup_test by auto + +lemma bnd_upper: + "x^m\bot \ bnd x" +proof - + have "x^m\bot \ { x^m\bot | m::nat . True }" + by auto + thus ?thesis + using bnd_def bnd_test_set sup_upper by auto +qed + +lemma bnd_least: + assumes "\n . x^n\bot \ -p" + shows "bnd x \ -p" +proof - + have "\y\{ x^n\bot | n::nat . True } . y \ -p" + using assms by blast + thus ?thesis + using bnd_def bnd_test_set sup_least by auto +qed + +lemma mult_right_dist_bnd: + assumes "\n . (x^n\bot)*-p \ -q" + shows "bnd x*-p \ -q" +proof - + have "Sup { y*-p | y . y \ { x^n\bot | n::nat . True } } \ -q" + by (smt assms mem_Collect_eq tests_dual.complement_bot pre_closed sub_mult_closed sup_least test_set_def) + thus ?thesis + using bnd_test_set bnd_def mult_right_dist_sup by simp +qed + +lemma tests_complete: + "nat_test (\n . (-p*x)^n\bot) (bnd(-p*x))" + using bnd_test bnd_upper mult_right_dist_bnd nat_test_def tests_dual.complement_bot pre_closed by blast + +end + +end + diff --git a/thys/Correctness_Algebras/Hoare_Modal.thy b/thys/Correctness_Algebras/Hoare_Modal.thy new file mode 100644 --- /dev/null +++ b/thys/Correctness_Algebras/Hoare_Modal.thy @@ -0,0 +1,402 @@ +(* Title: Hoare Calculus and Modal Operators + Author: Walter Guttmann + Maintainer: Walter Guttmann +*) + +section \Hoare Calculus and Modal Operators\ + +theory Hoare_Modal + +imports Stone_Kleene_Relation_Algebras.Kleene_Algebras Complete_Domain Hoare Relative_Modal + +begin + +class box_precondition = relative_box_semiring + pre + + assumes pre_def: "x\p = |x]p" +begin + +text \Theorem 47\ + +subclass precondition + apply unfold_locales + apply (simp add: box_x_a pre_def) + apply (simp add: box_left_mult pre_def) + using box_def box_right_submult_a_a pre_def tests_dual.sba_dual.greatest_lower_bound apply fastforce + by (simp add: box_1_a pre_def) + +subclass 
precondition_test_test + apply unfold_locales + by (simp add: a_box_a_a pre_def) + +subclass precondition_promote + apply unfold_locales + using a_mult_d box_def pre_def pre_test_test by auto + +subclass precondition_test_box + apply unfold_locales + by (simp add: box_a_a d_def pre_def) + +lemma pre_Z: + "-p \ x\-q \ -p * x * --q \ Z" + by (simp add: box_demodalisation_2 pre_def) + +lemma pre_left_dist_add: + "x\y\-q = (x\-q) * (y\-q)" + by (simp add: box_left_dist_sup pre_def) + +lemma pre_left_antitone: + "x \ y \ y\-q \ x\-q" + by (simp add: box_antitone_isotone pre_def) + +lemma pre_promote_neg: + "(x\-q) * x * --q \ Z" + by (simp add: box_below_Z pre_def) + +lemma pre_pc_Z: + "x\1 = 1 \ x * bot \ Z" + by (simp add: a_strict box_x_1 pre_def) + +(* +lemma pre_sub_promote: "(x\-q) * x \ (x\-q) * x * -q \ Z" nitpick [expect=genuine,card=6] oops +lemma pre_promote: "(x\-q) * x \ Z = (x\-q) * x * -q \ Z" nitpick [expect=genuine,card=6] oops +lemma pre_mult_sub_promote: "(x*y\-q) * x \ (x*y\-q) * x * (y\-q) \ Z" nitpick [expect=genuine,card=6] oops +lemma pre_mult_promote: "(x*y\-q) * x * (y\-q) \ Z = (x*y\-q) * x \ Z" nitpick [expect=genuine,card=6] oops +*) + +end + +class left_zero_box_precondition = box_precondition + relative_left_zero_antidomain_semiring +begin + +lemma pre_sub_promote: + "(x\-q) * x \ (x\-q) * x * -q \ Z" + using case_split_right_sup pre_promote_neg by blast + +lemma pre_promote: + "(x\-q) * x \ Z = (x\-q) * x * -q \ Z" + apply (rule sup_same_context) + apply (simp add: pre_sub_promote) + by (metis a_below_one le_supI1 mult_1_right mult_right_isotone) + +lemma pre_mult_sub_promote: + "(x*y\-q) * x \ (x*y\-q) * x * (y\-q) \ Z" + by (metis pre_closed pre_seq pre_sub_promote) + +lemma pre_mult_promote_sub: + "(x*y\-q) * x * (y\-q) \ (x*y\-q) * x" + by (metis mult_right_isotone mult_1_right pre_below_one) + +lemma pre_mult_promote: + "(x*y\-q) * x * (y\-q) \ Z = (x*y\-q) * x \ Z" + by (metis sup_ge1 sup_same_context order_trans 
pre_mult_sub_promote pre_mult_promote_sub) + +end + +class diamond_precondition = relative_box_semiring + pre + + assumes pre_def: "x\p = |x>p" +begin + +text \Theorem 47\ + +subclass precondition + apply unfold_locales + apply (simp add: d_def diamond_def pre_def) + apply (simp add: diamond_left_mult pre_def) + apply (metis a_antitone a_dist_sup box_antitone_isotone box_deMorgan_1 order.refl pre_def sup_right_divisibility) + by (simp add: diamond_1_a pre_def) + +subclass precondition_test_test + apply unfold_locales + by (metis diamond_a_a_same diamond_a_export diamond_associative diamond_right_mult pre_def) + +subclass precondition_promote + apply unfold_locales + using d_def diamond_def pre_def pre_test_test tests_dual.sub_sup_closed by force + +subclass precondition_test_diamond + apply unfold_locales + by (simp add: diamond_a_a pre_def) + +lemma pre_left_dist_add: + "x\y\-q = (x\-q) \ (y\-q)" + by (simp add: diamond_left_dist_sup pre_def) + +lemma pre_left_isotone: + "x \ y \ x\-q \ y\-q" + by (metis diamond_left_isotone pre_def) + +end + +class box_while = box_precondition + bounded_left_conway_semiring + ite + while + + assumes ite_def: "x\p\y = p * x \ -p * y" + assumes while_def: "p\x = (p * x)\<^sup>\ * -p" +begin + +subclass bounded_relative_antidomain_semiring .. 
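As a concrete sanity check (not part of the formal development): with Z = bot, the box operator |x]q over finite relations is the usual weakest liberal precondition, and tests are sets of states. The laws pre_left_dist_add, pre_seq (via box_left_mult) and pre_left_antitone can then be observed directly. The following Python sketch uses invented sample relations; all helper names are illustrative.

```python
# Finite-relation model of the box operator |x]q (weakest liberal
# precondition).  Sample states/relations are invented for illustration.

def box(x, q, states):
    """|x]q: states all of whose x-successors lie in q."""
    return {s for s in states if all(t in q for (a, t) in x if a == s)}

def compose(x, y):
    """Relational composition x ; y."""
    return {(a, c) for (a, b) in x for (b2, c) in y if b == b2}

states = {0, 1, 2, 3}
x = {(0, 1), (1, 2), (2, 2)}
y = {(1, 3), (2, 0), (3, 3)}
q = {0, 3}

# pre_left_dist_add / box_left_dist_sup: |x join y]q = |x]q meet |y]q
assert box(x | y, q, states) == box(x, q, states) & box(y, q, states)
# pre_seq via box_left_mult: |x ; y]q = |x](|y]q)
assert box(compose(x, y), q, states) == box(x, box(y, q, states), states)
# pre_left_antitone / box_antitone_isotone: x below y implies |y]q below |x]q
assert box(x | y, q, states) <= box(x, q, states)
```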
+ +lemma Z_circ_left_zero: + "Z * x\<^sup>\ = Z" + using Z_left_zero_above_one circ_plus_one sup.absorb_iff2 by auto + +subclass ifthenelse + apply unfold_locales + by (smt a_d_closed box_a_export box_left_dist_sup box_x_a tests_dual.case_duality d_def ite_def pre_def) + +text \Theorem 48.1\ + +subclass whiledo + apply unfold_locales + apply (smt circ_loop_fixpoint ite_def ite_pre mult_assoc mult_1_right pre_one pre_seq while_def) + using pre_mult_test_promote while_def by auto + +lemma pre_while_1: + "-p*(-p\x)\1 = -p\x\1" +proof - + have "--p*(-p*(-p\x)\1) = --p*(-p\x\1)" + by (metis mult_1_right pre_closed pre_seq pre_test_neg tests_dual.sba_dual.top_double_complement while_pre_else) + thus ?thesis + by (smt (z3) pre_closed pre_import tests_dual.sba_dual.top_double_complement tests_dual.sup_eq_cases) +qed + +lemma aL_one_circ: + "aL = a(1\<^sup>\*bot)" + by (metis aL_def box_left_mult box_x_a idempotent_bot_closed idempotent_one_closed pre_def tests_dual.sba_dual.one_def while_def tests_dual.one_def) + +end + +class diamond_while = diamond_precondition + bounded_left_conway_semiring + ite + while + + assumes ite_def: "x\p\y = p * x \ -p * y" + assumes while_def: "p\x = (p * x)\<^sup>\ * -p" +begin + +subclass bounded_relative_antidomain_semiring .. + +lemma Z_circ_left_zero: + "Z * x\<^sup>\ = Z" + by (simp add: Z_left_zero_above_one circ_reflexive) + +subclass ifthenelse + apply unfold_locales + by (simp add: ite_def pre_export pre_left_dist_add) + +text \Theorem 48.2\ + +subclass whiledo + apply unfold_locales + apply (smt circ_loop_fixpoint ite_def ite_pre mult_assoc mult_1_right pre_one pre_seq while_def) + by (simp add: pre_mult_test_promote while_def) + +lemma aL_one_circ: + "aL = d(1\<^sup>\*bot)" + by (metis aL_def tests_dual.complement_bot diamond_x_1 mult_left_one pre_def while_def) + +end + +class box_while_program = box_while + atoms +begin + +subclass while_program .. 
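A runnable reading of while_def, which composes the Conway circ of the guarded body with the negated guard: in the finite-relation instance, taking the circ to be the Kleene star gives the partial-correctness wlp of a while loop, computable as a greatest fixpoint. This is a hypothetical sketch; wlp_while, restrict and box are illustrative names, not identifiers from the theory.

```python
# Sketch of the while-loop semantics (p ; x)^circ ; not-p in the
# finite-relation model, with the Conway circ read as the Kleene star
# (the partial-correctness instance).

def box(x, q, states):
    """|x]q: states all of whose x-successors lie in q."""
    return {s for s in states if all(t in q for (a, t) in x if a == s)}

def restrict(p, x):
    """p ; x: run x only from states satisfying the test p."""
    return {(a, b) for (a, b) in x if a in p}

def wlp_while(p, x, q, states):
    """|(p ; x)^* ; not-p] q, the greatest fixpoint of
    Z |-> (not-p implies q) meet |p ; x] Z, by decreasing iteration."""
    px = restrict(p, x)
    exit_ok = p | q                 # s not in p implies s in q
    z, prev = set(states), None
    while z != prev:
        prev, z = z, exit_ok & box(px, z, states)
    return z

states = {0, 1, 2, 3}
# guard p = {0, 1}; from state 1 the body may exit into 3, which is
# outside q, so only the already-terminated good state 2 satisfies the wlp
assert wlp_while({0, 1}, {(0, 1), (1, 2), (1, 3)}, {2}, states) == {2}
# a terminating countdown establishes q = {0} from every state
assert wlp_while({1, 2, 3}, {(1, 0), (2, 1), (3, 2)}, {0}, states) == states
```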
+ +end + +class diamond_while_program = diamond_while + atoms +begin + +subclass while_program .. + +end + +class box_hoare_calculus = box_while_program + complete_antidomain_semiring +begin + +subclass hoare_calculus .. + +end + +class diamond_hoare_calculus = diamond_while_program + complete_antidomain_semiring +begin + +subclass hoare_calculus .. + +end + +class box_hoare_sound = box_hoare_calculus + relative_domain_semiring_split + left_kleene_conway_semiring + + assumes aL_circ: "aL * x\<^sup>\ \ x\<^sup>\" +begin + +lemma aL_circ_ext: + "|x\<^sup>\]y \ |aL * x\<^sup>\]y" + by (simp add: aL_circ box_left_antitone) + +lemma box_star_induct: + assumes "-p \ |x](-p)" + shows "-p \ |x\<^sup>\](-p)" +proof - + have 1: "x*--p*top \ Z \ --p*top" + by (metis assms Z_top sup_commute box_demodalisation_2 mult_assoc mult_left_isotone shunting_Z) + have "x*(Z \ --p*top) \ x*--p*top \ Z" + using split_Z sup_monoid.add_commute mult_assoc by force + also have "... \ Z \ --p*top" + using 1 by simp + finally have "x*(Z \ --p*top) \ --p \ Z \ --p*top" + using le_supI2 sup.bounded_iff top_right_mult_increasing by auto + thus ?thesis + by (metis sup_commute box_demodalisation_2 mult_assoc shunting_Z star_left_induct) +qed + +lemma box_circ_induct: + "-p \ |x](-p) \ -p*aL \ |x\<^sup>\](-p)" + by (smt aL_circ_ext aL_test box_left_mult box_star_induct order_trans tests_dual.inf_commutative pre_closed pre_def pre_test tests_dual.shunting_right) + +lemma a_while_soundness: + assumes "-p*-q \ |x](-q)" + shows "aL*-q \ |(-p*x)\<^sup>\*--p](-q)" +proof - + have "|(-p*x)\<^sup>\](-q) \ |(-p*x)\<^sup>\*--p](-q)" + by (meson box_left_antitone circ_mult_upper_bound circ_reflexive order.refl order.trans tests_dual.sub_bot_least) + thus ?thesis + by (smt assms box_import_shunting box_circ_induct order_trans sub_comm aL_test) +qed + +subclass hoare_calculus_sound + apply unfold_locales + by (simp add: a_while_soundness pre_def while_def) + +end + +class diamond_hoare_sound = 
diamond_hoare_calculus + left_kleene_conway_semiring + + assumes aL_circ: "aL * x\<^sup>\ \ x\<^sup>\" +begin + +lemma aL_circ_equal: + "aL * x\<^sup>\ = aL * x\<^sup>\" + apply (rule order.antisym) + using aL_circ aL_one_circ d_restrict_iff_1 apply force + by (simp add: mult_right_isotone star_below_circ) + +lemma aL_zero: + "aL = bot" + by (smt aL_circ_equal aL_one_circ d_export d_idempotent diamond_d_bot diamond_def mult_assoc mult_1_right star_one) + +subclass hoare_calculus_sound + apply unfold_locales + using aL_zero by auto + +end + +class box_hoare_complete = box_hoare_calculus + left_kleene_conway_semiring + + assumes box_circ_induct_2: "-p*|x](-q) \ -q \ |x\<^sup>\](-p) \ -q\aL" + assumes aL_zero_or_one: "aL = bot \ aL = 1" + assumes while_mult_left_dist_Prod: "x \ While_program \ descending_chain t \ test_seq t \ x*Prod t = Prod (\n . x*t n)" +begin + +subclass hoare_calculus_complete + apply unfold_locales + apply (metis aL_zero_or_one bot_least order.eq_iff mult_1_right pre_closed tests_dual.sup_right_zero) + subgoal + apply (unfold pre_def box_def) + by (metis a_ascending_chain a_dist_Prod a_dist_Sum descending_chain_left_mult while_mult_left_dist_Prod test_seq_def) + by (smt box_circ_induct_2 tests_dual.double_negation tests_dual.greatest_lower_bound tests_dual.upper_bound_left mult_right_dist_sup pre_closed pre_def pre_import pre_seq pre_test sub_mult_closed while_def) + +end + +class diamond_hoare_complete = diamond_hoare_calculus + relative_domain_semiring_split + left_kleene_conway_semiring + + assumes dL_circ: "-aL*x\<^sup>\ \ x\<^sup>\" + assumes aL_zero_or_one: "aL = bot \ aL = 1" + assumes while_mult_left_dist_Sum: "x \ While_program \ ascending_chain t \ test_seq t \ x*Sum t = Sum (\n . 
x*t n)" +begin + +lemma diamond_star_induct_var: + assumes "|x>(d p) \ d p" + shows "|x\<^sup>\>(d p) \ d p" +proof - + have "x * (d p * x\<^sup>\ \ Z) \ d p * x * x\<^sup>\ \ Z * x\<^sup>\ \ Z" + by (metis assms sup_left_isotone d_mult_d diamond_def diamond_demodalisation_3 mult_assoc mult_left_isotone mult_right_dist_sup order_trans split_Z) + also have "... \ d p * x\<^sup>\ \ Z" + by (metis Z_mult_decreasing mult_right_isotone star.left_plus_below_circ sup.bounded_iff sup_ge1 sup_mono sup_monoid.add_commute mult_assoc) + finally show ?thesis + by (smt sup_commute le_sup_iff sup_ge2 d_mult_d diamond_def diamond_demodalisation_3 order_trans star.circ_back_loop_prefixpoint star_left_induct) +qed + +lemma diamond_star_induct: + "d q \ |x>(d p) \ d p \ |x\<^sup>\>(d q) \ d p" + by (metis le_sup_iff diamond_star_induct_var diamond_right_isotone order_trans) + +lemma while_completeness_1: + assumes "-p*(x\-q) \ -q" + shows "-p\x\-q \ -q\aL" +proof - + have "--p*-q \ |-p*x>(-q) \ -q" + using assms pre_def pre_export tests_dual.upper_bound_right by auto + hence "|(-p*x)\<^sup>\>(--p*-q) \ -q" + by (smt diamond_star_induct d_def sub_mult_closed tests_dual.double_negation) + hence "|-aL*(-p*x)\<^sup>\>(--p*-q) \ -q" + by (meson dL_circ diamond_isotone order.eq_iff order.trans) + thus ?thesis + by (smt aL_test diamond_a_export diamond_def mult_assoc tests_dual.inf_commutative pre_closed pre_def tests_dual.shunting while_def) +qed + +subclass hoare_calculus_complete + apply unfold_locales + apply (metis aL_test aL_zero_or_one bot_least order.eq_iff pre_closed pre_test pre_test_one tests_dual.sup_right_zero) + subgoal + apply (unfold pre_def diamond_def) + by (simp add: ascending_chain_left_mult d_dist_Sum while_mult_left_dist_Sum) + by (simp add: while_completeness_1) + +end + +class box_hoare_valid = box_hoare_sound + box_hoare_complete + hoare_triple + + assumes hoare_triple_def: "p\x\q \ p \ |x]q" +begin + +text \Theorem 49.2\ + +subclass hoare_calculus_valid + apply 
unfold_locales + by (simp add: hoare_triple_def pre_def) + +lemma rule_skip_valid: + "-p\1\-p" + by (simp add: rule_skip) + +end + +class diamond_hoare_valid = diamond_hoare_sound + diamond_hoare_complete + hoare_triple + + assumes hoare_triple_def: "p\x\q \ p \ |x>q" +begin + +lemma circ_star_equal: + "x\<^sup>\ = x\<^sup>\" + by (metis aL_zero order.antisym dL_circ mult_left_one one_def star_below_circ) + +text \Theorem 49.1\ + +subclass hoare_calculus_valid + apply unfold_locales + by (simp add: hoare_triple_def pre_def) + +end + +class diamond_hoare_sound_2 = diamond_hoare_calculus + left_kleene_conway_semiring + + assumes diamond_circ_induct_2: "--p*-q \ |x>(-q) \ aL*-q \ |x\<^sup>\>(-p)" +begin + +subclass hoare_calculus_sound + apply unfold_locales + by (smt a_export diamond_associative diamond_circ_induct_2 tests_dual.double_negation tests_dual.sup_complement_intro pre_def pre_import_equiv_mult sub_comm sub_mult_closed while_def) + +end + +class diamond_hoare_valid_2 = diamond_hoare_sound_2 + diamond_hoare_complete + hoare_triple + + assumes hoare_triple_def: "p\x\q \ p \ |x>q" +begin + +subclass hoare_calculus_valid + apply unfold_locales + by (simp add: hoare_triple_def pre_def) + +end + +end + diff --git a/thys/Correctness_Algebras/Lattice_Ordered_Semirings.thy b/thys/Correctness_Algebras/Lattice_Ordered_Semirings.thy new file mode 100644 --- /dev/null +++ b/thys/Correctness_Algebras/Lattice_Ordered_Semirings.thy @@ -0,0 +1,906 @@ +(* Title: Lattice-Ordered Semirings + Author: Walter Guttmann + Maintainer: Walter Guttmann +*) + +section \Lattice-Ordered Semirings\ + +theory Lattice_Ordered_Semirings + +imports Stone_Relation_Algebras.Semirings + +begin + +text \Many results in this theory are taken from a joint paper with Rudolf Berghammer.\ + +text \M0-algebra\ + +class lattice_ordered_pre_left_semiring = pre_left_semiring + bounded_distrib_lattice +begin + +subclass bounded_pre_left_semiring + apply unfold_locales + by simp + +lemma top_mult_right_one: 
+ "x * top = x * top * 1" + by (metis order.antisym mult_sub_right_one mult_sup_associative_one surjective_one_closed) + +lemma mult_left_sub_dist_inf_left: + "x * (y \ z) \ x * y" + by (simp add: mult_right_isotone) + +lemma mult_left_sub_dist_inf_right: + "x * (y \ z) \ x * z" + by (simp add: mult_right_isotone) + +lemma mult_right_sub_dist_inf_left: + "(x \ y) * z \ x * z" + by (simp add: mult_left_isotone) + +lemma mult_right_sub_dist_inf_right: + "(x \ y) * z \ y * z" + by (simp add: mult_left_isotone) + +lemma mult_right_sub_dist_inf: + "(x \ y) * z \ x * z \ y * z" + by (simp add: mult_right_sub_dist_inf_left mult_right_sub_dist_inf_right) + +text \Figure 1: fundamental properties\ + +definition co_total :: "'a \ bool" where "co_total x \ x * bot = bot" +definition up_closed :: "'a \ bool" where "up_closed x \ x * 1 = x" +definition sup_distributive :: "'a \ bool" where "sup_distributive x \ (\y z . x * (y \ z) = x * y \ x * z)" +definition inf_distributive :: "'a \ bool" where "inf_distributive x \ (\y z . 
x * (y \ z) = x * y \ x * z)" +definition contact :: "'a \ bool" where "contact x \ x * x \ 1 = x" +definition kernel :: "'a \ bool" where "kernel x \ x * x \ 1 = x * 1" +definition sup_dist_contact :: "'a \ bool" where "sup_dist_contact x \ sup_distributive x \ contact x" +definition inf_dist_kernel :: "'a \ bool" where "inf_dist_kernel x \ inf_distributive x \ kernel x" +definition test :: "'a \ bool" where "test x \ x * top \ 1 = x" +definition co_test :: "'a \ bool" where "co_test x \ x * bot \ 1 = x" +definition co_vector :: "'a \ bool" where "co_vector x \ x * bot = x" + +text \AAMP Theorem 6 / Figure 2: relations between properties\ + +lemma reflexive_total: + "reflexive x \ total x" + using sup_left_divisibility total_sup_closed by force + +lemma reflexive_dense: + "reflexive x \ dense_rel x" + using mult_left_isotone by fastforce + +lemma reflexive_transitive_up_closed: + "reflexive x \ transitive x \ up_closed x" + by (metis antisym_conv mult_isotone mult_sub_right_one reflexive_dense up_closed_def) + +lemma coreflexive_co_total: + "coreflexive x \ co_total x" + by (metis co_total_def order.eq_iff mult_left_isotone mult_left_one bot_least) + +lemma coreflexive_transitive: + "coreflexive x \ transitive x" + by (simp add: coreflexive_transitive) + +lemma idempotent_transitive_dense: + "idempotent x \ transitive x \ dense_rel x" + by (simp add: order.eq_iff) + +lemma contact_reflexive: + "contact x \ reflexive x" + using contact_def sup_right_divisibility by auto + +lemma contact_transitive: + "contact x \ transitive x" + using contact_def sup_left_divisibility by blast + +lemma contact_dense: + "contact x \ dense_rel x" + by (simp add: contact_reflexive reflexive_dense) + +lemma contact_idempotent: + "contact x \ idempotent x" + by (simp add: contact_dense contact_transitive idempotent_transitive_dense) + +lemma contact_up_closed: + "contact x \ up_closed x" + by (simp add: contact_reflexive contact_transitive reflexive_transitive_up_closed) + +lemma 
contact_reflexive_idempotent_up_closed: + "contact x \ reflexive x \ idempotent x \ up_closed x" + by (metis contact_def contact_idempotent contact_reflexive contact_up_closed sup_absorb2 sup_monoid.add_commute) + +lemma kernel_coreflexive: + "kernel x \ coreflexive x" + by (metis kernel_def inf.boundedE mult_sub_right_one) + +lemma kernel_transitive: + "kernel x \ transitive x" + by (simp add: coreflexive_transitive kernel_coreflexive) + +lemma kernel_dense: + "kernel x \ dense_rel x" + by (metis kernel_def inf.boundedE mult_sub_right_one) + +lemma kernel_idempotent: + "kernel x \ idempotent x" + by (simp add: idempotent_transitive_dense kernel_dense kernel_transitive) + +lemma kernel_up_closed: + "kernel x \ up_closed x" + by (metis kernel_coreflexive kernel_def kernel_idempotent inf.absorb1 up_closed_def) + +lemma kernel_coreflexive_idempotent_up_closed: + "kernel x \ coreflexive x \ idempotent x \ up_closed x" + by (metis kernel_coreflexive kernel_def kernel_idempotent inf.absorb1 up_closed_def) + +lemma test_coreflexive: + "test x \ coreflexive x" + using inf.sup_right_divisibility test_def by blast + +lemma test_up_closed: + "test x \ up_closed x" + by (metis order.eq_iff mult_left_one mult_sub_right_one mult_right_sub_dist_inf test_def top_mult_right_one up_closed_def) + +lemma co_test_reflexive: + "co_test x \ reflexive x" + using co_test_def sup_right_divisibility by blast + +lemma co_test_transitive: + "co_test x \ transitive x" + by (smt co_test_def sup_assoc le_iff_sup mult_left_one mult_left_zero mult_right_dist_sup mult_semi_associative) + +lemma co_test_idempotent: + "co_test x \ idempotent x" + by (simp add: co_test_reflexive co_test_transitive idempotent_transitive_dense reflexive_dense) + +lemma co_test_up_closed: + "co_test x \ up_closed x" + by (simp add: co_test_reflexive co_test_transitive reflexive_transitive_up_closed) + +lemma co_test_contact: + "co_test x \ contact x" + by (simp add: co_test_idempotent co_test_reflexive co_test_up_closed 
contact_reflexive_idempotent_up_closed) + +lemma vector_transitive: + "vector x \ transitive x" + by (metis mult_right_isotone top.extremum) + +lemma vector_up_closed: + "vector x \ up_closed x" + by (metis top_mult_right_one up_closed_def) + +text \AAMP Theorem 10 / Figure 3: closure properties\ + +text \total\ + +lemma one_total: + "total 1" + by simp + +lemma top_total: + "total top" + by simp + +lemma sup_total: + "total x \ total y \ total (x \ y)" + by (simp add: total_sup_closed) + +text \co-total\ + +lemma zero_co_total: + "co_total bot" + by (simp add: co_total_def) + +lemma one_co_total: + "co_total 1" + by (simp add: co_total_def) + +lemma sup_co_total: + "co_total x \ co_total y \ co_total (x \ y)" + by (simp add: co_total_def mult_right_dist_sup) + +lemma inf_co_total: + "co_total x \ co_total y \ co_total (x \ y)" + by (metis co_total_def order.antisym bot_least mult_right_sub_dist_inf_right) + +lemma comp_co_total: + "co_total x \ co_total y \ co_total (x * y)" + by (metis co_total_def order.eq_iff mult_semi_associative bot_least) + +text \sub-transitive\ + +lemma zero_transitive: + "transitive bot" + by (simp add: vector_transitive) + +lemma one_transitive: + "transitive 1" + by simp + +lemma top_transitive: + "transitive top" + by simp + +lemma inf_transitive: + "transitive x \ transitive y \ transitive (x \ y)" + by (meson inf_mono order_trans mult_left_sub_dist_inf_left mult_left_sub_dist_inf_right mult_right_sub_dist_inf) + +text \dense\ + +lemma zero_dense: + "dense_rel bot" + by simp + +lemma one_dense: + "dense_rel 1" + by simp + +lemma top_dense: + "dense_rel top" + by simp + +lemma sup_dense: + assumes "dense_rel x" + and "dense_rel y" + shows "dense_rel (x \ y)" +proof - + have "x \ x * x \ y \ y * y" + using assms by auto + hence "x \ (x \ y) * (x \ y) \ y \ (x \ y) * (x \ y)" + by (meson dense_sup_closed order_trans sup.cobounded1 sup.cobounded2) + hence "x \ y \ (x \ y) * (x \ y)" + by simp + thus "dense_rel (x \ y)" + by simp +qed + 
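Two of the Figure 3 closure properties proved above (inf_transitive, sup_dense) can be spot-checked exhaustively in the concrete model of finite relations on a two-element carrier. This is only an illustrative check in one model; the theory proves the laws abstractly.

```python
# Exhaustive check of inf_transitive and sup_dense over all 16 relations
# on a two-element carrier (illustrative; names are invented).
from itertools import product

def compose(x, y):
    return {(a, c) for (a, b) in x for (b2, c) in y if b == b2}

def transitive(x):
    return compose(x, x) <= x          # x ; x below x

def dense(x):
    return x <= compose(x, x)          # x below x ; x

pairs = list(product(range(2), range(2)))
# enumerate all relations via bitmasks over the 4 possible pairs
rels = [frozenset(p for i, p in enumerate(pairs) if m >> i & 1)
        for m in range(1 << len(pairs))]
dense_rels = [x for x in rels if dense(x)]
trans_rels = [x for x in rels if transitive(x)]

# sup_dense: the join of dense relations is dense
assert all(dense(x | y) for x in dense_rels for y in dense_rels)
# inf_transitive: the meet of transitive relations is transitive
assert all(transitive(x & y) for x in trans_rels for y in trans_rels)
# zero_dense and top_transitive are particular instances
assert dense(frozenset()) and transitive(frozenset(pairs))
```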
+text \reflexive\ + +lemma one_reflexive: + "reflexive 1" + by simp + +lemma top_reflexive: + "reflexive top" + by simp + +lemma sup_reflexive: + "reflexive x \ reflexive y \ reflexive (x \ y)" + by (simp add: reflexive_sup_closed) + +lemma inf_reflexive: + "reflexive x \ reflexive y \ reflexive (x \ y)" + by simp + +lemma comp_reflexive: + "reflexive x \ reflexive y \ reflexive (x * y)" + using reflexive_mult_closed by auto + +text \co-reflexive\ + +lemma zero_coreflexive: + "coreflexive bot" + by simp + +lemma one_coreflexive: + "coreflexive 1" + by simp + +lemma sup_coreflexive: + "coreflexive x \ coreflexive y \ coreflexive (x \ y)" + by simp + +lemma inf_coreflexive: + "coreflexive x \ coreflexive y \ coreflexive (x \ y)" + by (simp add: le_infI1) + +lemma comp_coreflexive: + "coreflexive x \ coreflexive y \ coreflexive (x * y)" + by (simp add: coreflexive_mult_closed) + +text \idempotent\ + +lemma zero_idempotent: + "idempotent bot" + by simp + +lemma one_idempotent: + "idempotent 1" + by simp + +lemma top_idempotent: + "idempotent top" + by simp + +text \up-closed\ + +lemma zero_up_closed: + "up_closed bot" + by (simp add: up_closed_def) + +lemma one_up_closed: + "up_closed 1" + by (simp add: up_closed_def) + +lemma top_up_closed: + "up_closed top" + by (simp add: vector_up_closed) + +lemma sup_up_closed: + "up_closed x \ up_closed y \ up_closed (x \ y)" + by (simp add: mult_right_dist_sup up_closed_def) + +lemma inf_up_closed: + "up_closed x \ up_closed y \ up_closed (x \ y)" + by (metis order.antisym mult_sub_right_one mult_right_sub_dist_inf up_closed_def) + +lemma comp_up_closed: + "up_closed x \ up_closed y \ up_closed (x * y)" + by (metis order.antisym mult_semi_associative mult_sub_right_one up_closed_def) + +text \add-distributive\ + +lemma zero_sup_distributive: + "sup_distributive bot" + by (simp add: sup_distributive_def) + +lemma one_sup_distributive: + "sup_distributive 1" + by (simp add: sup_distributive_def) + +lemma sup_sup_distributive: + 
"sup_distributive x \ sup_distributive y \ sup_distributive (x \ y)" + using sup_distributive_def mult_right_dist_sup sup_monoid.add_assoc sup_monoid.add_commute by auto + +text \inf-distributive\ + +lemma zero_inf_distributive: + "inf_distributive bot" + by (simp add: inf_distributive_def) + +lemma one_inf_distributive: + "inf_distributive 1" + by (simp add: inf_distributive_def) + +text \contact\ + +lemma one_contact: + "contact 1" + by (simp add: contact_def) + +lemma top_contact: + "contact top" + by (simp add: contact_def) + +lemma inf_contact: + "contact x \ contact y \ contact (x \ y)" + by (meson contact_reflexive_idempotent_up_closed contact_transitive inf_reflexive inf_transitive inf_up_closed preorder_idempotent) + +text \kernel\ + +lemma zero_kernel: + "kernel bot" + by (simp add: kernel_def) + +lemma one_kernel: + "kernel 1" + by (simp add: kernel_def) + +lemma sup_kernel: + "kernel x \ kernel y \ kernel (x \ y)" + using kernel_coreflexive_idempotent_up_closed order.antisym coreflexive_transitive sup_dense sup_up_closed by force + +text \add-distributive contact\ + +lemma one_sup_dist_contact: + "sup_dist_contact 1" + by (simp add: sup_dist_contact_def one_sup_distributive one_contact) + +text \inf-distributive kernel\ + +lemma zero_inf_dist_kernel: + "inf_dist_kernel bot" + by (simp add: inf_dist_kernel_def zero_kernel zero_inf_distributive) + +lemma one_inf_dist_kernel: + "inf_dist_kernel 1" + by (simp add: inf_dist_kernel_def one_kernel one_inf_distributive) + +text \test\ + +lemma zero_test: + "test bot" + by (simp add: test_def) + +lemma one_test: + "test 1" + by (simp add: test_def) + +lemma sup_test: + "test x \ test y \ test (x \ y)" + by (simp add: inf_sup_distrib2 mult_right_dist_sup test_def) + +lemma inf_test: + "test x \ test y \ test (x \ y)" + by (smt (z3) inf.left_commute idempotent_one_closed inf.le_iff_sup inf_top.right_neutral mult_right_isotone mult_sub_right_one mult_right_sub_dist_inf test_def top_mult_right_one) + +text \co-test\ 
+ +lemma one_co_test: + "co_test 1" + by (simp add: co_test_def) + +lemma sup_co_test: + "co_test x \ co_test y \ co_test (x \ y)" + by (smt (z3) co_test_def mult_right_dist_sup sup.left_idem sup_assoc sup_commute) + +text \vector\ + +lemma zero_vector: + "vector bot" + by simp + +lemma top_vector: + "vector top" + by simp + +lemma sup_vector: + "vector x \ vector y \ vector (x \ y)" + by (simp add: vector_sup_closed) + +lemma inf_vector: + "vector x \ vector y \ vector (x \ y)" + by (metis order.antisym top_right_mult_increasing mult_right_sub_dist_inf) + +lemma comp_vector: + "vector y \ vector (x * y)" + by (simp add: vector_mult_closed) + +end + +class lattice_ordered_pre_left_semiring_1 = non_associative_left_semiring + bounded_distrib_lattice + + assumes mult_associative_one: "x * (y * z) = (x * (y * 1)) * z" + assumes mult_right_dist_inf_one: "(x * 1 \ y * 1) * z = x * z \ y * z" +begin + +subclass pre_left_semiring + apply unfold_locales + by (metis mult_associative_one mult_left_isotone mult_right_isotone mult_sub_right_one) + +subclass lattice_ordered_pre_left_semiring .. 
+ +lemma mult_zero_associative: + "x * bot * y = x * bot" + by (metis mult_associative_one mult_left_zero) + +lemma mult_zero_sup_one_dist: + "(x * bot \ 1) * z = x * bot \ z" + by (simp add: mult_right_dist_sup mult_zero_associative) + +lemma mult_zero_sup_dist: + "(x * bot \ y) * z = x * bot \ y * z" + by (simp add: mult_right_dist_sup mult_zero_associative) + +lemma vector_zero_inf_one_comp: + "(x * bot \ 1) * y = x * bot \ y" + by (metis mult_left_one mult_right_dist_inf_one mult_zero_associative) + +text \AAMP Theorem 6 / Figure 2: relations between properties\ + +lemma co_test_inf_distributive: + "co_test x \ inf_distributive x" + by (metis co_test_def distrib_imp1 inf_sup_distrib1 inf_distributive_def mult_zero_sup_one_dist) + +lemma co_test_sup_distributive: + "co_test x \ sup_distributive x" + by (metis sup_sup_distributive sup_distributive_def co_test_def one_sup_distributive sup.idem mult_zero_associative) + +lemma co_test_sup_dist_contact: + "co_test x \ sup_dist_contact x" + by (simp add: co_test_sup_distributive sup_dist_contact_def co_test_contact) + +text \AAMP Theorem 10 / Figure 3: closure properties\ + +text \co-test\ + +lemma inf_co_test: + "co_test x \ co_test y \ co_test (x \ y)" + by (smt (z3) co_test_def co_test_up_closed mult_right_dist_inf_one sup_commute sup_inf_distrib1 up_closed_def) + +lemma comp_co_test: + "co_test x \ co_test y \ co_test (x * y)" + by (metis co_test_def mult_associative_one sup_assoc mult_zero_sup_one_dist) + +end + +class lattice_ordered_pre_left_semiring_2 = lattice_ordered_pre_left_semiring + + assumes mult_sub_associative_one: "x * (y * z) \ (x * (y * 1)) * z" + assumes mult_right_dist_inf_one_sub: "x * z \ y * z \ (x * 1 \ y * 1) * z" +begin + +subclass lattice_ordered_pre_left_semiring_1 + apply unfold_locales + apply (simp add: order.antisym mult_sub_associative_one mult_sup_associative_one) + by (metis order.eq_iff mult_one_associative mult_right_dist_inf_one_sub mult_right_sub_dist_inf) + +end + +class 
multirelation_algebra_1 = lattice_ordered_pre_left_semiring + + assumes mult_left_top: "top * x = top" +begin + +text \AAMP Theorem 10 / Figure 3: closure properties\ + +lemma top_sup_distributive: + "sup_distributive top" + by (simp add: sup_distributive_def mult_left_top) + +lemma top_inf_distributive: + "inf_distributive top" + by (simp add: inf_distributive_def mult_left_top) + +lemma top_sup_dist_contact: + "sup_dist_contact top" + by (simp add: sup_dist_contact_def top_contact top_sup_distributive) + +lemma top_co_test: + "co_test top" + by (simp add: co_test_def mult_left_top) + +end + +text \M1-algebra\ + +class multirelation_algebra_2 = multirelation_algebra_1 + lattice_ordered_pre_left_semiring_2 +begin + +lemma mult_top_associative: + "x * top * y = x * top" + by (metis mult_left_top mult_associative_one) + +lemma vector_inf_one_comp: + "(x * top \ 1) * y = x * top \ y" + by (metis vector_zero_inf_one_comp mult_top_associative) + +lemma vector_left_annihilator: + "vector x \ x * y = x" + by (metis mult_top_associative) + +text \properties\ + +lemma test_comp_inf: + "test x \ test y \ x * y = x \ y" + by (metis inf.absorb1 inf.left_commute test_coreflexive test_def vector_inf_one_comp) + +text \AAMP Theorem 6 / Figure 2: relations between properties\ + +lemma test_sup_distributive: + "test x \ sup_distributive x" + by (metis sup_distributive_def inf_sup_distrib1 test_def vector_inf_one_comp) + +lemma test_inf_distributive: + "test x \ inf_distributive x" + by (smt (verit, ccfv_SIG) inf.commute inf.sup_monoid.add_assoc inf_distributive_def test_def inf.idem vector_inf_one_comp) + +lemma test_inf_dist_kernel: + "test x \ inf_dist_kernel x" + by (simp add: kernel_def inf_dist_kernel_def one_test test_comp_inf test_inf_distributive) + +lemma vector_idempotent: + "vector x \ idempotent x" + using vector_left_annihilator by blast + +lemma vector_sup_distributive: + "vector x \ sup_distributive x" + by (simp add: sup_distributive_def vector_left_annihilator) + 
+lemma vector_inf_distributive: + "vector x \ inf_distributive x" + by (simp add: inf_distributive_def vector_left_annihilator) + +lemma vector_co_vector: + "vector x \ co_vector x" + by (metis co_vector_def mult_zero_associative mult_top_associative) + +text \AAMP Theorem 10 / Figure 3: closure properties\ + +text \test\ + +lemma comp_test: + "test x \ test y \ test (x * y)" + by (simp add: inf_test test_comp_inf) + +end + +class dual = + fixes dual :: "'a \ 'a" ("_\<^sup>d" [100] 100) + +class multirelation_algebra_3 = lattice_ordered_pre_left_semiring + dual + + assumes dual_involutive: "x\<^sup>d\<^sup>d = x" + assumes dual_dist_sup: "(x \ y)\<^sup>d = x\<^sup>d \ y\<^sup>d" + assumes dual_one: "1\<^sup>d = 1" +begin + +lemma dual_dist_inf: + "(x \ y)\<^sup>d = x\<^sup>d \ y\<^sup>d" + by (metis dual_dist_sup dual_involutive) + +lemma dual_antitone: + "x \ y \ y\<^sup>d \ x\<^sup>d" + using dual_dist_sup sup_right_divisibility by fastforce + +lemma dual_zero: + "bot\<^sup>d = top" + by (metis dual_antitone bot_least dual_involutive top_le) + +lemma dual_top: + "top\<^sup>d = bot" + using dual_zero dual_involutive by auto + +text \AAMP Theorem 10 / Figure 3: closure properties\ + +lemma reflexive_coreflexive_dual: + "reflexive x \ coreflexive (x\<^sup>d)" + using dual_antitone dual_involutive dual_one by fastforce + +end + +class multirelation_algebra_4 = multirelation_algebra_3 + + assumes dual_sub_dist_comp: "(x * y)\<^sup>d \ x\<^sup>d * y\<^sup>d" +begin + +subclass multirelation_algebra_1 + apply unfold_locales + by (metis order.antisym top.extremum dual_zero dual_sub_dist_comp dual_involutive mult_left_zero) + +lemma dual_sub_dist_comp_one: + "(x * y)\<^sup>d \ (x * 1)\<^sup>d * y\<^sup>d" + by (metis dual_sub_dist_comp mult_one_associative) + +text \AAMP Theorem 10 / Figure 3: closure properties\ + +lemma co_total_total_dual: + "co_total x \ total (x\<^sup>d)" + by (metis co_total_def dual_sub_dist_comp dual_zero top_le) + +lemma transitive_dense_dual: + 
+  "transitive x \<Longrightarrow> dense_rel (x\<^sup>d)"
+  using dual_antitone dual_sub_dist_comp inf.order_lesseq_imp by blast
+
+end
+
+text \<open>M2-algebra\<close>
+
+class multirelation_algebra_5 = multirelation_algebra_3 +
+  assumes dual_dist_comp_one: "(x * y)\<^sup>d = (x * 1)\<^sup>d * y\<^sup>d"
+begin
+
+subclass multirelation_algebra_4
+  apply unfold_locales
+  by (metis dual_antitone mult_sub_right_one mult_left_isotone dual_dist_comp_one)
+
+lemma strong_up_closed:
+  "x * 1 \<le> x \<Longrightarrow> x\<^sup>d * y\<^sup>d \<le> (x * y)\<^sup>d"
+  by (simp add: dual_dist_comp_one antisym_conv mult_sub_right_one)
+
+lemma strong_up_closed_2:
+  "up_closed x \<Longrightarrow> (x * y)\<^sup>d = x\<^sup>d * y\<^sup>d"
+  by (simp add: dual_dist_comp_one up_closed_def)
+
+subclass lattice_ordered_pre_left_semiring_2
+  apply unfold_locales
+  apply (smt comp_up_closed dual_antitone dual_dist_comp_one dual_involutive dual_one mult_left_one mult_one_associative mult_semi_associative up_closed_def strong_up_closed_2)
+  by (smt dual_dist_comp_one dual_dist_inf dual_involutive eq_refl mult_one_associative mult_right_dist_sup)
+
+text \<open>AAMP Theorem 8\<close>
+
+subclass multirelation_algebra_2 ..
+
+text \<open>AAMP Theorem 10 / Figure 3: closure properties\<close>
+
+text \<open>up-closed\<close>
+
+lemma dual_up_closed:
+  "up_closed x \<longleftrightarrow> up_closed (x\<^sup>d)"
+  by (metis dual_involutive dual_one up_closed_def strong_up_closed_2)
+
+text \<open>contact\<close>
+
+lemma contact_kernel_dual:
+  "contact x \<longleftrightarrow> kernel (x\<^sup>d)"
+  by (metis contact_def contact_up_closed dual_dist_sup dual_involutive dual_one kernel_def kernel_up_closed up_closed_def strong_up_closed_2)
+
+text \<open>add-distributive contact\<close>
+
+lemma sup_dist_contact_inf_dist_kernel_dual:
+  "sup_dist_contact x \<longleftrightarrow> inf_dist_kernel (x\<^sup>d)"
+proof
+  assume 1: "sup_dist_contact x"
+  hence 2: "up_closed x"
+    using sup_dist_contact_def contact_up_closed by auto
+  have "sup_distributive x"
+    using 1 sup_dist_contact_def by auto
+  hence "inf_distributive (x\<^sup>d)"
+    using 2 by (smt sup_distributive_def dual_dist_comp_one dual_dist_inf dual_involutive inf_distributive_def up_closed_def)
+  thus "inf_dist_kernel (x\<^sup>d)"
+    using 1 contact_kernel_dual sup_dist_contact_def inf_dist_kernel_def by blast
+next
+  assume 3: "inf_dist_kernel (x\<^sup>d)"
+  hence 4: "up_closed (x\<^sup>d)"
+    using kernel_up_closed inf_dist_kernel_def by auto
+  have "inf_distributive (x\<^sup>d)"
+    using 3 inf_dist_kernel_def by auto
+  hence "sup_distributive (x\<^sup>d\<^sup>d)"
+    using 4 by (smt inf_distributive_def sup_distributive_def dual_dist_sup dual_involutive strong_up_closed_2)
+  thus "sup_dist_contact x"
+    using 3 contact_kernel_dual sup_dist_contact_def dual_involutive inf_dist_kernel_def by auto
+qed
+
+text \<open>test\<close>
+
+lemma test_co_test_dual:
+  "test x \<longleftrightarrow> co_test (x\<^sup>d)"
+  by (smt (z3) co_test_def co_test_up_closed dual_dist_comp_one dual_dist_inf dual_involutive dual_one dual_top test_def test_up_closed up_closed_def)
+
+text \<open>vector\<close>
+
+lemma vector_dual:
+  "vector x \<longleftrightarrow> vector (x\<^sup>d)"
+  by (metis dual_dist_comp_one dual_involutive mult_top_associative)
+
+end
+
+class multirelation_algebra_6 = multirelation_algebra_4 +
+  assumes
+    dual_sub_dist_comp_one: "(x * 1)\<^sup>d * y\<^sup>d \<le> (x * y)\<^sup>d"
+begin
+
+subclass multirelation_algebra_5
+  apply unfold_locales
+  by (metis dual_sub_dist_comp dual_sub_dist_comp_one order.eq_iff mult_one_associative)
+
+(*
+lemma "dense_rel x \<and> coreflexive x \<longrightarrow> up_closed x" nitpick [expect=genuine,card=5] oops
+lemma "x * top \<sqinter> y * z \<le> (x * top \<sqinter> y) * z" nitpick [expect=genuine,card=8] oops
+*)
+
+end
+
+text \<open>M3-algebra\<close>
+
+class up_closed_multirelation_algebra = multirelation_algebra_3 +
+  assumes dual_dist_comp: "(x * y)\<^sup>d = x\<^sup>d * y\<^sup>d"
+begin
+
+lemma mult_right_dist_inf:
+  "(x \<sqinter> y) * z = x * z \<sqinter> y * z"
+  by (metis dual_dist_sup dual_dist_comp dual_involutive mult_right_dist_sup)
+
+text \<open>AAMP Theorem 9\<close>
+
+subclass idempotent_left_semiring
+  apply unfold_locales
+  apply (metis order.antisym dual_antitone dual_dist_comp dual_involutive mult_semi_associative)
+  apply simp
+  by (metis order.antisym dual_antitone dual_dist_comp dual_involutive dual_one mult_sub_right_one)
+
+subclass multirelation_algebra_6
+  apply unfold_locales
+  by (simp_all add: dual_dist_comp)
+
+lemma vector_inf_comp:
+  "(x * top \<sqinter> y) * z = x * top \<sqinter> y * z"
+  by (simp add: vector_left_annihilator mult_right_dist_inf mult.assoc)
+
+lemma vector_zero_inf_comp:
+  "(x * bot \<sqinter> y) * z = x * bot \<sqinter> y * z"
+  by (simp add: mult_right_dist_inf mult.assoc)
+
+text \<open>AAMP Theorem 10 / Figure 3: closure properties\<close>
+
+text \<open>total\<close>
+
+lemma inf_total:
+  "total x \<Longrightarrow> total y \<Longrightarrow> total (x \<sqinter> y)"
+  by (simp add: mult_right_dist_inf)
+
+lemma comp_total:
+  "total x \<Longrightarrow> total y \<Longrightarrow> total (x * y)"
+  by (simp add: mult_assoc)
+
+lemma total_co_total_dual:
+  "total x \<longleftrightarrow> co_total (x\<^sup>d)"
+  by (metis co_total_def dual_dist_comp dual_involutive dual_top)
+
+text \<open>dense\<close>
+
+lemma transitive_iff_dense_dual:
+  "transitive x \<longleftrightarrow> dense_rel (x\<^sup>d)"
+  by (metis dual_antitone dual_dist_comp dual_involutive)
+
+text \<open>idempotent\<close>
+
+lemma idempotent_dual:
+  "idempotent x \<longleftrightarrow> idempotent (x\<^sup>d)"
+  using
dual_involutive idempotent_transitive_dense transitive_iff_dense_dual by auto + +text \add-distributive\ + +lemma comp_sup_distributive: + "sup_distributive x \ sup_distributive y \ sup_distributive (x * y)" + by (simp add: sup_distributive_def mult.assoc) + +lemma sup_inf_distributive_dual: + "sup_distributive x \ inf_distributive (x\<^sup>d)" + by (smt (verit, ccfv_threshold) sup_distributive_def dual_dist_sup dual_dist_comp dual_dist_inf dual_involutive inf_distributive_def) + +text \inf-distributive\ + +lemma inf_inf_distributive: + "inf_distributive x \ inf_distributive y \ inf_distributive (x \ y)" + by (metis sup_inf_distributive_dual sup_sup_distributive dual_dist_inf dual_involutive) + +lemma comp_inf_distributive: + "inf_distributive x \ inf_distributive y \ inf_distributive (x * y)" + by (simp add: inf_distributive_def mult.assoc) + +(* +lemma "co_total x \ transitive x \ up_closed x \ coreflexive x" nitpick [expect=genuine,card=5] oops +lemma "total x \ dense_rel x \ up_closed x \ reflexive x" nitpick [expect=genuine,card=5] oops +lemma "x * top \ x\<^sup>d * bot = bot" nitpick [expect=genuine,card=6] oops +*) + +end + +class multirelation_algebra_7 = multirelation_algebra_4 + + assumes vector_inf_comp: "(x * top \ y) * z = x * top \ y * z" +begin + +lemma vector_zero_inf_comp: + "(x * bot \ y) * z = x * bot \ y * z" + by (metis vector_inf_comp vector_mult_closed zero_vector) + +lemma test_sup_distributive: + "test x \ sup_distributive x" + by (metis sup_distributive_def inf_sup_distrib1 mult_left_one test_def vector_inf_comp) + +lemma test_inf_distributive: + "test x \ inf_distributive x" + by (smt (z3) inf.right_idem inf.sup_monoid.add_assoc inf.sup_monoid.add_commute inf_distributive_def mult_left_one test_def vector_inf_comp) + +lemma test_inf_dist_kernel: + "test x \ inf_dist_kernel x" + by (metis inf.idem inf.sup_monoid.add_assoc kernel_def inf_dist_kernel_def mult_left_one test_def test_inf_distributive vector_inf_comp) + +lemma 
co_test_inf_distributive: + assumes "co_test x" + shows "inf_distributive x" +proof - + have "x = x * bot \ 1" + using assms co_test_def by auto + hence "\y z . x * y \ x * z = x * (y \ z)" + by (metis distrib_imp1 inf_sup_absorb inf_sup_distrib1 mult_left_one mult_left_top mult_right_dist_sup sup_top_right vector_zero_inf_comp) + thus "inf_distributive x" + by (simp add: inf_distributive_def) +qed + +lemma co_test_sup_distributive: + assumes "co_test x" + shows "sup_distributive x" +proof - + have "x = x * bot \ 1" + using assms co_test_def by auto + hence "\y z . x * (y \ z) = x * y \ x * z" + by (metis sup_sup_distributive sup_distributive_def inf_sup_absorb mult_left_top one_sup_distributive sup.idem sup_top_right vector_zero_inf_comp) + thus "sup_distributive x" + by (simp add: sup_distributive_def) +qed + +lemma co_test_sup_dist_contact: + "co_test x \ sup_dist_contact x" + by (simp add: sup_dist_contact_def co_test_sup_distributive co_test_contact) + +end + +end + diff --git a/thys/Correctness_Algebras/Monotonic_Boolean_Transformers.thy b/thys/Correctness_Algebras/Monotonic_Boolean_Transformers.thy new file mode 100644 --- /dev/null +++ b/thys/Correctness_Algebras/Monotonic_Boolean_Transformers.thy @@ -0,0 +1,632 @@ +(* Title: Monotonic Boolean Transformers + Author: Walter Guttmann + Maintainer: Walter Guttmann +*) + +section \Monotonic Boolean Transformers\ + +theory Monotonic_Boolean_Transformers + +imports MonoBoolTranAlgebra.Assertion_Algebra Base + +begin + +no_notation inf (infixl "\" 70) +no_notation uminus ("- _" [81] 80) + +context mbt_algebra +begin + +lemma directed_left_mult: + "directed Y \ directed ((*) x ` Y)" + apply (unfold directed_def) + using le_comp by blast + +lemma neg_assertion: + "neg_assert x \ assertion" + by (metis bot_comp neg_assert_def wpt_def wpt_is_assertion mult_assoc) + +lemma assertion_neg_assert: + "x \ assertion \ x = neg_assert (neg_assert x)" + by (metis neg_assertion uminus_uminus) + +text \extend and dualise part of 
Viorel Preoteasa's theory\ + +definition "assumption \ {x . 1 \ x \ (x * bot) \ (x ^ o) = x}" + +definition "neg_assume (x::'a) \ (x ^ o * top) \ 1" + +lemma neg_assume_assert: + "neg_assume x = (neg_assert (x ^ o)) ^ o" + using dual_bot dual_comp dual_dual dual_inf dual_one neg_assert_def neg_assume_def by auto + +lemma assert_iff_assume: + "x \ assertion \ x ^ o \ assumption" + by (smt assertion_def assumption_def dual_bot dual_comp dual_dual dual_inf dual_le dual_one mem_Collect_eq) + +lemma assertion_iff_assumption_subseteq: + "X \ assertion \ dual ` X \ assumption" + using assert_iff_assume by blast + +lemma assumption_iff_assertion_subseteq: + "X \ assumption \ dual ` X \ assertion" + using assert_iff_assume by auto + +lemma assumption_prop: + "x \ assumption \ (x * bot) \ 1 = x" + by (smt assert_iff_assume assertion_prop dual_comp dual_dual dual_neg_top dual_one dual_sup dual_top) + +lemma neg_assumption: + "neg_assume x \ assumption" + using assert_iff_assume neg_assertion neg_assume_assert by auto + +lemma assumption_neg_assume: + "x \ assumption \ x = neg_assume (neg_assume x)" + by (smt assert_iff_assume assertion_neg_assert dual_dual neg_assume_assert) + +lemma assumption_sup_comp_eq: + "x \ assumption \ y \ assumption \ x \ y = x * y" + by (smt assert_iff_assume assertion_inf_comp_eq dual_comp dual_dual dual_sup) + +lemma sup_uminus_assume[simp]: + "x \ assumption \ x \ neg_assume x = 1" + by (smt assert_iff_assume dual_dual dual_one dual_sup neg_assume_assert sup_uminus) + +lemma inf_uminus_assume[simp]: + "x \ assumption \ x \ neg_assume x = top" + by (smt assert_iff_assume dual_dual dual_sup dual_top inf_uminus neg_assume_assert sup_bot_right) + +lemma uminus_assumption[simp]: + "x \ assumption \ neg_assume x \ assumption" + by (simp add: neg_assumption) + +lemma uminus_uminus_assume[simp]: + "x \ assumption \ neg_assume (neg_assume x) = x" + by (simp add: assumption_neg_assume) + +lemma sup_assumption[simp]: + "x \ assumption \ y \ assumption \ x \ 
y \ assumption" + by (smt assert_iff_assume dual_dual dual_sup inf_assertion) + +lemma comp_assumption[simp]: + "x \ assumption \ y \ assumption \ x * y \ assumption" + using assumption_sup_comp_eq sup_assumption by auto + +lemma inf_assumption[simp]: + "x \ assumption \ y \ assumption \ x \ y \ assumption" + by (smt assert_iff_assume dual_dual dual_inf sup_assertion) + +lemma assumption_comp_idempotent[simp]: + "x \ assumption \ x * x = x" + using assumption_sup_comp_eq by fastforce + +lemma assumption_comp_idempotent_dual[simp]: + "x \ assumption \ (x ^ o) * (x ^ o) = x ^ o" + by (metis assumption_comp_idempotent dual_comp) + +lemma top_assumption[simp]: + "top \ assumption" + by (simp add: assumption_def) + +lemma one_assumption[simp]: + "1 \ assumption" + by (simp add: assumption_def) + +lemma assert_top: + "neg_assert (neg_assert p) ^ o * bot = neg_assert p * top" + by (smt bot_comp dual_comp dual_dual dual_top inf_comp inf_top_right mult.assoc mult.left_neutral neg_assert_def) + +lemma assume_bot: + "neg_assume (neg_assume p) ^ o * top = neg_assume p * bot" + by (smt assert_top dual_bot dual_comp dual_dual neg_assume_assert) + +definition "wpb x \ (x * bot) \ 1" + +lemma wpt_iff_wpb: + "wpb x = wpt (x ^ o) ^ o" + using dual_comp dual_dual dual_inf dual_one dual_top wpt_def wpb_def by auto + +lemma wpb_is_assumption[simp]: + "wpb x \ assumption" + using assert_iff_assume wpt_is_assertion wpt_iff_wpb by auto + +lemma wpb_comp: + "(wpb x) * x = x" + by (smt dual_comp dual_dual dual_neg_top dual_sup wpt_comp wpt_iff_wpb) + +lemma wpb_comp_2: + "wpb (x * y) = wpb (x * (wpb y))" + by (simp add: sup_comp mult_assoc wpb_def) + +lemma wpb_assumption[simp]: + "x \ assumption \ wpb x = x" + by (simp add: assumption_prop wpb_def) + +lemma wpb_choice: + "wpb (x \ y) = wpb x \ wpb y" + using sup_assoc sup_commute sup_comp wpb_def by auto + +lemma wpb_dual_assumption: + "x \ assumption \ wpb (x ^ o) = 1" + by (smt assert_iff_assume dual_dual dual_one wpt_dual_assertion 
wpt_iff_wpb) + +lemma wpb_mono: + "x \ y \ wpb x \ wpb y" + by (metis le_iff_sup wpb_choice) + +lemma assumption_disjunctive: + "x \ assumption \ x \ disjunctive" + by (smt assert_iff_assume assertion_conjunctive dual_comp dual_conjunctive dual_dual) + +lemma assumption_conjunctive: + "x \ assumption \ x \ conjunctive" + by (smt assert_iff_assume assertion_disjunctive dual_comp dual_disjunctive dual_dual) + +lemma wpb_le_assumption: + "x \ assumption \ x * y = y \ x \ wpb y" + by (metis assumption_prop bot_least le_comp sup_commute sup_right_isotone mult_assoc wpb_def) + +definition dual_omega :: "'a \ 'a" ("(_ ^ \)" [81] 80) + where "(x ^ \) \ (((x ^ o) ^ \) ^ o)" + +lemma dual_omega_fix: + "x^\ = (x * (x^\)) \ 1" + by (smt dual_comp dual_dual dual_omega_def dual_one dual_sup omega_fix) + +lemma dual_omega_comp_fix: + "x^\ * y = (x * (x^\) * y) \ y" + by (metis dual_omega_fix mult_1_left sup_comp) + +lemma dual_omega_greatest: + "z \ (x * z) \ y \ z \ (x^\) * y" + by (smt dual_comp dual_dual dual_le dual_neg_top dual_omega_def dual_sup omega_least) + +end + +context post_mbt_algebra +begin + +lemma post_antitone: + assumes "x \ y" + shows "post y \ post x" +proof - + have "post y \ post x * y * top \ post y" + by (metis assms inf_top_left post_1 inf_mono le_comp_left_right order_refl) + thus ?thesis + using order_lesseq_imp post_2 by blast +qed + +lemma post_assumption_below_one: + "q \ assumption \ post q \ post 1" + by (simp add: assumption_def post_antitone) + +lemma post_assumption_above_one: + "q \ assumption \ post 1 \ post (q ^ o)" + by (metis dual_le dual_one post_antitone sup.commute sup_ge1 wpb_assumption wpb_def) + +lemma post_assumption_below_dual: + "q \ assumption \ post q \ post (q ^ o)" + using order_trans post_assumption_above_one post_assumption_below_one by blast + +lemma assumption_assertion_absorb: + "q \ assumption \ q * (q ^ o) = q" + by (smt CollectE assumption_def assumption_prop bot_comp mult.left_neutral mult_assoc sup_comp) + +lemma 
post_dual_below_post_one: + assumes "q \ assumption" + shows "post (q ^ o) \ post 1 * q" +proof - + have "post (q ^ o) \ post 1 * q * (q ^ o) * top \ post (q ^ o)" + by (metis assms assumption_assertion_absorb gt_one_comp inf_le1 inf_top_left mult_assoc order_refl post_1 sup_uminus_assume top_unique) + thus ?thesis + using order_lesseq_imp post_2 by blast +qed + +lemma post_below_post_one: + "q \ assumption \ post q \ post 1 * q" + using order.trans post_assumption_below_dual post_dual_below_post_one by blast + +end + +context complete_mbt_algebra +begin + +lemma Inf_assumption[simp]: + "X \ assumption \ Inf X \ assumption" + by (metis Sup_assertion assert_iff_assume assumption_iff_assertion_subseteq dual_Inf dual_dual) + +definition "continuous x \ (\Y . directed Y \ x * (SUP y\Y . y) = (SUP y\Y . x * y))" + +definition "Continuous \ { x . continuous x }" + +lemma continuous_Continuous: + "continuous x \ x \ Continuous" + by (simp add: Continuous_def) + +text \Theorem 53.1\ + +lemma one_continuous: + "1 \ Continuous" + by (simp add: Continuous_def continuous_def image_def) + +lemma continuous_dist_ascending_chain: + assumes "x \ Continuous" + and "ascending_chain f" + shows "x * (SUP n::nat . f n) = (SUP n::nat . x * f n)" +proof - + have "directed (range f)" + by (simp add: assms(2) ascending_chain_directed) + hence "x * (SUP n::nat . f n) = (SUP y\range f . x * y)" + using assms(1) continuous_Continuous continuous_def by auto + thus ?thesis + by (simp add: range_composition) +qed + +text \Theorem 53.1\ + +lemma assertion_continuous: + assumes "x \ assertion" + shows "x \ Continuous" +proof - + have 1: "x = (x * top) \ 1" + using assms assertion_prop by auto + have "\Y . directed Y \ x * (SUP y\Y . y) = (SUP y\Y . x * y)" + proof (rule allI, rule impI) + fix Y + assume "directed Y" (* assumption not used *) + have "x * (SUP y\Y . y) = (x * top) \ (SUP y\Y . y)" + using 1 by (smt inf_comp mult.assoc mult.left_neutral top_comp) + also have "... = (SUP y\Y . 
(x * top) \ y)" + by (simp add: inf_Sup) + finally show "x * (SUP y\Y . y) = (SUP y\Y . x * y)" + using 1 by (smt inf_comp mult.left_neutral mult.assoc top_comp SUP_cong) + qed + thus ?thesis + by (simp add: continuous_def Continuous_def) +qed + +text \Theorem 53.1\ + +lemma assumption_continuous: + assumes "x \ assumption" + shows "x \ Continuous" +proof - + have 1: "x = (x * bot) \ 1" + by (simp add: assms assumption_prop) + have "\Y . directed Y \ x * (SUP y\Y . y) = (SUP y\Y . x * y)" + proof (rule allI, rule impI) + fix Y + assume 2: "directed Y" + have "x * (SUP y\Y . y) = (x * bot) \ (SUP y\Y . y)" + using 1 by (smt sup_comp mult.assoc mult.left_neutral bot_comp) + also have "... = (SUP y\Y . (x * bot) \ y)" + using 2 by (smt (verit, ccfv_threshold) sup_SUP SUP_cong directed_def) + finally show "x * (SUP y\Y . y) = (SUP y\Y . x * y)" + using 1 by (metis sup_comp mult.left_neutral mult.assoc bot_comp SUP_cong) + qed + thus ?thesis + by (simp add: continuous_def Continuous_def) +qed + +text \Theorem 53.1\ + +lemma mult_continuous: + assumes "x \ Continuous" + and "y \ Continuous" + shows "x * y \ Continuous" +proof - + have "\Y. directed Y \ x * y * (SUP y\Y . y) = (SUP z\Y . x * y * z)" + proof (rule allI, rule impI) + fix Y + assume "directed Y" + hence "x * y * (SUP w\Y . w) = (SUP z\Y . x * (y * z))" + by (metis assms continuous_Continuous continuous_def directed_left_mult image_ident image_image mult_assoc) + thus "x * y * (SUP y\Y . y) = (SUP z\Y . x * y * z)" + using mult_assoc by auto + qed + thus ?thesis + using Continuous_def continuous_def by blast +qed + +text \Theorem 53.1\ + +lemma sup_continuous: + "x \ Continuous \ y \ Continuous \ x \ y \ Continuous" + by (smt SUP_cong SUP_sup_distrib continuous_Continuous continuous_def sup_comp) + +text \Theorem 53.1\ + +lemma inf_continuous: + assumes "x \ Continuous" + and "y \ Continuous" + shows "x \ y \ Continuous" +proof - + have "\Y. directed Y \ (x \ y) * (SUP y\Y . y) = (SUP z\Y . 
(x \ y) * z)" + proof (rule allI, rule impI) + fix Y + assume 1: "directed Y" + have 2: "(SUP w\Y . SUP z\Y . (x * w) \ (y * z)) \ (SUP z\Y . (x * z) \ (y * z))" + proof (intro SUP_least) + fix w z + assume "w \ Y" and "z \ Y" + from this obtain v where 3: "v\Y \ w \ v \ z \ v" + using 1 by (meson directed_def) + hence "x * w \ (y * z) \ (x * v) \ (y * v)" + by (meson inf.sup_mono le_comp) + thus "x * w \ (y * z) \ (SUP z\Y . (x * z) \ (y * z))" + using 3 by (meson SUP_upper2) + qed + have "(SUP z\Y . (x * z) \ (y * z)) \ (SUP w\Y . SUP z\Y . (x * w) \ (y * z))" + apply (rule SUP_least) + by (meson SUP_upper SUP_upper2) + hence "(SUP w\Y . SUP z\Y . (x * w) \ (y * z)) = (SUP z\Y . (x \ y) * z)" + using 2 order.antisym inf_comp by auto + thus "(x \ y) * (SUP y\Y . y) = (SUP z\Y . (x \ y) * z)" + using 1 by (metis assms inf_comp continuous_Continuous continuous_def SUP_inf_distrib2) + qed + thus ?thesis + using Continuous_def continuous_def by blast +qed + +text \Theorem 53.1\ + +lemma dual_star_continuous: + assumes "x \ Continuous" + shows "x ^ \ \ Continuous" +proof - + have "\Y. directed Y \ (x ^ \) * (SUP y\Y . y) = (SUP z\Y . (x ^ \) * z)" + proof (rule allI, rule impI) + fix Y + assume "directed Y" + hence "directed ((*) (x ^ \) ` Y)" + by (simp add: directed_left_mult) + hence "x * (SUP y\Y . (x ^ \) * y) = (SUP y\Y . x * ((x ^ \) * y))" + by (metis assms continuous_Continuous continuous_def image_ident image_image) + also have "... = (SUP y\Y . x * (x ^ \) * y)" + using mult_assoc by auto + also have "... \ (SUP y\Y . (x ^ \) * y)" + apply (rule SUP_least) + by (simp add: SUP_upper2 dual_star_comp_fix) + finally have "x * (SUP y\Y . (x ^ \) * y) \ (SUP y\Y . y) \ (SUP y\Y . (x ^ \) * y)" + apply (rule sup_least) + by (metis SUP_mono' dual_star_comp_fix sup.cobounded1 sup_commute) + thus "(x ^ \) * (SUP y\Y . y) = (SUP z\Y . 
(x ^ \) * z)" + by (meson SUP_least SUP_upper order.antisym dual_star_least le_comp) + qed + thus ?thesis + using Continuous_def continuous_def by blast +qed + +text \Theorem 53.1\ + +lemma omega_continuous: + assumes "x \ Continuous" + shows "x ^ \ \ Continuous" +proof - + have "\Y. directed Y \ (x ^ \) * (SUP y\Y . y) = (SUP z\Y . (x ^ \) * z)" + proof (rule allI, rule impI) + fix Y + assume 1: "directed Y" + hence "directed ((*) (x ^ \) ` Y)" + using directed_left_mult by auto + hence "x * (SUP y\Y . (x ^ \) * y) = (SUP y\Y . x * ((x ^ \) * y))" + by (metis assms continuous_Continuous continuous_def image_ident image_image) + hence 2: "x * (SUP y\Y . (x ^ \) * y) = (SUP y\Y . x * (x ^ \) * y)" + by (simp add: mult_assoc) + have "(SUP y\Y . x * (x ^ \) * y) \ (SUP y\Y . y) = (SUP w\Y . SUP z\Y . (x * (x ^ \) * w) \ z)" + using SUP_inf_distrib2 by blast + hence "x * (SUP y\Y . (x ^ \) * y) \ (SUP y\Y . y) = (SUP w\Y . SUP z\Y . (x * (x ^ \) * w) \ z)" + using 2 by auto + also have "... \ (SUP y\Y . (x ^ \) * y)" + proof (intro SUP_least) + fix w z + assume "w \ Y" and "z \ Y" + from this obtain v where 3: "v\Y \ w \ v \ z \ v" + using 1 by (meson directed_def) + hence "x * x ^ \ * w \ z \ x ^ \ * v" + using inf.sup_mono le_comp omega_comp_fix by auto + thus "x * x ^ \ * w \ z \ (SUP y\Y . (x ^ \) * y)" + using 3 by (meson SUP_upper2) + qed + finally show "(x ^ \) * (SUP y\Y . y) = (SUP z\Y . (x ^ \) * z)" + by (meson SUP_least SUP_upper order.antisym omega_least le_comp) + qed + thus ?thesis + using Continuous_def continuous_def by blast +qed + +definition "co_continuous x \ (\Y . co_directed Y \ x * (INF y\Y . y) = (INF y\Y . x * y))" + +definition "Co_continuous \ { x . 
co_continuous x }" + +lemma directed_dual: + "directed X \ co_directed (dual ` X)" + by (simp add: directed_def co_directed_def dual_le[THEN sym]) + +lemma dual_dual_image: + "dual ` dual ` X = X" + by (simp add: image_comp) + +lemma continuous_dual: + "continuous x \ co_continuous (x ^ o)" +proof (unfold continuous_def co_continuous_def, rule iffI) + assume 1: "\Y. directed Y \ x * (SUP y\Y . y) = (SUP y\Y . x * y)" + show "\Y. co_directed Y \ x ^ o * (INF y\Y . y) = (INF y\Y . x ^ o * y)" + proof (rule allI, rule impI) + fix Y + assume "co_directed Y" + hence "x ^ o * (INF y\Y . y) = (INF y\(dual ` Y) . (x * y) ^ o)" + using 1 by (metis dual_dual_image dual_SUP image_ident image_image dual_comp directed_dual) + also have "... = (INF y\(dual ` Y) . x ^ o * y ^ o)" + by (meson dual_comp) + also have "... = (INF y\Y . x ^ o * y)" + by (simp add: image_image) + finally show "x ^ o * (INF y\Y . y) = (INF y\Y . x ^ o * y)" + . + qed +next + assume 2: "\Y. co_directed Y \ x ^ o * (INF y\Y . y) = (INF y\Y . x ^ o * y)" + show "\Y. directed Y \ x * (SUP y\Y . y) = (SUP y\Y . x * y)" + proof (rule allI, rule impI) + fix Y + assume "directed Y" + hence "x * (SUP y\Y . y) = (SUP y\(dual ` Y) . (x ^ o * y) ^ o)" + using 2 by (metis directed_dual dual_dual_image image_ident image_image dual_SUP dual_comp dual_dual) + also have "... = (SUP y\(dual ` Y) . x * y ^ o)" + using dual_comp dual_dual by auto + also have "... = (SUP y\Y . x * y)" + by (simp add: image_image) + finally show "x * (SUP y\Y . y) = (SUP y\Y . x * y)" + . 
+ qed +qed + +lemma co_continuous_Co_continuous: + "co_continuous x \ x \ Co_continuous" + by (simp add: Co_continuous_def) + +text \Theorem 53.1 and Theorem 53.2\ + +lemma Continuous_dual: + "x \ Continuous \ x ^ o \ Co_continuous" + by (simp add: Co_continuous_def Continuous_def continuous_dual) + +text \Theorem 53.2\ + +lemma one_co_continuous: + "1 \ Co_continuous" + using Continuous_dual one_continuous by auto + +lemma ascending_chain_dual: + "ascending_chain f \ descending_chain (dual o f)" + using ascending_chain_def descending_chain_def dual_le by auto + +lemma co_continuous_dist_descending_chain: + assumes "x \ Co_continuous" + and "descending_chain f" + shows "x * (INF n::nat . f n) = (INF n::nat . x * f n)" +proof - + have "x ^ o * (SUP n::nat . (dual o f) n) = (SUP n::nat . x ^ o * (dual o f) n)" + by (smt assms Continuous_dual SUP_cong ascending_chain_dual continuous_dist_ascending_chain descending_chain_def dual_dual o_def) + thus ?thesis + by (smt INF_cong dual_SUP dual_comp dual_dual o_def) +qed + +text \Theorem 53.2\ + +lemma assertion_co_continuous: + "x \ assertion \ x \ Co_continuous" + by (smt Continuous_dual assert_iff_assume assumption_continuous dual_dual) + +text \Theorem 53.2\ + +lemma assumption_co_continuous: + "x \ assumption \ x \ Co_continuous" + by (smt Continuous_dual assert_iff_assume assertion_continuous dual_dual) + +text \Theorem 53.2\ + +lemma mult_co_continuous: + "x \ Co_continuous \ y \ Co_continuous \ x * y \ Co_continuous" + by (smt Continuous_dual dual_comp dual_dual mult_continuous) + +text \Theorem 53.2\ + +lemma sup_co_continuous: + "x \ Co_continuous \ y \ Co_continuous \ x \ y \ Co_continuous" + by (smt Continuous_dual dual_sup dual_dual inf_continuous) + +text \Theorem 53.2\ + +lemma inf_co_continuous: + "x \ Co_continuous \ y \ Co_continuous \ x \ y \ Co_continuous" + by (smt Continuous_dual dual_inf dual_dual sup_continuous) + +text \Theorem 53.2\ + +lemma dual_omega_co_continuous: + "x \ Co_continuous \ x ^ \ \ 
Co_continuous" + by (smt Continuous_dual dual_omega_def dual_dual omega_continuous) + +text \Theorem 53.2\ + +lemma star_co_continuous: + "x \ Co_continuous \ x ^ * \ Co_continuous" + by (smt Continuous_dual dual_star_def dual_dual dual_star_continuous) + +lemma dual_omega_iterate: + assumes "y \ Co_continuous" + shows "y ^ \ * z = (INF n::nat . ((\x . y * x \ z) ^ n) top)" +proof (rule order.antisym) + show "y ^ \ * z \ (INF n::nat . ((\x . y * x \ z) ^ n) top)" + proof (rule INF_greatest) + fix n + show "y ^ \ * z \ ((\x. y * x \ z) ^ n) top" + apply (induct n) + apply (metis power_zero_id id_def top_greatest) + by (smt dual_omega_comp_fix le_comp mult_assoc order_refl sup_mono power_succ_unfold_ext) + qed +next + have 1: "descending_chain (\n . ((\x. y * x \ z) ^ n) top)" + proof (unfold descending_chain_def, rule allI) + fix n + show "((\x. y * x \ z) ^ Suc n) top \ ((\x. y * x \ z) ^ n) top" + apply (induct n) + apply (metis power_zero_id id_def top_greatest) + by (smt power_succ_unfold_ext sup_mono order_refl le_comp) + qed + have "(INF n. ((\x. y * x \ z) ^ n) top) \ (INF n. ((\x. y * x \ z) ^ Suc n) top)" + apply (rule INF_greatest) + apply (unfold power_succ_unfold_ext) + by (smt power_succ_unfold_ext INF_lower UNIV_I) + thus "(INF n. ((\x. y * x \ z) ^ n) top) \ y ^ \ * z" + using 1 by (smt assms INF_cong co_continuous_dist_descending_chain power_succ_unfold_ext sup_INF sup_commute dual_omega_greatest) +qed + +lemma dual_omega_iterate_one: + "y \ Co_continuous \ y ^ \ = (INF n::nat . ((\x . 
y * x \ 1) ^ n) top)" + by (metis dual_omega_iterate mult.right_neutral) + +subclass ccpo + apply unfold_locales + apply (simp add: Sup_upper) + using Sup_least by auto + +end + +class post_mbt_algebra_ext = post_mbt_algebra + + assumes post_sub_fusion: "post 1 * neg_assume q \ post (neg_assume q ^ o)" +begin + +lemma post_fusion: + "post (neg_assume q ^ o) = post 1 * neg_assume q" + using order.antisym neg_assumption post_dual_below_post_one post_sub_fusion by auto + +lemma post_dual_post_one: + "q \ assumption \ post 1 * q \ post (q ^ o)" + by (metis assumption_neg_assume post_sub_fusion) + +end + +instance MonoTran :: (complete_boolean_algebra) post_mbt_algebra_ext +proof + fix q :: "'a MonoTran" + show "post 1 * neg_assume q \ post (neg_assume q ^ o)" + proof (unfold neg_assume_def, transfer) + fix f :: "'a \ 'a" + assume "mono f" + have "\x. top \ -f bot \ x \ \ f bot \ x \ top \ bot" + by (metis (no_types, lifting) double_compl inf.sup_bot_left inf_compl_bot sup.order_iff sup_bot_left sup_commute sup_inf_distrib1 top.extremum_uniqueI) + hence "post_fun top \ (dual_fun f \ top) \ id \ post_fun (f bot)" + by (simp add: dual_fun_def le_fun_def post_fun_def) + thus "post_fun (id top) \ (dual_fun f \ top) \ id \ post_fun (dual_fun ((dual_fun f \ top) \ id) top)" + by simp + qed +qed + +class complete_mbt_algebra_ext = complete_mbt_algebra + post_mbt_algebra_ext + +instance MonoTran :: (complete_boolean_algebra) complete_mbt_algebra_ext .. 
+ +end + diff --git a/thys/Correctness_Algebras/Monotonic_Boolean_Transformers_Instances.thy b/thys/Correctness_Algebras/Monotonic_Boolean_Transformers_Instances.thy new file mode 100644 --- /dev/null +++ b/thys/Correctness_Algebras/Monotonic_Boolean_Transformers_Instances.thy @@ -0,0 +1,653 @@ +(* Title: Instances of Monotonic Boolean Transformers + Author: Walter Guttmann + Maintainer: Walter Guttmann +*) + +section \Instances of Monotonic Boolean Transformers\ + +theory Monotonic_Boolean_Transformers_Instances + +imports Monotonic_Boolean_Transformers Pre_Post_Modal General_Refinement_Algebras + +begin + +sublocale mbt_algebra < mbta: bounded_idempotent_left_semiring + apply unfold_locales + apply (simp add: le_comp) + apply (simp add: sup_comp) + apply simp + apply simp + apply simp + apply simp + by (simp add: mult_assoc) + +sublocale mbt_algebra < mbta_dual: bounded_idempotent_left_semiring where less = greater and less_eq = greater_eq and sup = inf and bot = top and top = bot + apply unfold_locales + using inf.bounded_iff inf_le1 inf_le2 mbta.mult_right_isotone apply simp + using inf_comp apply blast + apply simp + apply simp + apply simp + apply simp + by (simp add: mult_assoc) + +sublocale mbt_algebra < mbta: bounded_general_refinement_algebra where star = dual_star and Omega = dual_omega + apply unfold_locales + using dual_star_fix sup_commute apply force + apply (simp add: dual_star_least) + using dual_omega_fix sup_commute apply force + by (simp add: dual_omega_greatest sup_commute) + +sublocale mbt_algebra < mbta_dual: bounded_general_refinement_algebra where less = greater and less_eq = greater_eq and sup = inf and bot = top and Omega = omega and top = bot + apply unfold_locales + using order.eq_iff star_fix apply simp + using star_greatest apply simp + using inf_commute omega_fix apply fastforce + by (simp add: inf.sup_monoid.add_commute omega_least) + +text \Theorem 50.9(b)\ + +sublocale mbt_algebra < mbta: left_conway_semiring_L where circ = 
dual_star and L = bot + apply unfold_locales + apply (simp add: mbta.star_one) + by simp + +text \Theorem 50.8(a)\ + +sublocale mbt_algebra < mbta_dual: left_conway_semiring_L where circ = omega and less = greater and less_eq = greater_eq and sup = inf and bot = top and L = bot + apply unfold_locales + apply simp + by simp + +text \Theorem 50.8(b)\ + +sublocale mbt_algebra < mbta_fix: left_conway_semiring_L where circ = dual_omega and L = top + apply unfold_locales + apply (simp add: mbta.Omega_one) + by simp + +text \Theorem 50.9(a)\ + +sublocale mbt_algebra < mbta_fix_dual: left_conway_semiring_L where circ = star and less = greater and less_eq = greater_eq and sup = inf and bot = top and L = top + apply unfold_locales + apply (simp add: mbta_dual.star_one) + by simp + +sublocale mbt_algebra < mbta: left_kleene_conway_semiring where circ = dual_star and star = dual_star .. + +sublocale mbt_algebra < mbta_dual: left_kleene_conway_semiring where circ = omega and less = greater and less_eq = greater_eq and sup = inf and bot = top .. + +sublocale mbt_algebra < mbta_fix: left_kleene_conway_semiring where circ = dual_omega and star = dual_star .. + +sublocale mbt_algebra < mbta_fix_dual: left_kleene_conway_semiring where circ = star and less = greater and less_eq = greater_eq and sup = inf and bot = top .. 
+ +sublocale mbt_algebra < mbta: tests where uminus = neg_assert + apply unfold_locales + apply (simp add: mult_assoc) + apply (metis neg_assertion assertion_inf_comp_eq inf_commute) + subgoal for x y + proof - + have "(x ^ o * bot \ y * top) \ ((x ^ o * bot \ y ^ o * bot) \ 1) = x ^ o * bot \ 1" + by (metis inf_assoc dual_neg sup_bot_right sup_inf_distrib1) + thus ?thesis + by (simp add: dual_inf dual_comp inf_comp sup_comp neg_assert_def) + qed + apply (simp add: neg_assertion) + using assertion_inf_comp_eq inf_uminus neg_assertion apply force + apply (simp add: neg_assert_def) + apply (simp add: dual_inf dual_comp sup_comp neg_assert_def inf_sup_distrib2) + apply (simp add: assertion_inf_comp_eq inf.absorb_iff1 neg_assertion) + using inf.less_le_not_le by blast + +sublocale mbt_algebra < mbta_dual: tests where less = greater and less_eq = greater_eq and sup = inf and uminus = neg_assume and bot = top + apply unfold_locales + apply (simp add: mult_assoc) + apply (metis neg_assumption assumption_sup_comp_eq sup_commute) + subgoal for x y + proof - + have "(x ^ o * top \ y * bot) \ ((x ^ o * top \ y ^ o * top) \ 1) = x ^ o * top \ 1" + by (metis dual_dual dual_neg_top inf_sup_distrib1 inf_top_right sup_assoc) + thus ?thesis + by (simp add: dual_comp dual_sup inf_comp sup_comp neg_assume_def) + qed + using assumption_neg_assume comp_assumption neg_assumption apply blast + using assumption_sup_comp_eq inf_uminus_assume neg_assumption apply fastforce + apply (simp add: neg_assume_def) + apply (simp add: dual_inf dual_comp dual_sup inf_comp sup_comp neg_assume_def sup_inf_distrib2) + apply (simp add: assumption_sup_comp_eq neg_assumption sup.absorb_iff1) + using inf.less_le_not_le by auto + +text \Theorem 51.2\ + +sublocale mbt_algebra < mbta: bounded_relative_antidomain_semiring where d = "\x . 
(x * top) \ 1" and uminus = neg_assert and Z = bot + apply unfold_locales + subgoal for x + proof - + have "x ^ o * bot \ x \ bot" + by (metis dual_neg eq_refl inf.commute inf_mono mbta.top_right_mult_increasing) + thus ?thesis + by (simp add: neg_assert_def inf_comp) + qed + apply (simp add: dual_comp dual_inf neg_assert_def sup_comp mult_assoc) + apply simp + apply simp + apply (simp add: dual_inf dual_comp sup_comp neg_assert_def inf_sup_distrib2) + apply (simp add: dual_sup inf_comp neg_assert_def inf.assoc) + by (simp add: dual_inf dual_comp sup_comp neg_assert_def) + +text \Theorem 51.1\ + +sublocale mbt_algebra < mbta_dual: bounded_relative_antidomain_semiring where d = "\x . (x * bot) \ 1" and less = greater and less_eq = greater_eq and sup = inf and uminus = neg_assume and bot = top and top = bot and Z = top + apply unfold_locales + subgoal for x + proof - + have "top \ x ^ o * top \ x" + by (metis dual_dual dual_neg_top mbta_dual.top_right_mult_increasing sup_commute sup_left_isotone) + thus ?thesis + by (simp add: sup_comp neg_assume_def) + qed + using assume_bot dual_comp neg_assume_def sup_comp mult_assoc apply simp + apply simp + apply simp + apply (simp add: dual_inf dual_comp dual_sup inf_comp sup_comp neg_assume_def sup_inf_distrib2) + apply (simp add: dual_inf sup_comp neg_assume_def sup.assoc) + by (simp add: dual_comp dual_sup inf_comp neg_assume_def) + +sublocale mbt_algebra < mbta: relative_domain_semiring_split where d = "\x . (x * top) \ 1" and Z = bot + apply unfold_locales + by simp + +sublocale mbt_algebra < mbta_dual: relative_domain_semiring_split where d = "\x . (x * bot) \ 1" and less = greater and less_eq = greater_eq and sup = inf and bot = top and Z = top + apply unfold_locales + by simp + +sublocale mbt_algebra < mbta: diamond_while where box = "\x y . neg_assert (x * neg_assert y)" and circ = dual_star and d = "\x . (x * top) \ 1" and diamond = "\x y . (x * y * top) \ 1" and ite = "\x p y . 
(p * x) \ (neg_assert p * y)" and pre = "\x y . wpt (x * y)" and uminus = neg_assert and while = "\p x . ((p * x) ^ \) * neg_assert p" and Z = bot + apply unfold_locales + apply simp + apply simp + apply (rule wpt_def) + apply simp + by simp + +sublocale mbt_algebra < mbta_dual: box_while where box = "\x y . neg_assume (x * neg_assume y)" and circ = omega and d = "\x . (x * bot) \ 1" and diamond = "\x y . (x * y * bot) \ 1" and ite = "\x p y . (p * x) \ (neg_assume p * y)" and less = greater and less_eq = greater_eq and sup = inf and pre = "\x y . wpb (x ^ o * y)" and uminus = neg_assume and while = "\p x . ((p * x) ^ \) * neg_assume p" and bot = top and top = bot and Z = top + apply unfold_locales + apply simp + apply simp + apply (metis assume_bot dual_comp mbta_dual.a_mult_d_2 mbta_dual.d_def neg_assume_def wpb_def mult_assoc) + apply simp + by simp + +sublocale mbt_algebra < mbta_fix: diamond_while where box = "\x y . neg_assert (x * neg_assert y)" and circ = dual_omega and d = "\x . (x * top) \ 1" and diamond = "\x y . (x * y * top) \ 1" and ite = "\x p y . (p * x) \ (neg_assert p * y)" and pre = "\x y . wpt (x * y)" and uminus = neg_assert and while = "\p x . ((p * x) ^ \) * neg_assert p" and Z = bot + apply unfold_locales + by simp_all + +sublocale mbt_algebra < mbta_fix_dual: box_while where box = "\x y . neg_assume (x * neg_assume y)" and circ = star and d = "\x . (x * bot) \ 1" and diamond = "\x y . (x * y * bot) \ 1" and ite = "\x p y . (p * x) \ (neg_assume p * y)" and less = greater and less_eq = greater_eq and sup = inf and pre = "\x y . wpb (x ^ o * y)" and uminus = neg_assume and while = "\p x . ((p * x) ^ *) * neg_assume p" and bot = top and top = bot and Z = top + apply unfold_locales + by simp_all + +sublocale mbt_algebra < mbta_pre: box_while where box = "\x y . neg_assert (x * neg_assert y)" and circ = dual_star and d = "\x . (x * top) \ 1" and diamond = "\x y . (x * y * top) \ 1" and ite = "\x p y . 
(p * x) \ (neg_assert p * y)" and pre = "\x y . wpt (x ^ o * y)" and uminus = neg_assert and while = "\p x . ((p * x) ^ \) * neg_assert p" and Z = bot + apply unfold_locales + apply (metis dual_comp dual_dual dual_top inf_top_right mbta_dual.mult_right_dist_sup mult_1_left neg_assert_def top_comp wpt_def mult_assoc) + apply simp + by simp + +sublocale mbt_algebra < mbta_pre_dual: diamond_while where box = "\x y . neg_assume (x * neg_assume y)" and circ = omega and d = "\x . (x * bot) \ 1" and diamond = "\x y . (x * y * bot) \ 1" and ite = "\x p y . (p * x) \ (neg_assume p * y)" and less = greater and less_eq = greater_eq and sup = inf and pre = "\x y . wpb (x * y)" and uminus = neg_assume and while = "\p x . ((p * x) ^ \) * neg_assume p" and bot = top and top = bot and Z = top + apply unfold_locales + apply (simp add: wpb_def) + apply simp + by simp + +sublocale mbt_algebra < mbta_pre_fix: box_while where box = "\x y . neg_assert (x * neg_assert y)" and circ = dual_omega and d = "\x . (x * top) \ 1" and diamond = "\x y . (x * y * top) \ 1" and ite = "\x p y . (p * x) \ (neg_assert p * y)" and pre = "\x y . wpt (x ^ o * y)" and uminus = neg_assert and while = "\p x . ((p * x) ^ \) * neg_assert p" and Z = bot + apply unfold_locales + by simp_all + +sublocale mbt_algebra < mbta_pre_fix_dual: diamond_while where box = "\x y . neg_assume (x * neg_assume y)" and circ = star and d = "\x . (x * bot) \ 1" and diamond = "\x y . (x * y * bot) \ 1" and ite = "\x p y . (p * x) \ (neg_assume p * y)" and less = greater and less_eq = greater_eq and sup = inf and pre = "\x y . wpb (x * y)" and uminus = neg_assume and while = "\p x . ((p * x) ^ *) * neg_assume p" and bot = top and top = bot and Z = top + apply unfold_locales + by simp_all + +sublocale post_mbt_algebra < mbta: pre_post_spec_Hd where box = "\x y . neg_assert (x * neg_assert y)" and d = "\x . (x * top) \ 1" and diamond = "\x y . (x * y * top) \ 1" and pre = "\x y . wpt (x * y)" and pre_post = "\p q . 
p * post q" and uminus = neg_assert and Hd = "post 1" and Z = bot + apply unfold_locales + apply (metis mult.assoc mult.left_neutral post_1) + apply (metis inf.commute inf_top_right mult.assoc mult.left_neutral post_2) + apply (metis neg_assertion assertion_disjunctive disjunctiveD) + subgoal for p x q + proof + let ?pt = "neg_assert p" + let ?qt = "neg_assert q" + assume "?pt \ wpt (x * ?qt)" + hence "?pt * post ?qt \ x * ?qt * top * post ?qt \ post ?qt" + by (metis mbta.mult_left_isotone wpt_def inf_comp mult.left_neutral) + thus "?pt * post ?qt \ x" + by (smt mbta.top_left_zero mult.assoc post_2 order_trans) + next + let ?pt = "neg_assert p" + let ?qt = "neg_assert q" + assume "?pt * post ?qt \ x" + thus "?pt \ wpt (x * ?qt)" + by (smt mbta.a_d_closed post_1 mult_assoc mbta.diamond_left_isotone wpt_def) + qed + by (simp add: mbta_dual.mult_right_dist_sup) + +sublocale post_mbt_algebra < mbta_dual: pre_post_spec_H where box = "\x y . neg_assume (x * neg_assume y)" and d = "\x . (x * bot) \ 1" and diamond = "\x y . (x * y * bot) \ 1" and less = greater and less_eq = greater_eq and sup = inf and pre = "\x y . wpb (x ^ o * y)" and pre_post = "\p q . 
(p ^ o) * post (q ^ o)" and uminus = neg_assume and bot = top and H = "post 1" and top = bot and Z = top +proof + fix p x q + let ?pt = "neg_assume p" + let ?qt = "neg_assume q" + show "wpb (x ^ o * ?qt) \ ?pt \ ?pt ^ o * post (?qt ^ o) \ x" + proof + assume "wpb (x ^ o * ?qt) \ ?pt" + hence "?pt ^ o * post (?qt ^ o) \ (x * (?qt ^ o) * top \ 1) * post (?qt ^ o)" + by (smt wpb_def dual_le dual_comp dual_dual dual_one dual_sup dual_top mbta.mult_left_isotone) + thus "?pt ^ o * post (?qt ^ o) \ x" + by (smt inf_comp mult_assoc top_comp mult.left_neutral post_2 order_trans) + next + assume 1: "?pt ^ o * post (?qt ^ o) \ x" + have "?pt ^ o = ?pt ^ o * post (?qt ^ o) * (?qt ^ o) * top \ 1" + by (metis assert_iff_assume assertion_prop dual_dual mult_assoc neg_assumption post_1) + thus "wpb (x ^ o * ?qt) \ ?pt" + using 1 by (smt dual_comp dual_dual dual_le dual_one dual_sup dual_top wpb_def mbta.diamond_left_isotone) + qed + show "post 1 * top = top" + by (simp add: mbta.Hd_total) + have "x * ?qt * bot \ (post 1 * neg_assume ?qt) = (x * neg_assume ?qt ^ o * top \ post 1) * neg_assume ?qt" + by (simp add: assume_bot mbta_dual.mult_right_dist_sup mult_assoc) + also have "... \ x * neg_assume ?qt ^ o" + by (smt assumption_assertion_absorb dual_comp dual_dual mbta.mult_left_isotone mult.right_neutral mult_assoc neg_assumption post_2) + also have "... \ x" + by (metis dual_comp dual_dual dual_le mbta.mult_left_sub_dist_sup_left mult.right_neutral neg_assume_def sup.commute) + finally show "x * ?qt * bot \ (post 1 * neg_assume ?qt) \ x" + . +qed + +sublocale post_mbt_algebra < mbta_pre: pre_post_spec_H where box = "\x y . neg_assert (x * neg_assert y)" and d = "\x . (x * top) \ 1" and diamond = "\x y . (x * y * top) \ 1" and pre = "\x y . wpt (x ^ o * y)" and pre_post = "\p q . 
p ^ o * (post q ^ o)" and uminus = neg_assert and H = "post 1 ^ o" and Z = bot +proof + fix p x q + let ?pt = "neg_assert p" + let ?qt = "neg_assert q" + show "?pt \ wpt (x ^ o * ?qt) \ x \ ?pt ^ o * (post ?qt ^ o)" + proof + assume "?pt \ wpt (x ^ o * ?qt)" + hence "?pt * post ?qt \ (x ^ o * ?qt * top \ 1) * post ?qt" + by (simp add: mbta_dual.mult_left_isotone wpt_def) + also have "... \ x ^ o" + using mbta.pre_pre_post wpt_def by auto + finally show "x \ ?pt ^ o * (post ?qt ^ o)" + by (metis dual_le dual_comp dual_dual) + next + assume "x \ ?pt ^ o * (post ?qt ^ o)" + hence "x * ?qt ^ o * bot \ 1 \ (?pt * post ?qt * ?qt * top \ 1) ^ o" + by (smt (z3) inf.absorb_iff1 sup_inf_distrib2 dual_comp dual_inf dual_one dual_top mbta.mult_left_isotone) + also have "... = ?pt ^ o" + by (simp add: mbta.diamond_a_export post_1) + finally show "?pt \ wpt (x ^ o * ?qt)" + by (smt dual_comp dual_dual dual_le dual_neg_top dual_one dual_sup dual_top wpt_def) + qed + show "post 1 ^ o * bot = bot" + by (metis dual_comp dual_top mbta.Hd_total) + have "x ^ o * ?qt ^ o * bot \ (post 1 * neg_assert ?qt ^ o) \ x ^ o * neg_assert ?qt * neg_assert ?qt ^ o" + by (smt (verit, del_insts) bot_comp inf.commute inf_comp inf_top_left mbta.mult_left_isotone mult.left_neutral mult_assoc neg_assert_def post_2) + also have "... \ x ^ o" + by (smt assert_iff_assume assumption_assertion_absorb dual_comp dual_dual le_comp mbta.a_below_one mult_assoc neg_assertion mult_1_right) + finally show "x \ x * ?qt * top \ post 1 ^ o * neg_assert ?qt" + by (smt dual_comp dual_dual dual_inf dual_le dual_top) +qed + +sublocale post_mbt_algebra < mbta_pre_dual: pre_post_spec_Hd where box = "\x y . neg_assume (x * neg_assume y)" and d = "\x . (x * bot) \ 1" and diamond = "\x y . (x * y * bot) \ 1" and less = greater and less_eq = greater_eq and sup = inf and pre = "\x y . wpb (x * y)" and pre_post = "\p q . 
p * (post (q ^ o) ^ o)" and uminus = neg_assume and bot = top and Hd = "post 1 ^ o" and top = bot and Z = top + apply unfold_locales + apply (simp add: mbta_pre.H_zero_2) + apply (simp add: mbta_pre.H_greatest_finite) + apply (metis (no_types, lifting) dual_comp dual_dual dual_inf dual_top mbta_dual.mult_L_circ_mult mult_1_right neg_assume_def sup_commute sup_inf_distrib2) + subgoal for p x q + proof + let ?pt = "neg_assume p" + let ?qt = "neg_assume q" + assume "wpb (x * ?qt) \ ?pt" + hence "?pt ^ o * post (?qt ^ o) \ (x ^ o * ?qt ^ o * top \ 1) * post (?qt ^ o)" + by (smt dual_comp dual_dual dual_le dual_one dual_sup dual_top le_comp_right wpb_def) + also have "... \ x ^ o" + using mbta_dual.mult_right_dist_sup post_2 by force + finally show "x \ ?pt * post (?qt ^ o) ^ o" + by (smt dual_comp dual_dual dual_le) + next + let ?pt = "neg_assume p" + let ?qt = "neg_assume q" + assume "x \ ?pt * post (?qt ^ o) ^ o" + thus "wpb (x * ?qt) \ ?pt" + by (metis dual_comp dual_dual dual_le mbta_dual.pre_post_galois) + qed + by (simp add: sup_comp) + +sublocale post_mbt_algebra < mbta_dual: pre_post_spec_whiledo where ite = "\x p y . (p * x) \ (neg_assume p * y)" and less = greater and less_eq = greater_eq and sup = inf and pre = "\x y . wpb (x ^ o * y)" and pre_post = "\p q . (p ^ o) * post (q ^ o)" and uminus = neg_assume and while = "\p x . ((p * x) ^ \) * neg_assume p" and bot = top and top = bot .. + +sublocale post_mbt_algebra < mbta_fix_dual: pre_post_spec_whiledo where ite = "\x p y . (p * x) \ (neg_assume p * y)" and less = greater and less_eq = greater_eq and sup = inf and pre = "\x y . wpb (x ^ o * y)" and pre_post = "\p q . (p ^ o) * post (q ^ o)" and uminus = neg_assume and while = "\p x . ((p * x) ^ *) * neg_assume p" and bot = top and top = bot .. + +sublocale post_mbt_algebra < mbta_pre: pre_post_spec_whiledo where ite = "\x p y . (p * x) \ (neg_assert p * y)" and pre = "\x y . wpt (x ^ o * y)" and pre_post = "\p q . 
p ^ o * (post q ^ o)" and uminus = neg_assert and while = "\p x . ((p * x) ^ \) * neg_assert p" .. + +sublocale post_mbt_algebra < mbta_pre_fix: pre_post_spec_whiledo where ite = "\x p y . (p * x) \ (neg_assert p * y)" and pre = "\x y . wpt (x ^ o * y)" and pre_post = "\p q . p ^ o * (post q ^ o)" and uminus = neg_assert and while = "\p x . ((p * x) ^ \) * neg_assert p" .. + +sublocale post_mbt_algebra < mbta_dual: pre_post_L where box = "\x y . neg_assume (x * neg_assume y)" and circ = omega and d = "\x . (x * bot) \ 1" and diamond = "\x y . (x * y * bot) \ 1" and ite = "\x p y . (p * x) \ (neg_assume p * y)" and less = greater and less_eq = greater_eq and sup = inf and pre = "\x y . wpb (x ^ o * y)" and pre_post = "\p q . (p ^ o) * post (q ^ o)" and uminus = neg_assume and while = "\p x . ((p * x) ^ \) * neg_assume p" and bot = top and L = bot and top = bot and Z = top + apply unfold_locales + by simp + +sublocale post_mbt_algebra < mbta_fix_dual: pre_post_L where box = "\x y . neg_assume (x * neg_assume y)" and circ = star and d = "\x . (x * bot) \ 1" and diamond = "\x y . (x * y * bot) \ 1" and ite = "\x p y . (p * x) \ (neg_assume p * y)" and less = greater and less_eq = greater_eq and sup = inf and pre = "\x y . wpb (x ^ o * y)" and pre_post = "\p q . (p ^ o) * post (q ^ o)" and uminus = neg_assume and while = "\p x . ((p * x) ^ *) * neg_assume p" and bot = top and L = top and top = bot and Z = top + apply unfold_locales + by simp + +sublocale post_mbt_algebra < mbta_pre: pre_post_L where box = "\x y . neg_assert (x * neg_assert y)" and circ = dual_star and d = "\x . (x * top) \ 1" and diamond = "\x y . (x * y * top) \ 1" and ite = "\x p y . (p * x) \ (neg_assert p * y)" and pre = "\x y . wpt (x ^ o * y)" and pre_post = "\p q . p ^ o * (post q ^ o)" and star = dual_star and uminus = neg_assert and while = "\p x . 
((p * x) ^ \) * neg_assert p" and L = bot and Z = bot + apply unfold_locales + by simp + +sublocale post_mbt_algebra < mbta_pre_fix: pre_post_L where box = "\x y . neg_assert (x * neg_assert y)" and circ = dual_omega and d = "\x . (x * top) \ 1" and diamond = "\x y . (x * y * top) \ 1" and ite = "\x p y . (p * x) \ (neg_assert p * y)" and pre = "\x y . wpt (x ^ o * y)" and pre_post = "\p q . p ^ o * (post q ^ o)" and star = dual_star and uminus = neg_assert and while = "\p x . ((p * x) ^ \) * neg_assert p" and L = top and Z = bot + apply unfold_locales + by simp + +sublocale complete_mbt_algebra < mbta: complete_tests where uminus = neg_assert + apply unfold_locales + apply (smt mbta.test_set_def neg_assertion subset_eq Sup_assertion assertion_neg_assert) + apply (simp add: Sup_upper) + by (simp add: Sup_least) + +sublocale complete_mbt_algebra < mbta_dual: complete_tests where less = greater and less_eq = greater_eq and sup = inf and uminus = neg_assume and bot = top and Inf = Sup and Sup = Inf + apply unfold_locales + apply (smt mbta_dual.test_set_def neg_assumption subset_eq Inf_assumption assumption_neg_assume) + apply (simp add: Inf_lower) + by (simp add: Inf_greatest) + +sublocale complete_mbt_algebra < mbta: complete_antidomain_semiring where d = "\x . (x * top) \ 1" and uminus = neg_assert and Z = bot +proof + fix f :: "nat \ 'a" + let ?F = "dual ` {f n | n . True}" + show "ascending_chain f \ neg_assert (complete_tests.Sum Sup f) = complete_tests.Prod Inf (\n. neg_assert (f n))" + proof + have "neg_assert (complete_tests.Sum Sup f) = 1 \ (\x\?F . x * bot)" + using Inf_comp dual_Sup mbta.Sum_def neg_assert_def inf_commute by auto + also have "... = (\x\?F . 1 \ x * bot)" + apply (subst inf_Inf) + apply blast + by (simp add: image_image) + also have "... = \{f n ^ o * bot \ 1 | n . True}" + apply (rule arg_cong[where f="Inf"]) + using inf_commute by auto + also have "... = complete_tests.Prod Inf (\n. 
neg_assert (f n))" + using mbta.Prod_def neg_assert_def by auto + finally show "neg_assert (complete_tests.Sum Sup f) = complete_tests.Prod Inf (\n. neg_assert (f n))" + . + qed + show "descending_chain f \ neg_assert (complete_tests.Prod Inf f) = complete_tests.Sum Sup (\n. neg_assert (f n))" + proof + have "neg_assert (complete_tests.Prod Inf f) = 1 \ (\x\?F . x * bot)" + using Sup_comp dual_Inf mbta.Prod_def neg_assert_def inf_commute by auto + also have "... = (\x\?F . 1 \ x * bot)" + by (simp add: inf_Sup image_image) + also have "... = \{f n ^ o * bot \ 1 |n. True}" + apply (rule arg_cong[where f="Sup"]) + using inf_commute by auto + also have "... = complete_tests.Sum Sup (\n. neg_assert (f n))" + using mbta.Sum_def neg_assert_def by auto + finally show "neg_assert (complete_tests.Prod Inf f) = complete_tests.Sum Sup (\n. neg_assert (f n))" + . + qed +qed + +sublocale complete_mbt_algebra < mbta_dual: complete_antidomain_semiring where d = "\x . (x * bot) \ 1" and less = greater and less_eq = greater_eq and sup = inf and uminus = neg_assume and bot = top and Inf = Sup and Sup = Inf and Z = top +proof + fix f :: "nat \ 'a" + let ?F = "dual ` {f n | n . True}" + show "ord.ascending_chain greater_eq f \ neg_assume (complete_tests.Sum Inf f) = complete_tests.Prod Sup (\n. neg_assume (f n))" + proof + have "neg_assume (complete_tests.Sum Inf f) = 1 \ (\x\?F . x * top)" + using mbta_dual.Sum_def neg_assume_def dual_Inf Sup_comp sup_commute by auto + also have "... = (\x\?F . 1 \ x * top)" + apply (subst sup_Sup) + apply blast + by (simp add: image_image) + also have "... = \{f n ^ o * top \ 1 | n . True}" + apply (rule arg_cong[where f="Sup"]) + using sup_commute by auto + also have "... = complete_tests.Prod Sup (\n. neg_assume (f n))" + using mbta_dual.Prod_def neg_assume_def by auto + finally show "neg_assume (complete_tests.Sum Inf f) = complete_tests.Prod Sup (\n. neg_assume (f n))" + . 
+ qed + show "ord.descending_chain greater_eq f \ neg_assume (complete_tests.Prod Sup f) = complete_tests.Sum Inf (\n. neg_assume (f n))" + proof + have "neg_assume (complete_tests.Prod Sup f) = 1 \ (\x\?F . x * top)" + using mbta_dual.Prod_def neg_assume_def dual_Inf dual_Sup Inf_comp sup_commute by auto + also have "... = (\x\?F . 1 \ x * top)" + by (simp add: sup_Inf image_image) + also have "... = \{f n ^ o * top \ 1 |n. True}" + apply (rule arg_cong[where f="Inf"]) + using sup_commute by auto + also have "... = complete_tests.Sum Inf (\n. neg_assume (f n))" + using mbta_dual.Sum_def neg_assume_def by auto + finally show "neg_assume (complete_tests.Prod Sup f) = complete_tests.Sum Inf (\n. neg_assume (f n))" + . + qed +qed + +sublocale complete_mbt_algebra < mbta: diamond_while_program where box = "\x y . neg_assert (x * neg_assert y)" and circ = dual_star and d = "\x . (x * top) \ 1" and diamond = "\x y . (x * y * top) \ 1" and ite = "\x p y . (p * x) \ (neg_assert p * y)" and pre = "\x y . wpt (x * y)" and uminus = neg_assert and while = "\p x . ((p * x) ^ \) * neg_assert p" and Atomic_program = Continuous and Atomic_test = assertion and Z = bot + apply unfold_locales + apply (simp add: one_continuous) + by simp_all + +sublocale complete_mbt_algebra < mbta_dual: box_while_program where box = "\x y . neg_assume (x * neg_assume y)" and circ = omega and d = "\x . (x * bot) \ 1" and diamond = "\x y . (x * y * bot) \ 1" and ite = "\x p y . (p * x) \ (neg_assume p * y)" and less = greater and less_eq = greater_eq and sup = inf and pre = "\x y . wpb (x ^ o * y)" and uminus = neg_assume and while = "\p x . ((p * x) ^ \) * neg_assume p" and bot = top and Atomic_program = Continuous and Atomic_test = assumption and top = bot and Z = top + apply unfold_locales + apply (simp add: one_continuous) + by simp_all + +sublocale complete_mbt_algebra < mbta_fix: diamond_while_program where box = "\x y . neg_assert (x * neg_assert y)" and circ = dual_omega and d = "\x . 
(x * top) \ 1" and diamond = "\x y . (x * y * top) \ 1" and ite = "\x p y . (p * x) \ (neg_assert p * y)" and pre = "\x y . wpt (x * y)" and uminus = neg_assert and while = "\p x . ((p * x) ^ \) * neg_assert p" and Atomic_program = Co_continuous and Atomic_test = assertion and Z = bot + apply unfold_locales + apply (simp add: one_co_continuous) + by simp_all + +sublocale complete_mbt_algebra < mbta_fix_dual: box_while_program where box = "\x y . neg_assume (x * neg_assume y)" and circ = star and d = "\x . (x * bot) \ 1" and diamond = "\x y . (x * y * bot) \ 1" and ite = "\x p y . (p * x) \ (neg_assume p * y)" and less = greater and less_eq = greater_eq and sup = inf and pre = "\x y . wpb (x ^ o * y)" and uminus = neg_assume and while = "\p x . ((p * x) ^ *) * neg_assume p" and bot = top and Atomic_program = Co_continuous and Atomic_test = assumption and top = bot and Z = top + apply unfold_locales + apply (simp add: one_co_continuous) + by simp_all + +sublocale complete_mbt_algebra < mbta_pre: box_while_program where box = "\x y . neg_assert (x * neg_assert y)" and circ = dual_star and d = "\x . (x * top) \ 1" and diamond = "\x y . (x * y * top) \ 1" and ite = "\x p y . (p * x) \ (neg_assert p * y)" and pre = "\x y . wpt (x ^ o * y)" and uminus = neg_assert and while = "\p x . ((p * x) ^ \) * neg_assert p" and Atomic_program = Continuous and Atomic_test = assertion and Z = bot .. + +sublocale complete_mbt_algebra < mbta_pre_dual: diamond_while_program where box = "\x y . neg_assume (x * neg_assume y)" and circ = omega and d = "\x . (x * bot) \ 1" and diamond = "\x y . (x * y * bot) \ 1" and ite = "\x p y . (p * x) \ (neg_assume p * y)" and less = greater and less_eq = greater_eq and sup = inf and pre = "\x y . wpb (x * y)" and uminus = neg_assume and while = "\p x . ((p * x) ^ \) * neg_assume p" and bot = top and Atomic_program = Continuous and Atomic_test = assumption and top = bot and Z = top .. 
+ +sublocale complete_mbt_algebra < mbta_pre_fix: box_while_program where box = "\x y . neg_assert (x * neg_assert y)" and circ = dual_omega and d = "\x . (x * top) \ 1" and diamond = "\x y . (x * y * top) \ 1" and ite = "\x p y . (p * x) \ (neg_assert p * y)" and pre = "\x y . wpt (x ^ o * y)" and uminus = neg_assert and while = "\p x . ((p * x) ^ \) * neg_assert p" and Atomic_program = Co_continuous and Atomic_test = assertion and Z = bot .. + +sublocale complete_mbt_algebra < mbta_pre_fix_dual: diamond_while_program where box = "\x y . neg_assume (x * neg_assume y)" and circ = star and d = "\x . (x * bot) \ 1" and diamond = "\x y . (x * y * bot) \ 1" and ite = "\x p y . (p * x) \ (neg_assume p * y)" and less = greater and less_eq = greater_eq and sup = inf and pre = "\x y . wpb (x * y)" and uminus = neg_assume and while = "\p x . ((p * x) ^ *) * neg_assume p" and bot = top and Atomic_program = Co_continuous and Atomic_test = assumption and top = bot and Z = top .. + +text \Theorem 52\ + +sublocale complete_mbt_algebra < mbta: diamond_hoare_sound where box = "\x y . neg_assert (x * neg_assert y)" and circ = dual_star and d = "\x . (x * top) \ 1" and diamond = "\x y . (x * y * top) \ 1" and ite = "\x p y . (p * x) \ (neg_assert p * y)" and pre = "\x y . wpt (x * y)" and star = dual_star and uminus = neg_assert and while = "\p x . ((p * x) ^ \) * neg_assert p" and Atomic_program = Continuous and Atomic_test = assertion and Z = bot + apply unfold_locales + by (simp add: mbta.aL_one_circ mbta.star_one) + +text \Theorem 52\ + +sublocale complete_mbt_algebra < mbta_dual: box_hoare_sound where box = "\x y . neg_assume (x * neg_assume y)" and circ = omega and d = "\x . (x * bot) \ 1" and diamond = "\x y . (x * y * bot) \ 1" and ite = "\x p y . (p * x) \ (neg_assume p * y)" and less = greater and less_eq = greater_eq and sup = inf and pre = "\x y . wpb (x ^ o * y)" and uminus = neg_assume and while = "\p x . 
((p * x) ^ \) * neg_assume p" and bot = top and Atomic_program = Continuous and Atomic_test = assumption and Inf = Sup and Sup = Inf and top = bot and Z = top + apply unfold_locales + using mbta.top_greatest mbta.vector_bot_closed mbta_dual.aL_one_circ mbta_dual.a_top omega_one top_comp by auto + +text \Theorem 52\ + +sublocale complete_mbt_algebra < mbta_fix: diamond_hoare_sound_2 where box = "\x y . neg_assert (x * neg_assert y)" and circ = dual_omega and d = "\x . (x * top) \ 1" and diamond = "\x y . (x * y * top) \ 1" and ite = "\x p y . (p * x) \ (neg_assert p * y)" and pre = "\x y . wpt (x * y)" and star = dual_star and uminus = neg_assert and while = "\p x . ((p * x) ^ \) * neg_assert p" and Atomic_program = Co_continuous and Atomic_test = assertion and Z = bot +proof (unfold_locales, rule impI) + fix p q x + let ?pt = "neg_assert p" + let ?qt = "neg_assert q" + assume "neg_assert ?pt * ?qt \ x * ?qt * top \ 1" + hence "?qt * top \ x ^ \ * ?pt * top" + by (smt mbta.Omega_induct mbta.d_def mbta.d_mult_top mbta.mult_left_isotone mbta.shunting_top_1 mult.assoc) + thus "mbta_fix.aL * ?qt \ x ^ \ * ?pt * top \ 1" + by (smt (z3) inf.absorb_iff1 inf.sup_monoid.add_commute inf_comp inf_le2 inf_left_commute inf_top_left mbta_fix.aL_one_circ mbta_pre_dual.top_left_zero mult_1_left neg_assert_def mult.assoc) +qed + +text \Theorem 52\ + +sublocale complete_mbt_algebra < mbta_fix_dual: box_hoare_sound where box = "\x y . neg_assume (x * neg_assume y)" and circ = star and d = "\x . (x * bot) \ 1" and diamond = "\x y . (x * y * bot) \ 1" and ite = "\x p y . (p * x) \ (neg_assume p * y)" and less = greater and less_eq = greater_eq and sup = inf and pre = "\x y . wpb (x ^ o * y)" and uminus = neg_assume and while = "\p x . 
((p * x) ^ *) * neg_assume p" and bot = top and Atomic_program = Co_continuous and Atomic_test = assumption and Inf = Sup and Sup = Inf and top = bot and Z = top + apply unfold_locales + by (simp add: mbta_dual.star_one mbta_fix_dual.aL_one_circ) + +text \Theorem 52\ + +sublocale complete_mbt_algebra < mbta_pre: box_hoare_sound where box = "\x y . neg_assert (x * neg_assert y)" and circ = dual_star and d = "\x . (x * top) \ 1" and diamond = "\x y . (x * y * top) \ 1" and ite = "\x p y . (p * x) \ (neg_assert p * y)" and pre = "\x y . wpt (x ^ o * y)" and star = dual_star and uminus = neg_assert and while = "\p x . ((p * x) ^ \) * neg_assert p" and Atomic_program = Continuous and Atomic_test = assertion and Z = bot + apply unfold_locales + using mbta.star_one mbta_pre.aL_one_circ by auto + +text \Theorem 52\ + +sublocale complete_mbt_algebra < mbta_pre_dual: diamond_hoare_sound_2 where box = "\x y . neg_assume (x * neg_assume y)" and circ = omega and d = "\x . (x * bot) \ 1" and diamond = "\x y . (x * y * bot) \ 1" and ite = "\x p y . (p * x) \ (neg_assume p * y)" and less = greater and less_eq = greater_eq and sup = inf and pre = "\x y . wpb (x * y)" and uminus = neg_assume and while = "\p x . 
((p * x) ^ \) * neg_assume p" and bot = top and Atomic_program = Continuous and Atomic_test = assumption and Inf = Sup and Sup = Inf and top = bot and Z = top +proof (unfold_locales, rule impI) + fix p q x + let ?pt = "neg_assume p" + let ?qt = "neg_assume q" + assume "x * ?qt * bot \ 1 \ neg_assume ?pt * ?qt" + hence "x * ?qt * bot \ ?pt \ ?qt" + by (smt (z3) inf.absorb_iff1 inf_left_commute inf_commute inf_le1 le_supE mbta_dual.a_compl_intro mbta_dual.d_def order_trans) + hence "(x * ?qt * bot \ ?pt) * bot \ ?qt * bot" + using mbta.mult_left_isotone by blast + hence "x ^ \ * ?pt * bot \ 1 \ ?qt" + by (smt bot_comp inf_comp sup_left_isotone mbta_dual.a_d_closed mult_assoc omega_least) + thus "x ^ \ * ?pt * bot \ 1 \ mbta_pre_dual.aL * ?qt" + by (simp add: mbta_pre_dual.aL_one_circ) +qed + +text \Theorem 52\ + +sublocale complete_mbt_algebra < mbta_pre_fix: box_hoare_sound where box = "\x y . neg_assert (x * neg_assert y)" and circ = dual_omega and d = "\x . (x * top) \ 1" and diamond = "\x y . (x * y * top) \ 1" and ite = "\x p y . (p * x) \ (neg_assert p * y)" and pre = "\x y . wpt (x ^ o * y)" and star = dual_star and uminus = neg_assert and while = "\p x . ((p * x) ^ \) * neg_assert p" and Atomic_program = Co_continuous and Atomic_test = assertion and Z = bot + apply unfold_locales + using mbta.Omega_one mbta_pre_fix.aL_def by auto + +text \Theorem 52\ + +sublocale complete_mbt_algebra < mbta_pre_fix_dual: diamond_hoare_sound where box = "\x y . neg_assume (x * neg_assume y)" and circ = star and d = "\x . (x * bot) \ 1" and diamond = "\x y . (x * y * bot) \ 1" and ite = "\x p y . (p * x) \ (neg_assume p * y)" and less = greater and less_eq = greater_eq and sup = inf and pre = "\x y . wpb (x * y)" and uminus = neg_assume and while = "\p x . 
((p * x) ^ *) * neg_assume p" and bot = top and Atomic_program = Co_continuous and Atomic_test = assumption and Inf = Sup and Sup = Inf and top = bot and Z = top + apply unfold_locales + by (simp add: mbta_dual.star_one mbta_pre_fix_dual.aL_one_circ) + +text \Theorem 52\ + +sublocale complete_mbt_algebra < mbta: diamond_hoare_valid where box = "\x y . neg_assert (x * neg_assert y)" and circ = dual_star and d = "\x . (x * top) \ 1" and diamond = "\x y . (x * y * top) \ 1" and hoare_triple = "\p x q . p \ wpt(x * q)" and ite = "\x p y . (p * x) \ (neg_assert p * y)" and pre = "\x y . wpt (x * y)" and star = dual_star and uminus = neg_assert and while = "\p x . ((p * x) ^ \) * neg_assert p" and Atomic_program = Continuous and Atomic_test = assertion and Z = bot + apply unfold_locales + apply (simp add: mbta.aL_zero) + using mbta.aL_zero apply blast + subgoal for x t + proof + assume 1: "x \ while_program.While_program (*) neg_assert Continuous assertion (\p x . (p * x) ^ \ * neg_assert p) (\x p y . p * x \ neg_assert p * y) \ ascending_chain t \ tests.test_seq neg_assert t" + have "x \ Continuous" + apply (induct x rule: while_program.While_program.induct[where pre="\x y . wpt (x * y)" and while="\p x . ((p * x) ^ \) * neg_assert p"]) + apply unfold_locales + using 1 apply blast + apply simp + using mult_continuous apply blast + apply (metis assertion_continuous mbta.test_expression_test mult_continuous neg_assertion sup_continuous) + by (metis assertion_continuous dual_star_continuous mbta.test_expression_test mult_continuous neg_assertion) + thus "x * complete_tests.Sum Sup t = complete_tests.Sum Sup (\n. x * t n)" + using 1 by (smt continuous_dist_ascending_chain SUP_cong mbta.Sum_range) + qed + using wpt_def by auto + +text \Theorem 52\ + +sublocale complete_mbt_algebra < mbta_dual: box_hoare_valid where box = "\x y . neg_assume (x * neg_assume y)" and circ = omega and d = "\x . (x * bot) \ 1" and diamond = "\x y . (x * y * bot) \ 1" and hoare_triple = "\p x q . 
wpb(x ^ o * q) \ p" and ite = "\x p y . (p * x) \ (neg_assume p * y)" and less = greater and less_eq = greater_eq and sup = inf and pre = "\x y . wpb (x ^ o * y)" and uminus = neg_assume and while = "\p x . ((p * x) ^ \) * neg_assume p" and bot = top and Atomic_program = Continuous and Atomic_test = assumption and Inf = Sup and Sup = Inf and top = bot and Z = top +proof + fix p x q t + show "neg_assume q \ neg_assume p * neg_assume (x * neg_assume (neg_assume q)) \ neg_assume q \ whiledo.aL (\p x. (p * x) ^ \ * neg_assume p) (\x y. wpb (x ^ o * y)) 1 \ neg_assume (x ^ \ * neg_assume (neg_assume p))" + proof + let ?pt = "neg_assume p" + let ?qt = "neg_assume q" + assume "?qt \ ?pt * neg_assume (x * neg_assume ?qt)" + also have "... \ x ^ o * ?qt \ ?pt" + by (smt assumption_sup_comp_eq sup_left_isotone mbta.zero_right_mult_decreasing mbta_dual.pre_def neg_assume_def neg_assumption sup.commute sup.left_commute sup.left_idem wpb_def) + finally show "?qt \ mbta_dual.aL \ neg_assume (x ^ \ * neg_assume ?pt)" + by (smt dual_dual dual_omega_def dual_omega_greatest le_infI1 mbta_dual.a_d_closed mbta_dual.d_isotone mbta_dual.pre_def wpb_def) + qed + show "whiledo.aL (\p x. (p * x) ^ \ * neg_assume p) (\x y. wpb (x ^ o * y)) 1 = top \ whiledo.aL (\p x. (p * x) ^ \ * neg_assume p) (\x y. wpb (x ^ o * y)) 1 = 1" + using mbta_dual.L_def mbta_dual.aL_one_circ mbta_dual.a_top by auto + show "x \ while_program.While_program (*) neg_assume Continuous assumption (\p x. (p * x) ^ \ * neg_assume p) (\x p y. p * x \ neg_assume p * y) \ ord.descending_chain (\x y. y \ x) t \ tests.test_seq neg_assume t \ x * complete_tests.Prod Sup t = complete_tests.Prod Sup (\n. x * t n)" + proof + assume 1: "x \ while_program.While_program (*) neg_assume Continuous assumption (\p x . (p * x) ^ \ * neg_assume p) (\x p y . 
(p * x) \ (neg_assume p * y)) \ ord.descending_chain greater_eq t \ tests.test_seq neg_assume t" + have "x \ Continuous" + apply (induct x rule: while_program.While_program.induct[where pre="\x y . wpb (x ^ o * y)" and while="\p x . ((p * x) ^ \) * neg_assume p"]) + apply unfold_locales + using 1 apply blast + apply simp + apply (simp add: mult_continuous) + apply (metis assumption_continuous mbta_dual.test_expression_test mult_continuous neg_assumption inf_continuous) + by (metis assumption_continuous omega_continuous mbta_dual.test_expression_test mult_continuous neg_assumption) + thus "x * complete_tests.Prod Sup t = complete_tests.Prod Sup (\n. x * t n)" + using 1 by (smt ord.descending_chain_def ascending_chain_def continuous_dist_ascending_chain SUP_cong mbta_dual.Prod_range) + qed + show "(wpb (x ^ o * q) \ p) = (neg_assume (x * neg_assume q) \ p)" + by (simp add: mbta_dual.pre_def) +qed + +text \Theorem 52\ + +sublocale complete_mbt_algebra < mbta_pre_fix_dual: diamond_hoare_valid where box = "\x y . neg_assume (x * neg_assume y)" and circ = star and d = "\x . (x * bot) \ 1" and diamond = "\x y . (x * y * bot) \ 1" and hoare_triple = "\p x q . wpb(x * q) \ p" and ite = "\x p y . (p * x) \ (neg_assume p * y)" and less = greater and less_eq = greater_eq and sup = inf and pre = "\x y . wpb (x * y)" and uminus = neg_assume and while = "\p x . ((p * x) ^ *) * neg_assume p" and bot = top and Atomic_program = Co_continuous and Atomic_test = assumption and Inf = Sup and Sup = Inf and top = bot and Z = top + apply unfold_locales + using mbta_dual.star_one mbta_pre_fix_dual.aL_one_circ apply simp + using mbta_pre_fix_dual.aL_zero apply blast + subgoal for x t + proof + assume 1: "x \ while_program.While_program (*) neg_assume Co_continuous assumption (\p x . (p * x) ^ * * neg_assume p) (\x p y . 
(p * x) \ (neg_assume p * y)) \ ord.ascending_chain greater_eq t \ tests.test_seq neg_assume t" + have "x \ Co_continuous" + apply (induct x rule: while_program.While_program.induct[where pre="\x y . wpb (x * y)" and while="\p x . ((p * x) ^ * ) * neg_assume p"]) + apply unfold_locales + using 1 apply blast + apply simp + apply (simp add: mult_co_continuous) + apply (metis assumption_co_continuous mbta_dual.test_expression_test mult_co_continuous neg_assumption inf_co_continuous) + by (metis assumption_co_continuous star_co_continuous mbta_dual.test_expression_test mult_co_continuous neg_assumption) + thus "x * complete_tests.Sum Inf t = complete_tests.Sum Inf (\n. x * t n)" + using 1 by (smt descending_chain_def ord.ascending_chain_def co_continuous_dist_descending_chain INF_cong mbta_dual.Sum_range) + qed + using wpb_def by auto + +text \Theorem 52\ + +sublocale complete_mbt_algebra < mbta_pre_fix: box_hoare_valid where box = "\x y . neg_assert (x * neg_assert y)" and circ = dual_omega and d = "\x . (x * top) \ 1" and diamond = "\x y . (x * y * top) \ 1" and hoare_triple = "\p x q . p \ wpt(x ^ o * q)" and ite = "\x p y . (p * x) \ (neg_assert p * y)" and pre = "\x y . wpt (x ^ o * y)" and star = dual_star and uminus = neg_assert and while = "\p x . ((p * x) ^ \) * neg_assert p" and Atomic_program = Co_continuous and Atomic_test = assertion and Z = bot +proof + fix p x q t + show "neg_assert p * neg_assert (x * neg_assert (neg_assert q)) \ neg_assert q \ neg_assert (x ^ \ * neg_assert (neg_assert p)) \ neg_assert q \ whiledo.aL (\p x. (p * x) ^ \ * neg_assert p) (\x y. 
wpt (x ^ o * y)) 1" + proof + let ?pt = "neg_assert p" + let ?qt = "neg_assert q" + assume 1: "?pt * neg_assert (x * neg_assert ?qt) \ ?qt" + have "x ^ o * ?qt \ ?pt \ ?pt * neg_assert (x * neg_assert ?qt)" + by (smt (z3) inf.boundedI inf.cobounded1 inf.sup_monoid.add_commute le_infI2 inf_comp mbta.tests_dual.sub_commutative mbta.top_right_mult_increasing mbta_pre.pre_def mult.left_neutral mult_assoc top_comp wpt_def) + also have "... \ ?qt" + using 1 by simp + finally have "(x ^ o) ^ \ * ?pt * top \ ?qt * top" + using mbta.mult_left_isotone omega_least by blast + hence "neg_assert (x ^ \ * neg_assert ?pt) \ ?qt" + by (smt dual_omega_def inf_mono mbta.d_a_closed mbta.d_def mbta_pre.pre_def order_refl wpt_def mbta.a_d_closed) + thus "neg_assert (x ^ \ * neg_assert ?pt) \ ?qt \ mbta_pre_fix.aL" + using le_supI1 by blast + qed + show "whiledo.aL (\p x. (p * x) ^ \ * neg_assert p) (\x y. wpt (x ^ o * y)) 1 = bot \ whiledo.aL (\p x. (p * x) ^ \ * neg_assert p) (\x y. wpt (x ^ o * y)) 1 = 1" + using mbta.Omega_one mbta.a_top mbta_dual.vector_bot_closed mbta_pre_fix.aL_one_circ by auto + show "x \ while_program.While_program (*) neg_assert Co_continuous assertion (\p x. (p * x) ^ \ * neg_assert p) (\x p y. p * x \ neg_assert p * y) \ descending_chain t \ tests.test_seq neg_assert t \ x * complete_tests.Prod Inf t = complete_tests.Prod Inf (\n. x * t n)" + proof + assume 1: "x \ while_program.While_program (*) neg_assert Co_continuous assertion (\p x . (p * x) ^ \ * neg_assert p) (\x p y . p * x \ neg_assert p * y) \ descending_chain t \ tests.test_seq neg_assert t" + have "x \ Co_continuous" + apply (induct x rule: while_program.While_program.induct[where pre="\x y . wpt (x ^ o * y)" and while="\p x . 
((p * x) ^ \) * neg_assert p"]) + apply unfold_locales + using 1 apply blast + apply simp + apply (simp add: mult_co_continuous) + apply (metis assertion_co_continuous mbta.test_expression_test mult_co_continuous neg_assertion sup_co_continuous) + by (metis assertion_co_continuous dual_omega_co_continuous mbta.test_expression_test mult_co_continuous neg_assertion) + thus "x * complete_tests.Prod Inf t = complete_tests.Prod Inf (\n. x * t n)" + using 1 by (smt descending_chain_def co_continuous_dist_descending_chain INF_cong mbta.Prod_range) + qed + show "(p \ wpt (x ^ o * q)) = (p \ neg_assert (x * neg_assert q))" + by (simp add: mbta_pre.pre_def) +qed + +sublocale complete_mbt_algebra < mbta_dual: pre_post_spec_hoare where ite = "\x p y . (p * x) \ (neg_assume p * y)" and less = greater and less_eq = greater_eq and sup = inf and pre = "\x y . wpb (x ^ o * y)" and pre_post = "\p q . (p ^ o) * post (q ^ o)" and uminus = neg_assume and while = "\p x . ((p * x) ^ \) * neg_assume p" and bot = top and Atomic_program = Continuous and Atomic_test = assumption and Inf = Sup and Sup = Inf and top = bot .. + +sublocale complete_mbt_algebra < mbta_fix_dual: pre_post_spec_hoare where ite = "\x p y . (p * x) \ (neg_assume p * y)" and less = greater and less_eq = greater_eq and sup = inf and pre = "\x y . wpb (x ^ o * y)" and pre_post = "\p q . (p ^ o) * post (q ^ o)" and uminus = neg_assume and while = "\p x . ((p * x) ^ *) * neg_assume p" and bot = top and Atomic_program = Co_continuous and Atomic_test = assumption and Inf = Sup and Sup = Inf and top = bot .. + +sublocale complete_mbt_algebra < mbta_pre: pre_post_spec_hoare where ite = "\x p y . (p * x) \ (neg_assert p * y)" and pre = "\x y . wpt (x ^ o * y)" and pre_post = "\p q . p ^ o * (post q ^ o)" and uminus = neg_assert and while = "\p x . ((p * x) ^ \) * neg_assert p" and Atomic_program = Continuous and Atomic_test = assertion .. 
+ +sublocale complete_mbt_algebra < mbta_pre_fix: pre_post_spec_hoare where ite = "\<lambda>x p y . (p * x) \<squnion> (neg_assert p * y)" and pre = "\<lambda>x y . wpt (x ^ o * y)" and pre_post = "\<lambda>p q . p ^ o * (post q ^ o)" and uminus = neg_assert and while = "\<lambda>p x . ((p * x) ^ \<Omega>) * neg_assert p" and Atomic_program = Co_continuous and Atomic_test = assertion .. + +end + + diff --git a/thys/Correctness_Algebras/N_Algebras.thy b/thys/Correctness_Algebras/N_Algebras.thy new file mode 100644 --- /dev/null +++ b/thys/Correctness_Algebras/N_Algebras.thy @@ -0,0 +1,543 @@ +(* Title: N-Algebras + Author: Walter Guttmann + Maintainer: Walter Guttmann +*) + +section \<open>N-Algebras\<close> + +theory N_Algebras + +imports Stone_Kleene_Relation_Algebras.Iterings Base Lattice_Ordered_Semirings + +begin + +class C_left_n_algebra = bounded_idempotent_left_semiring + bounded_distrib_lattice + n + L +begin + +abbreviation C :: "'a \<Rightarrow> 'a" where "C x \<equiv> n(L) * top \<sqinter> x" + +text \<open>AACP Theorem 3.38\<close> + +lemma C_isotone: + "x \<le> y \<Longrightarrow> C x \<le> C y" + using inf.sup_right_isotone by auto + +text \<open>AACP Theorem 3.40\<close> + +lemma C_decreasing: + "C x \<le> x" + by simp + +end + +class left_n_algebra = C_left_n_algebra + + assumes n_dist_n_add : "n(x) \<squnion> n(y) = n(n(x) * top \<squnion> y)" + assumes n_export : "n(x) * n(y) = n(n(x) * y)" + assumes n_left_upper_bound : "n(x) \<le> n(x \<squnion> y)" + assumes n_nL_meet_L_nL0 : "n(L) * x = (x \<sqinter> L) \<squnion> n(L * bot) * x" + assumes n_n_L_split_n_n_L_L : "x * n(y) * L = x * bot \<squnion> n(x * n(y) * L) * L" + assumes n_sub_nL : "n(x) \<le> n(L)" + assumes n_L_decreasing : "n(x) * L \<le> x" + assumes n_L_T_meet_mult_combined: "C (x * y) * z \<le> C x * y * C z" + assumes n_n_top_split_n_top : "x * n(y) * top \<le> x * bot \<squnion> n(x * y) * top" + assumes n_top_meet_L_below_L : "x * top * y \<sqinter> L \<le> x * L * y" +begin + +subclass lattice_ordered_pre_left_semiring .. + +lemma n_L_T_meet_mult_below: + "C (x * y) \<le> C x * y" +proof - + have "C (x * y) \<le> C x * y * C 1" + by (meson order.trans mult_sub_right_one n_L_T_meet_mult_combined) + also have "...
\<le> C x * y" + by (metis mult_1_right mult_left_sub_dist_inf_right) + finally show ?thesis + . +qed + +text \<open>AACP Theorem 3.41\<close> + +lemma n_L_T_meet_mult_propagate: + "C x * y \<le> x * C y" +proof - + have "C x * y \<le> C x * 1 * C y" + by (metis mult_1_right mult_assoc n_L_T_meet_mult_combined mult_1_right) + also have "... \<le> x * C y" + by (simp add: mult_right_sub_dist_inf_right) + finally show ?thesis + . +qed + +text \<open>AACP Theorem 3.43\<close> + +lemma C_n_mult_closed: + "C (n(x) * y) = n(x) * y" + by (simp add: inf.absorb2 mult_isotone n_sub_nL) + +text \<open>AACP Theorem 3.40\<close> + +lemma meet_L_below_C: + "x \<sqinter> L \<le> C x" + by (simp add: le_supI1 n_nL_meet_L_nL0) + +text \<open>AACP Theorem 3.42\<close> + +lemma n_L_T_meet_mult: + "C (x * y) = C x * y" + apply (rule order.antisym) + apply (rule n_L_T_meet_mult_below) + by (smt (z3) C_n_mult_closed inf.boundedE inf.sup_monoid.add_assoc inf.sup_monoid.add_commute mult_right_sub_dist_inf mult_assoc) + +text \<open>AACP Theorem 3.42\<close> + +lemma C_mult_propagate: + "C x * y = C x * C y" + by (smt (z3) C_n_mult_closed order.eq_iff inf.left_commute inf.sup_monoid.add_commute mult_left_sub_dist_inf_right n_L_T_meet_mult_propagate) + +text \<open>AACP Theorem 3.32\<close> + +lemma meet_L_below_n_L: + "x \<sqinter> L \<le> n(L) * x" + by (simp add: n_nL_meet_L_nL0) + +text \<open>AACP Theorem 3.27\<close> + +lemma n_vector_meet_L: + "x * top \<sqinter> L \<le> x * L" + by (metis mult_1_right n_top_meet_L_below_L) + +lemma n_right_upper_bound: + "n(x) \<le> n(y \<squnion> x)" + by (simp add: n_left_upper_bound sup_commute) + +text \<open>AACP Theorem 3.1\<close> + +lemma n_isotone: + "x \<le> y \<Longrightarrow> n(x) \<le> n(y)" + by (metis le_iff_sup n_left_upper_bound) + +lemma n_add_left_zero: + "n(bot) \<squnion> n(x) = n(x)" + using le_iff_sup sup_bot_right sup_right_divisibility n_isotone by auto + +text \<open>AACP Theorem 3.13\<close> + +lemma n_mult_right_zero_L: + "n(x) * bot \<le> L" + by (meson bot_least mult_isotone n_L_decreasing n_sub_nL order_trans) + +lemma n_add_left_top: + "n(top) \<squnion> n(x) = n(top)" + by (simp add: sup_absorb1 n_isotone) + +text \<open>AACP Theorem 3.18\<close> + +lemma
n_n_L: + "n(n(x) * L) = n(x)" + by (metis order.antisym n_dist_n_add n_export n_sub_nL sup_bot_right sup_commute sup_top_left n_add_left_zero n_right_upper_bound) + +lemma n_mult_transitive: + "n(x) * n(x) \<le> n(x)" + by (metis mult_right_isotone n_export n_sub_nL n_n_L) + +lemma n_mult_left_absorb_add_sub: + "n(x) * (n(x) \<squnion> n(y)) \<le> n(x)" + by (metis mult_right_isotone n_dist_n_add n_export n_sub_nL n_n_L) + +text \<open>AACP Theorem 3.21\<close> + +lemma n_mult_left_lower_bound: + "n(x) * n(y) \<le> n(x)" + by (metis mult_right_isotone n_export n_sub_nL n_n_L) + +text \<open>AACP Theorem 3.20\<close> + +lemma n_mult_left_zero: + "n(bot) * n(x) = n(bot)" + by (metis n_export sup_absorb1 n_add_left_zero n_mult_left_lower_bound) + +lemma n_mult_right_one: + "n(x) * n(top) = n(x)" + using n_dist_n_add n_export sup_commute n_add_left_zero by fastforce + +lemma n_L_increasing: + "n(x) \<le> n(n(x) * L)" + by (simp add: n_n_L) + +text \<open>AACP Theorem 3.2\<close> + +lemma n_galois: + "n(x) \<le> n(y) \<longleftrightarrow> n(x) * L \<le> y" + by (metis mult_left_isotone n_L_decreasing n_L_increasing n_isotone order_trans) + +lemma n_add_n_top: + "n(x \<squnion> n(x) * top) = n(x)" + by (metis n_dist_n_add sup.idem sup_commute) + +text \<open>AACP Theorem 3.6\<close> + +lemma n_L_below_nL_top: + "L \<le> n(L) * top" + by (metis inf_top.left_neutral meet_L_below_n_L) + +text \<open>AACP Theorem 3.4\<close> + +lemma n_less_eq_char_n: + "x \<le> y \<longleftrightarrow> x \<le> y \<squnion> L \<and> C x \<le> y \<squnion> n(y) * top" +proof + assume "x \<le> y" + thus "x \<le> y \<squnion> L \<and> C x \<le> y \<squnion> n(y) * top" + by (simp add: inf.coboundedI2 le_supI1) +next + assume 1: "x \<le> y \<squnion> L \<and> C x \<le> y \<squnion> n(y) * top" + hence "x \<le> y \<squnion> (x \<sqinter> L)" + using sup_commute sup_inf_distrib2 by force + also have "... \<le> y \<squnion> C x" + using sup_right_isotone meet_L_below_C by blast + also have "...
\<le> y \<squnion> n(y) * top" + using 1 by simp + finally have "x \<le> y \<squnion> (L \<sqinter> n(y) * top)" + using 1 by (simp add: sup_inf_distrib1) + thus "x \<le> y" + by (metis inf_commute n_L_decreasing order_trans sup_absorb1 n_vector_meet_L) +qed + +text \<open>AACP Theorem 3.31\<close> + +lemma n_L_decreasing_meet_L: + "n(x) * L \<le> x \<sqinter> L" + using n_sub_nL n_galois by auto + +text \<open>AACP Theorem 3.5\<close> + +lemma n_zero_L_zero: + "n(bot) * L = bot" + by (simp add: le_bot n_L_decreasing) + +lemma n_L_top_below_L: + "L * top \<le> L" +proof - + have "n(L * bot) * L * top \<le> L * bot" + by (metis dense_top_closed mult_isotone n_L_decreasing zero_vector mult_assoc) + hence "n(L * bot) * L * top \<le> L" + using order_lesseq_imp zero_right_mult_decreasing by blast + hence "n(L) * L * top \<le> L" + by (metis inf.absorb2 n_nL_meet_L_nL0 order.refl sup.absorb_iff1 top_right_mult_increasing mult_assoc) + thus "L * top \<le> L" + by (metis inf.absorb2 inf.sup_monoid.add_commute n_L_decreasing n_L_below_nL_top n_vector_meet_L) +qed + +text \<open>AACP Theorem 3.9\<close> + +lemma n_L_top_L: + "L * top = L" + by (simp add: order.antisym top_right_mult_increasing n_L_top_below_L) + +text \<open>AACP Theorem 3.10\<close> + +lemma n_L_below_L: + "L * x \<le> L" + by (metis mult_right_isotone top.extremum n_L_top_L) + +text \<open>AACP Theorem 3.7\<close> + +lemma n_nL_nT: + "n(L) = n(top)" + using order.eq_iff n_sub_nL n_add_left_top by auto + +text \<open>AACP Theorem 3.8\<close> + +lemma n_L_L: + "n(L) * L = L" + using order.antisym meet_L_below_n_L n_L_decreasing_meet_L by fastforce + +lemma n_top_L: + "n(top) * L = L" + using n_L_L n_nL_nT by auto + +text \<open>AACP Theorem 3.23\<close> + +lemma n_n_L_split_n_L: + "x * n(y) * L \<le> x * bot \<squnion> n(x * y) * L" + by (metis n_n_L_split_n_n_L_L n_L_decreasing mult_assoc mult_left_isotone mult_right_isotone n_isotone sup_right_isotone) + +text \<open>AACP Theorem 3.12\<close> + +lemma n_L_split_n_L_L: + "x * L = x * bot \<squnion> n(x * L) * L" + apply (rule order.antisym) + apply (metis mult_assoc n_n_L_split_n_L n_L_L) + by (simp add: mult_right_isotone n_L_decreasing) + +text \<open>AACP
Theorem 3.11\<close> + +lemma n_L_split_L: + "x * L \<le> x * bot \<squnion> L" + by (metis n_n_L_split_n_n_L_L n_sub_nL sup_right_isotone mult_assoc n_L_L n_galois) + +text \<open>AACP Theorem 3.24\<close> + +lemma n_split_top: + "x * n(y) * top \<le> x * y \<squnion> n(x * y) * top" +proof - + have "x * bot \<squnion> n(x * y) * top \<le> x * y \<squnion> n(x * y) * top" + by (meson bot_least mult_isotone order.refl sup_left_isotone) + thus ?thesis + using order.trans n_n_top_split_n_top by blast +qed + +text \<open>AACP Theorem 3.9\<close> + +lemma n_L_L_L: + "L * L = L" + by (metis inf.sup_monoid.add_commute inf_absorb1 n_L_below_L n_L_top_L n_vector_meet_L) + +text \<open>AACP Theorem 3.9\<close> + +lemma n_L_top_L_L: + "L * top * L = L" + by (simp add: n_L_L_L n_L_top_L) + +text \<open>AACP Theorem 3.19\<close> + +lemma n_n_nL: + "n(x) = n(x) * n(L)" + by (simp add: n_export n_n_L) + +lemma n_L_mult_idempotent: + "n(L) * n(L) = n(L)" + using n_n_nL by auto + +text \<open>AACP Theorem 3.22\<close> + +lemma n_n_L_n: + "n(x * n(y) * L) \<le> n(x * y)" + by (simp add: mult_right_isotone n_L_decreasing mult_assoc n_isotone) + +text \<open>AACP Theorem 3.3\<close> + +lemma n_less_eq_char: + "x \<le> y \<longleftrightarrow> x \<le> y \<squnion> L \<and> x \<le> y \<squnion> n(y) * top" + by (meson inf.coboundedI2 le_supI1 n_less_eq_char_n) + +text \<open>AACP Theorem 3.28\<close> + +lemma n_top_meet_L_split_L: + "x * top * y \<sqinter> L \<le> x * bot \<squnion> L * y" +proof - + have "x * top * y \<sqinter> L \<le> x * bot \<squnion> n(x * L) * L * y" + by (smt n_top_meet_L_below_L mult_assoc n_L_L_L n_L_split_n_L_L mult_right_dist_sup mult_left_zero) + also have "... \<le> x * bot \<squnion> x * L * y" + using mult_left_isotone n_L_decreasing sup_right_isotone by force + also have "... \<le> x * bot \<squnion> (x * bot \<squnion> L) * y" + using mult_left_isotone sup_right_isotone n_L_split_L by blast + also have "... = x * bot \<squnion> x * bot * y \<squnion> L * y" + by (simp add: mult_right_dist_sup sup_assoc) + also have "... = x * bot \<squnion> L * y" + by (simp add: mult_assoc) + finally show ?thesis + .
+qed + +text \<open>AACP Theorem 3.29\<close> + +lemma n_top_meet_L_L_meet_L: + "x * top * y \<sqinter> L = x * L * y \<sqinter> L" + apply (rule order.antisym) + apply (simp add: n_top_meet_L_below_L) + by (metis inf.sup_monoid.add_commute inf.sup_right_isotone mult_isotone order.refl top.extremum) + +lemma n_n_top_below_n_L: + "n(x * top) \<le> n(x * L)" + by (meson order.trans n_L_decreasing_meet_L n_galois n_vector_meet_L) + +text \<open>AACP Theorem 3.14\<close> + +lemma n_n_top_n_L: + "n(x * top) = n(x * L)" + by (metis order.antisym mult_right_isotone n_isotone n_n_top_below_n_L top_greatest) + +text \<open>AACP Theorem 3.30\<close> + +lemma n_meet_L_0_below_0_meet_L: + "(x \<sqinter> L) * bot \<le> x * bot \<sqinter> L" + by (meson inf.boundedE inf.boundedI mult_right_sub_dist_inf_left zero_right_mult_decreasing) + +text \<open>AACP Theorem 3.15\<close> + +lemma n_n_L_below_L: + "n(x) * L \<le> x * L" + by (metis mult_assoc mult_left_isotone n_L_L_L n_L_decreasing) + +lemma n_n_L_below_n_L_L: + "n(x) * L \<le> n(x * L) * L" + by (simp add: mult_left_isotone n_galois n_n_L_below_L) + +text \<open>AACP Theorem 3.16\<close> + +lemma n_below_n_L: + "n(x) \<le> n(x * L)" + by (simp add: n_galois n_n_L_below_L) + +text \<open>AACP Theorem 3.17\<close> + +lemma n_below_n_L_mult: + "n(x) \<le> n(L) * n(x)" + by (metis n_export order_trans meet_L_below_n_L n_L_decreasing_meet_L n_isotone n_n_L) + +text \<open>AACP Theorem 3.33\<close> + +lemma n_meet_L_below: + "n(x) \<sqinter> L \<le> x" + by (meson inf.coboundedI1 inf.coboundedI2 le_supI2 sup.cobounded1 top_right_mult_increasing n_less_eq_char) + +text \<open>AACP Theorem 3.35\<close> + +lemma n_meet_L_top_below_n_L: + "(n(x) \<sqinter> L) * top \<le> n(x) * L" +proof - + have "(n(x) \<sqinter> L) * top \<le> n(x) * top \<sqinter> L * top" + by (meson mult_right_sub_dist_inf) + thus ?thesis + by (metis n_L_top_L n_vector_meet_L order_trans) +qed + +text \<open>AACP Theorem 3.34\<close> + +lemma n_meet_L_top_below: + "(n(x) \<sqinter> L) * top \<le> x" + using order.trans n_L_decreasing n_meet_L_top_below_n_L by blast + +text \<open>AACP Theorem 3.36\<close> + +lemma n_n_meet_L: + "n(x) = n(x \<sqinter> L)" + by (meson order.antisym inf.cobounded1 n_L_decreasing_meet_L
n_galois n_isotone) + +lemma n_T_below_n_meet: + "n(x) * top = n(C x) * top" + by (metis inf.absorb2 inf.sup_monoid.add_assoc meet_L_below_C n_n_meet_L) + +text \<open>AACP Theorem 3.44\<close> + +lemma n_C: + "n(C x) = n(x)" + by (metis n_T_below_n_meet n_export n_mult_right_one) + +text \<open>AACP Theorem 3.37\<close> + +lemma n_T_meet_L: + "n(x) * top \<sqinter> L = n(x) * L" + by (metis antisym_conv n_L_decreasing_meet_L n_n_L n_n_top_n_L n_vector_meet_L) + +text \<open>AACP Theorem 3.39\<close> + +lemma n_L_top_meet_L: + "C L = L" + by (simp add: n_L_L n_T_meet_L) + +end + +class n_algebra = left_n_algebra + idempotent_left_zero_semiring +begin + +(* independence of axioms, checked in n_algebra without the respective axiom: + lemma n_dist_n_add : "n(x) \<squnion> n(y) = n(n(x) * top \<squnion> y)" nitpick [expect=genuine,card=5] oops + lemma n_export : "n(x) * n(y) = n(n(x) * y)" nitpick [expect=genuine,card=4] oops + lemma n_left_upper_bound : "n(x) \<le> n(x \<squnion> y)" nitpick [expect=genuine,card=5] oops + lemma n_nL_meet_L_nL0 : "n(L) * x = (x \<sqinter> L) \<squnion> n(L * bot) * x" nitpick [expect=genuine,card=2] oops + lemma n_n_L_split_n_n_L_L : "x * n(y) * L = x * bot \<squnion> n(x * n(y) * L) * L" nitpick [expect=genuine,card=6] oops + lemma n_sub_nL : "n(x) \<le> n(L)" nitpick [expect=genuine,card=2] oops + lemma n_L_decreasing : "n(x) * L \<le> x" nitpick [expect=genuine,card=3] oops + lemma n_L_T_meet_mult_combined: "C (x * y) * z \<le> C x * y * C z" nitpick [expect=genuine,card=4] oops + lemma n_n_top_split_n_top : "x * n(y) * top \<le> x * bot \<squnion> n(x * y) * top" nitpick [expect=genuine,card=4] oops + lemma n_top_meet_L_below_L : "x * top * y \<sqinter> L \<le> x * L * y" nitpick [expect=genuine,card=5] oops +*) + +text \<open>AACP Theorem 3.25\<close> + +lemma n_top_split_0: + "n(x) * top * y \<le> x * y \<squnion> n(x * bot) * top" +proof - + have 1: "n(x) * top * y \<sqinter> L \<le> x * y" + using inf.coboundedI1 mult_left_isotone n_L_decreasing_meet_L n_top_meet_L_L_meet_L by force + have "n(x) * top * y = n(x) * n(L) * top * y" + using n_n_nL by auto + also have "...
= n(x) * ((top * y \<sqinter> L) \<squnion> n(L * bot) * top * y)" + by (metis mult_assoc n_nL_meet_L_nL0) + also have "... \<le> n(x) * (top * y \<sqinter> L) \<squnion> n(x) * n(L * bot) * top" + by (metis sup_right_isotone mult_assoc mult_left_dist_sup mult_right_isotone top_greatest) + also have "... \<le> (n(x) * top * y \<sqinter> L) \<squnion> n(n(x) * L * bot) * top" + by (smt sup_left_isotone order.trans inf_greatest mult_assoc mult_left_sub_dist_inf_left mult_left_sub_dist_inf_right n_export n_galois n_sub_nL) + also have "... \<le> x * y \<squnion> n(n(x) * L * bot) * top" + using 1 sup_left_isotone by blast + also have "... \<le> x * y \<squnion> n(x * bot) * top" + using mult_left_isotone n_galois n_isotone order.refl sup_right_isotone by auto + finally show ?thesis + . +qed + +text \<open>AACP Theorem 3.26\<close> + +lemma n_top_split: + "n(x) * top * y \<le> x * y \<squnion> n(x * y) * top" + by (metis order.trans sup_bot_right mult_assoc sup_right_isotone mult_left_isotone mult_left_sub_dist_sup_right n_isotone n_top_split_0) + +(* +lemma n_zero: "n(bot) = bot" nitpick [expect=genuine,card=2] oops +lemma n_one: "n(1) = bot" nitpick [expect=genuine,card=2] oops +lemma n_nL_one: "n(L) = 1" nitpick [expect=genuine,card=2] oops +lemma n_nT_one: "n(top) = 1" nitpick [expect=genuine,card=2] oops +lemma n_n_zero: "n(x) = n(x * bot)" nitpick [expect=genuine,card=2] oops +lemma n_dist_add: "n(x) \<squnion> n(y) = n(x \<squnion> y)" nitpick [expect=genuine,card=4] oops +lemma n_L_split: "x * n(y) * L = x * bot \<squnion> n(x * y) * L" nitpick [expect=genuine,card=3] oops +lemma n_split: "x \<le> x * bot \<squnion> n(x * L) * top" nitpick [expect=genuine,card=2] oops +lemma n_mult_top_1: "n(x * y) \<le> n(x * n(y) * top)" nitpick [expect=genuine,card=3] oops +lemma l91_1: "n(L) * x \<le> n(x * top) * top" nitpick [expect=genuine,card=3] oops +lemma meet_domain_top: "x \<sqinter> n(y) * top = n(y) * x" nitpick [expect=genuine,card=3] oops +lemma meet_domain_2: "x \<sqinter> n(y) * top \<le> n(L) * x" nitpick [expect=genuine,card=4] oops +lemma n_nL_top_n_top_meet_L_top_2: "n(L) * x * top \<le> n(x * top \<sqinter> L) * top" nitpick
[expect=genuine,card=3] oops +lemma n_nL_top_n_top_meet_L_top_1: "n(x * top \<sqinter> L) * top \<le> n(L) * x * top" nitpick [expect=genuine,card=2] oops +lemma l9: "x * bot \<sqinter> L \<le> n(x * L) * L" nitpick [expect=genuine,card=4] oops +lemma l18_2: "n(x * L) * L \<le> n(x) * L" nitpick [expect=genuine,card=3] oops +lemma l51_1: "n(x) * L \<le> (x \<sqinter> L) * bot" nitpick [expect=genuine,card=2] oops +lemma l51_2: "(x \<sqinter> L) * bot \<le> n(x) * L" nitpick [expect=genuine,card=4] oops + +lemma n_split_equal: "x \<squnion> n(x * L) * top = x * bot \<squnion> n(x * L) * top" nitpick [expect=genuine,card=2] oops +lemma n_split_top: "x * top \<le> x * bot \<squnion> n(x * L) * top" nitpick [expect=genuine,card=2] oops +lemma n_mult: "n(x * n(y) * L) = n(x * y)" nitpick [expect=genuine,card=3] oops +lemma n_mult_1: "n(x * y) \<le> n(x * n(y) * L)" nitpick [expect=genuine,card=3] oops +lemma n_mult_top: "n(x * n(y) * top) = n(x * y)" nitpick [expect=genuine,card=3] oops +lemma n_mult_right_upper_bound: "n(x * y) \<le> n(z) \<longleftrightarrow> n(x) \<le> n(z) \<and> x * n(y) * L \<le> x * bot \<squnion> n(z) * L" nitpick [expect=genuine,card=2] oops +lemma meet_domain: "x \<sqinter> n(y) * z = n(y) * (x \<sqinter> z)" nitpick [expect=genuine,card=3] oops +lemma meet_domain_1: "x \<sqinter> n(y) * z \<le> n(y) * x" nitpick [expect=genuine,card=3] oops +lemma meet_domain_top_3: "x \<sqinter> n(y) * top \<le> n(y) * x" nitpick [expect=genuine,card=3] oops +lemma n_n_top_n_top_split_n_n_top_top: "n(x) * top \<sqinter> x * n(y) * top = x * bot \<squnion> n(x * n(y) * top) * top" nitpick [expect=genuine,card=2] oops +lemma n_n_top_n_top_split_n_n_top_top_1: "x * bot \<squnion> n(x * n(y) * top) * top \<le> n(x) * top \<sqinter> x * n(y) * top" nitpick [expect=genuine,card=5] oops +lemma n_n_top_n_top_split_n_n_top_top_2: "n(x) * top \<sqinter> x * n(y) * top \<le> x * bot \<squnion> n(x * n(y) * top) * top" nitpick [expect=genuine,card=2] oops +lemma n_nL_top_n_top_meet_L_top: "n(L) * x * top = n(x * top \<sqinter> L) * top" nitpick [expect=genuine,card=2] oops +lemma l18: "n(x) * L = n(x * L) * L" nitpick [expect=genuine,card=3] oops +lemma l22: "x * bot \<sqinter> L = n(x) * L" nitpick [expect=genuine,card=2]
oops +lemma l22_1: "x * bot \<sqinter> L = n(x * L) * L" nitpick [expect=genuine,card=2] oops +lemma l22_2: "x \<sqinter> L = n(x) * L" nitpick [expect=genuine,card=3] oops +lemma l22_3: "x \<sqinter> L = n(x * L) * L" nitpick [expect=genuine,card=3] oops +lemma l22_4: "x \<sqinter> L \<le> n(x) * L" nitpick [expect=genuine,card=3] oops +lemma l22_5: "x * bot \<sqinter> L \<le> n(x) * L" nitpick [expect=genuine,card=4] oops +lemma l23: "x * top \<sqinter> L = n(x) * L" nitpick [expect=genuine,card=3] oops +lemma l51: "n(x) * L = (x \<sqinter> L) * bot" nitpick [expect=genuine,card=2] oops +lemma l91: "x = x * top \<sqinter> n(L) * x \<squnion> n(x) * top" nitpick [expect=genuine,card=3] oops +lemma l92: "x = x * top \<sqinter> n(L) * x \<squnion> n(x \<sqinter> L) * top" nitpick [expect=genuine,card=3] oops +lemma "x \<sqinter> L \<le> n(x) * top" nitpick [expect=genuine,card=3] oops +lemma n_meet_comp: "n(x) \<sqinter> n(y) \<le> n(x) * n(y)" nitpick [expect=genuine,card=3] oops + +lemma n_n_meet_L_n_zero: "n(x) = (n(x) \<sqinter> L) \<squnion> n(x * bot)" oops +lemma n_below_n_zero: "n(x) \<le> x \<squnion> n(x * bot)" oops +lemma n_n_top_split_n_L_n_zero_top: "n(x) * top = n(x) * L \<squnion> n(x * bot) * top" oops +lemma n_meet_L_0_0_meet_L: "(x \<sqinter> L) * bot = x * bot \<sqinter> L" oops +*) + +end + +end + diff --git a/thys/Correctness_Algebras/N_Omega_Algebras.thy b/thys/Correctness_Algebras/N_Omega_Algebras.thy new file mode 100644 --- /dev/null +++ b/thys/Correctness_Algebras/N_Omega_Algebras.thy @@ -0,0 +1,580 @@ +(* Title: N-Omega-Algebras + Author: Walter Guttmann + Maintainer: Walter Guttmann +*) + +section \<open>N-Omega-Algebras\<close> + +theory N_Omega_Algebras + +imports Omega_Algebras Recursion + +begin + +class itering_apx = bounded_itering + n_algebra_apx +begin + +lemma circ_L: + "L\<^sup>\<circ> = L \<squnion> 1" + by (metis sup_commute mult_top_circ n_L_top_L) + +lemma C_circ_import: + "C (x\<^sup>\<circ>) \<le> (C x)\<^sup>\<circ>" +proof - + have 1: "C x * x\<^sup>\<circ> \<le> (C x)\<^sup>\<circ> * C x" + using C_mult_propagate circ_simulate order.eq_iff by blast + have "C (x\<^sup>\<circ>) = C (1 \<squnion> x * x\<^sup>\<circ>)" + by (simp add: circ_left_unfold) + also have "...
= C 1 \<squnion> C (x * x\<^sup>\<circ>)" + by (simp add: inf_sup_distrib1) + also have "... \<le> 1 \<squnion> C (x * x\<^sup>\<circ>)" + using sup_left_isotone by auto + also have "... = 1 \<squnion> C x * x\<^sup>\<circ>" + by (simp add: n_L_T_meet_mult) + also have "... \<le> (C x)\<^sup>\<circ>" + using 1 by (meson circ_reflexive order.trans le_supI right_plus_below_circ) + finally show ?thesis + . +qed + +text \<open>AACP Theorem 4.3 and Theorem 4.4\<close> + +lemma circ_apx_isotone: + assumes "x \<sqsubseteq> y" + shows "x\<^sup>\<circ> \<sqsubseteq> y\<^sup>\<circ>" +proof - + have 1: "x \<le> y \<squnion> L \<and> C y \<le> x \<squnion> n(x) * top" + using assms apx_def by auto + have "C (y\<^sup>\<circ>) \<le> (C y)\<^sup>\<circ>" + by (simp add: C_circ_import) + also have "... \<le> x\<^sup>\<circ> \<squnion> x\<^sup>\<circ> * n(x) * top" + using 1 by (metis circ_isotone circ_left_top circ_unfold_sum mult_assoc) + also have "... \<le> x\<^sup>\<circ> \<squnion> (x\<^sup>\<circ> * bot \<squnion> n(x\<^sup>\<circ> * x) * top)" + using n_n_top_split_n_top sup_right_isotone by blast + also have "... \<le> x\<^sup>\<circ> \<squnion> (x\<^sup>\<circ> * bot \<squnion> n(x\<^sup>\<circ>) * top)" + using circ_plus_same left_plus_below_circ mult_left_isotone n_isotone sup_right_isotone by auto + also have "... = x\<^sup>\<circ> \<squnion> n(x\<^sup>\<circ>) * top" + by (meson sup.left_idem sup_relative_same_increasing zero_right_mult_decreasing) + finally have 2: "C (y\<^sup>\<circ>) \<le> x\<^sup>\<circ> \<squnion> n(x\<^sup>\<circ>) * top" + . + have "x\<^sup>\<circ> \<le> y\<^sup>\<circ> * L\<^sup>\<circ>" + using 1 by (metis circ_sup_1 circ_back_loop_fixpoint circ_isotone n_L_below_L le_iff_sup mult_assoc) + also have "... = y\<^sup>\<circ> \<squnion> y\<^sup>\<circ> * L" + using circ_L mult_left_dist_sup sup_commute by auto + also have "...
\<le> y\<^sup>\<circ> \<squnion> y\<^sup>\<circ> * bot \<squnion> L" + using n_L_split_L semiring.add_left_mono sup_assoc by auto + finally have "x\<^sup>\<circ> \<le> y\<^sup>\<circ> \<squnion> L" + using sup.absorb1 zero_right_mult_decreasing by force + thus "x\<^sup>\<circ> \<sqsubseteq> y\<^sup>\<circ>" + using 2 by (simp add: apx_def) +qed + +end + +class n_omega_algebra_1 = bounded_left_zero_omega_algebra + n_algebra_apx + Omega + + assumes Omega_def: "x\<^sup>\<Omega> = n(x\<^sup>\<omega>) * L \<squnion> x\<^sup>\<star>" +begin + +text \<open>AACP Theorem 8.13\<close> + +lemma C_omega_export: + "C (x\<^sup>\<omega>) = (C x)\<^sup>\<omega>" +proof - + have "C (x\<^sup>\<omega>) = C x * C (x\<^sup>\<omega>)" + by (metis C_mult_propagate n_L_T_meet_mult omega_unfold) + hence 1: "C (x\<^sup>\<omega>) \<le> (C x)\<^sup>\<omega>" + using eq_refl omega_induct_mult by auto + have "(C x)\<^sup>\<omega> = C (x * (C x)\<^sup>\<omega>)" + using n_L_T_meet_mult omega_unfold by auto + also have "... \<le> C (x\<^sup>\<omega>)" + by (metis calculation C_decreasing inf_le1 le_infI omega_induct_mult) + finally show ?thesis + using 1 order.antisym by blast +qed + +text \<open>AACP Theorem 8.2\<close> + +lemma L_mult_star: + "L * x\<^sup>\<star> = L" + by (metis n_L_top_L star.circ_left_top mult_assoc) + +text \<open>AACP Theorem 8.3\<close> + +lemma mult_L_star: + "(x * L)\<^sup>\<star> = 1 \<squnion> x * L" + by (metis L_mult_star star.circ_mult_1 mult_assoc) + +lemma mult_L_omega_below: + "(x * L)\<^sup>\<omega> \<le> x * L" + by (metis mult_right_isotone n_L_below_L omega_slide) + +text \<open>AACP Theorem 8.5\<close> + +lemma mult_L_sup_star: + "(x * L \<squnion> y)\<^sup>\<star> = y\<^sup>\<star> \<squnion> y\<^sup>\<star> * x * L" + using L_mult_star mult_1_right mult_left_dist_sup star_sup_1 sup_commute mult_L_star mult_assoc by auto + +lemma mult_L_sup_omega_below: + "(x * L \<squnion> y)\<^sup>\<omega> \<le> y\<^sup>\<omega> \<squnion> y\<^sup>\<star> * x * L" +proof - + have "(x * L \<squnion> y)\<^sup>\<omega> \<le> y\<^sup>\<star> * x * L \<squnion> (y\<^sup>\<star> * x * L)\<^sup>\<star> * y\<^sup>\<omega>" + by (metis sup_commute mult_assoc omega_decompose sup_left_isotone mult_L_omega_below) + also have "...
\<le> y\<^sup>\<omega> \<squnion> y\<^sup>\<star> * x * L" + by (smt (z3) le_iff_sup le_supI mult_left_dist_sup n_L_below_L star_left_induct sup.cobounded2 sup.left_idem sup.orderE sup_assoc sup_commute mult_assoc) + finally show ?thesis + . +qed + +lemma n_Omega_isotone: + "x \<le> y \<Longrightarrow> x\<^sup>\<Omega> \<le> y\<^sup>\<Omega>" + by (metis Omega_def sup_mono mult_left_isotone n_isotone omega_isotone star_isotone) + +lemma n_star_below_Omega: + "x\<^sup>\<star> \<le> x\<^sup>\<Omega>" + by (simp add: Omega_def) + +lemma mult_L_star_mult_below: + "(x * L)\<^sup>\<star> * y \<le> y \<squnion> x * L" + by (metis sup_right_isotone mult_assoc mult_right_isotone n_L_below_L star_left_induct) + +end + +sublocale n_omega_algebra_1 < star: itering_apx where circ = star .. + +class n_omega_algebra = n_omega_algebra_1 + n_algebra_apx + + assumes n_split_omega_mult: "C (x\<^sup>\<omega>) \<le> x\<^sup>\<star> * n(x\<^sup>\<omega>) * top" + assumes tarski: "x * L \<le> x * L * x * L" +begin + +text \<open>AACP Theorem 8.4\<close> + +lemma mult_L_omega: + "(x * L)\<^sup>\<omega> = x * L" + apply (rule order.antisym) + apply (rule mult_L_omega_below) + using omega_induct_mult tarski mult_assoc by auto + +text \<open>AACP Theorem 8.6\<close> + +lemma mult_L_sup_omega: + "(x * L \<squnion> y)\<^sup>\<omega> = y\<^sup>\<omega> \<squnion> y\<^sup>\<star> * x * L" + apply (rule order.antisym) + apply (rule mult_L_sup_omega_below) + by (metis le_supI omega_isotone omega_sub_dist_2 sup.cobounded2 sup_commute mult_L_omega mult_assoc) + +text \<open>AACP Theorem 8.1\<close> + +lemma tarski_mult_top_idempotent: + "x * L = x * L * x * L" + by (metis omega_unfold mult_L_omega mult_assoc) + +text \<open>AACP Theorem 8.7\<close> + +lemma n_below_n_omega: + "n(x) \<le> n(x\<^sup>\<omega>)" +proof - + have "n(x) * L \<le> n(x) * L * n(x) * L" + by (simp add: tarski) + also have "...
\ x * n(x) * L" + by (simp add: mult_isotone n_L_decreasing) + finally have "n(x) * L \ x\<^sup>\" + by (simp add: omega_induct_mult mult_assoc) + thus ?thesis + by (simp add: n_galois) +qed + +text \AACP Theorem 8.14\ + +lemma n_split_omega_sup_zero: + "C (x\<^sup>\) \ x\<^sup>\ * bot \ n(x\<^sup>\) * top" +proof - + have "n(x\<^sup>\) * top \ x * (x\<^sup>\ * bot \ n(x\<^sup>\) * top) = n(x\<^sup>\) * top \ x * x\<^sup>\ * bot \ x * n(x\<^sup>\) * top" + by (simp add: mult_left_dist_sup sup_assoc mult_assoc) + also have "... \ n(x\<^sup>\) * top \ x * x\<^sup>\ * bot \ x * bot \ n(x\<^sup>\) * top" + by (metis sup_assoc sup_right_isotone n_n_top_split_n_top omega_unfold) + also have "... = x * x\<^sup>\ * bot \ n(x\<^sup>\) * top" + by (smt sup_assoc sup_commute sup_left_top sup_bot_right mult_assoc mult_left_dist_sup) + also have "... \ x\<^sup>\ * bot \ n(x\<^sup>\) * top" + by (metis sup_left_isotone mult_left_isotone star.left_plus_below_circ) + finally have "x\<^sup>\ * n(x\<^sup>\) * top \ x\<^sup>\ * bot \ n(x\<^sup>\) * top" + using star_left_induct mult_assoc by auto + thus ?thesis + using n_split_omega_mult order_trans by blast +qed + +lemma n_split_omega_sup: + "C (x\<^sup>\) \ x\<^sup>\ \ n(x\<^sup>\) * top" + by (metis sup_left_isotone n_split_omega_sup_zero order_trans zero_right_mult_decreasing) + +text \AACP Theorem 8.12\ + +lemma n_dist_omega_star: + "n(y\<^sup>\ \ y\<^sup>\ * z) = n(y\<^sup>\) \ n(y\<^sup>\ * z)" +proof - + have "n(y\<^sup>\ \ y\<^sup>\ * z) = n(C (y\<^sup>\) \ C (y\<^sup>\ * z))" + by (metis inf_sup_distrib1 n_C) + also have "... \ n(C (y\<^sup>\) \ y\<^sup>\ * z)" + using n_isotone semiring.add_right_mono sup_commute by auto + also have "... \ n(y\<^sup>\ * bot \ n(y\<^sup>\) * top \ y\<^sup>\ * z)" + using n_isotone semiring.add_right_mono n_split_omega_sup_zero by blast + also have "... 
= n(y\<^sup>\) \ n(y\<^sup>\ * z)" + by (smt sup_assoc sup_commute sup_bot_right mult_left_dist_sup n_dist_n_add) + finally show ?thesis + by (simp add: order.antisym n_isotone) +qed + +lemma mult_L_sup_circ_below: + "(x * L \ y)\<^sup>\ \ n(y\<^sup>\) * L \ y\<^sup>\ \ y\<^sup>\ * x * L" +proof - + have "(x * L \ y)\<^sup>\ \ n(y\<^sup>\ \ y\<^sup>\ * x * L) * L \ (x * L \ y)\<^sup>\" + by (simp add: Omega_def mult_L_sup_omega) + also have "... = n(y\<^sup>\) * L \ n(y\<^sup>\ * x * L) * L \ (x * L \ y)\<^sup>\" + by (simp add: semiring.distrib_right mult_assoc n_dist_omega_star) + also have "... \ n(y\<^sup>\) * L \ y\<^sup>\ \ y\<^sup>\ * x * L" + by (smt (z3) le_supI sup.cobounded1 sup_assoc sup_commute sup_idem sup_right_isotone mult_L_sup_star n_L_decreasing) + finally show ?thesis + . +qed + +lemma n_mult_omega_L_below_zero: + "n(y * x\<^sup>\) * L \ y * x\<^sup>\ * bot \ y * n(x\<^sup>\) * L" +proof - + have "n(y * x\<^sup>\) * L \ C (y * x\<^sup>\) \ L" + by (metis n_C n_L_decreasing_meet_L) + also have "... \ y * C (x\<^sup>\) \ L" + using inf.sup_left_isotone n_L_T_meet_mult n_L_T_meet_mult_propagate by auto + also have "... \ y * (x\<^sup>\ * bot \ n(x\<^sup>\) * top) \ L" + using inf.sup_left_isotone mult_right_isotone n_split_omega_sup_zero by auto + also have "... = (y * x\<^sup>\ * bot \ L) \ (y * n(x\<^sup>\) * top \ L)" + using inf_sup_distrib2 mult_left_dist_sup mult_assoc by auto + also have "... \ (y * x\<^sup>\ * bot \ L) \ y * n(x\<^sup>\) * L" + using n_vector_meet_L sup_right_isotone by auto + also have "... \ y * x\<^sup>\ * bot \ y * n(x\<^sup>\) * L" + using sup_left_isotone by auto + finally show ?thesis + . 
+qed + +text \AACP Theorem 8.10\ + +lemma n_mult_omega_L_star_zero: + "y * x\<^sup>\ * bot \ n(y * x\<^sup>\) * L = y * x\<^sup>\ * bot \ y * n(x\<^sup>\) * L" + apply (rule order.antisym) + apply (simp add: n_mult_omega_L_below_zero) + by (smt sup_assoc sup_commute sup_bot_left sup_right_isotone mult_assoc mult_left_dist_sup n_n_L_split_n_L) + +text \AACP Theorem 8.11\ + +lemma n_mult_omega_L_star: + "y * x\<^sup>\ \ n(y * x\<^sup>\) * L = y * x\<^sup>\ \ y * n(x\<^sup>\) * L" + by (metis zero_right_mult_decreasing n_mult_omega_L_star_zero sup_relative_same_increasing) + +lemma n_mult_omega_L_below: + "n(y * x\<^sup>\) * L \ y * x\<^sup>\ \ y * n(x\<^sup>\) * L" + using sup_right_divisibility n_mult_omega_L_star by blast + +lemma n_omega_L_below_zero: + "n(x\<^sup>\) * L \ x * x\<^sup>\ * bot \ x * n(x\<^sup>\) * L" + by (metis omega_unfold n_mult_omega_L_below_zero) + +lemma n_omega_L_below: + "n(x\<^sup>\) * L \ x\<^sup>\ \ x * n(x\<^sup>\) * L" + by (metis omega_unfold n_mult_omega_L_below sup_left_isotone star.left_plus_below_circ order_trans) + +lemma n_omega_L_star_zero: + "x * x\<^sup>\ * bot \ n(x\<^sup>\) * L = x * x\<^sup>\ * bot \ x * n(x\<^sup>\) * L" + by (metis n_mult_omega_L_star_zero omega_unfold) + +text \AACP Theorem 8.8\ + +lemma n_omega_L_star: + "x\<^sup>\ \ n(x\<^sup>\) * L = x\<^sup>\ \ x * n(x\<^sup>\) * L" + by (metis star.circ_mult_upper_bound star.left_plus_below_circ bot_least n_omega_L_star_zero sup_relative_same_increasing) + +text \AACP Theorem 8.9\ + +lemma n_omega_L_star_zero_star: + "x\<^sup>\ * bot \ n(x\<^sup>\) * L = x\<^sup>\ * bot \ x\<^sup>\ * n(x\<^sup>\) * L" + by (metis n_mult_omega_L_star_zero star_mult_omega mult_assoc star.circ_transitive_equal) + +text \AACP Theorem 8.8\ + +lemma n_omega_L_star_star: + "x\<^sup>\ \ n(x\<^sup>\) * L = x\<^sup>\ \ x\<^sup>\ * n(x\<^sup>\) * L" + by (metis zero_right_mult_decreasing n_omega_L_star_zero_star sup_relative_same_increasing) + +lemma n_Omega_left_unfold: + "1 \ x * x\<^sup>\ 
= x\<^sup>\" + by (smt Omega_def sup_assoc sup_commute mult_assoc mult_left_dist_sup n_omega_L_star star.circ_left_unfold) + +lemma n_Omega_left_slide: + "(x * y)\<^sup>\ * x \ x * (y * x)\<^sup>\" +proof - + have "(x * y)\<^sup>\ * x \ x * y * n((x * y)\<^sup>\) * L \ (x * y)\<^sup>\ * x" + by (smt Omega_def sup_commute sup_left_isotone mult_assoc mult_right_dist_sup mult_right_isotone n_L_below_L n_omega_L_star) + also have "... \ x * (y * bot \ n(y * (x * y)\<^sup>\) * L) \ (x * y)\<^sup>\ * x" + by (metis mult_right_isotone n_n_L_split_n_L sup_commute sup_right_isotone mult_assoc) + also have "... = x * (y * x)\<^sup>\" + by (smt (verit, del_insts) le_supI1 star_slide Omega_def sup_assoc sup_commute le_iff_sup mult_assoc mult_isotone mult_left_dist_sup omega_slide star.circ_increasing star.circ_slide bot_least) + finally show ?thesis + . +qed + +lemma n_Omega_sup_1: + "(x \ y)\<^sup>\ = x\<^sup>\ * (y * x\<^sup>\)\<^sup>\" +proof - + have 1: "(x \ y)\<^sup>\ = n((x\<^sup>\ * y)\<^sup>\) * L \ n((x\<^sup>\ * y)\<^sup>\ * x\<^sup>\) * L \ (x\<^sup>\ * y)\<^sup>\ * x\<^sup>\" + by (simp add: Omega_def omega_decompose semiring.distrib_right star.circ_sup_9 n_dist_omega_star) + have "n((x\<^sup>\ * y)\<^sup>\) * L \ (x\<^sup>\ * y)\<^sup>\ \ x\<^sup>\ * (y * n((x\<^sup>\ * y)\<^sup>\) * L)" + by (metis n_omega_L_below mult_assoc) + also have "... \ (x\<^sup>\ * y)\<^sup>\ \ x\<^sup>\ * y * bot \ x\<^sup>\ * n((y * x\<^sup>\)\<^sup>\) * L" + by (smt sup_assoc sup_right_isotone mult_assoc mult_left_dist_sup mult_right_isotone n_n_L_split_n_L omega_slide) + also have "... = (x\<^sup>\ * y)\<^sup>\ \ x\<^sup>\ * n((y * x\<^sup>\)\<^sup>\) * L" + by (metis sup_commute le_iff_sup star.circ_sub_dist_1 zero_right_mult_decreasing) + also have "... \ x\<^sup>\ * (y * x\<^sup>\)\<^sup>\ \ x\<^sup>\ * n((y * x\<^sup>\)\<^sup>\) * L" + by (metis star_outer_increasing star_slide star_star_absorb sup_left_isotone) + also have "... 
\ x\<^sup>\ * (y * x\<^sup>\)\<^sup>\" + by (metis Omega_def sup_commute mult_assoc mult_left_dist_sup mult_right_isotone n_Omega_isotone n_star_below_Omega) + also have "... \ x\<^sup>\ * (y * x\<^sup>\)\<^sup>\" + by (simp add: mult_left_isotone n_star_below_Omega) + finally have 2: "n((x\<^sup>\ * y)\<^sup>\) * L \ x\<^sup>\ * (y * x\<^sup>\)\<^sup>\" + . + have "n((x\<^sup>\ * y)\<^sup>\ * x\<^sup>\) * L \ n(x\<^sup>\) * L \ x\<^sup>\ * (y * x\<^sup>\)\<^sup>\ \ x\<^sup>\ * (y * x\<^sup>\)\<^sup>\ * y * n(x\<^sup>\) * L" + by (smt sup_assoc sup_commute mult_left_one mult_right_dist_sup n_mult_omega_L_below star.circ_mult star.circ_slide) + also have "... = n(x\<^sup>\) * L * (y * x\<^sup>\)\<^sup>\ \ x\<^sup>\ * (y * x\<^sup>\)\<^sup>\" + by (smt Omega_def sup_assoc mult_L_sup_star mult_assoc mult_left_dist_sup L_mult_star) + also have "... \ x\<^sup>\ * (y * x\<^sup>\)\<^sup>\" + by (simp add: Omega_def mult_isotone) + finally have 3: "n((x\<^sup>\ * y)\<^sup>\ * x\<^sup>\) * L \ x\<^sup>\ * (y * x\<^sup>\)\<^sup>\" + . 
+ have "(x\<^sup>\ * y)\<^sup>\ * x\<^sup>\ \ x\<^sup>\ * (y * x\<^sup>\)\<^sup>\" + by (metis star_slide mult_isotone mult_right_isotone n_star_below_Omega order_trans star_isotone) + hence 4: "(x \ y)\<^sup>\ \ x\<^sup>\ * (y * x\<^sup>\)\<^sup>\" + using 1 2 3 by simp + have 5: "x\<^sup>\ * (y * x\<^sup>\)\<^sup>\ \ n(x\<^sup>\) * L \ x\<^sup>\ * n((y * x\<^sup>\)\<^sup>\) * L \ x\<^sup>\ * (y * x\<^sup>\)\<^sup>\" + by (smt Omega_def sup_assoc sup_left_isotone mult_assoc mult_left_dist_sup mult_right_dist_sup mult_right_isotone n_L_below_L) + have "n(x\<^sup>\) * L \ n((x\<^sup>\ * y)\<^sup>\ * x\<^sup>\) * L" + by (metis sup_commute sup_ge1 mult_left_isotone n_isotone star.circ_loop_fixpoint) + hence 6: "n(x\<^sup>\) * L \ (x \ y)\<^sup>\" + using 1 order_lesseq_imp by fastforce + have "x\<^sup>\ * n((y * x\<^sup>\)\<^sup>\) * L \ x\<^sup>\ * n((y * x\<^sup>\)\<^sup>\ \ (y * x\<^sup>\)\<^sup>\ * y * n(x\<^sup>\) * L) * L" + by (metis Omega_def mult_L_sup_omega_below mult_assoc mult_left_dist_sup mult_left_isotone mult_right_isotone n_isotone) + also have "... \ x\<^sup>\ * bot \ n(x\<^sup>\ * ((y * x\<^sup>\)\<^sup>\ \ (y * x\<^sup>\)\<^sup>\ * y * n(x\<^sup>\) * L)) * L" + by (simp add: n_n_L_split_n_L) + also have "... \ x\<^sup>\ \ n((x\<^sup>\ * y)\<^sup>\ \ x\<^sup>\ * (y * x\<^sup>\)\<^sup>\ * y * n(x\<^sup>\) * L) * L" + using omega_slide semiring.distrib_left sup_mono zero_right_mult_decreasing mult_assoc by fastforce + also have "... \ x\<^sup>\ \ n((x\<^sup>\ * y)\<^sup>\ \ (x\<^sup>\ * y)\<^sup>\ * n(x\<^sup>\) * L) * L" + by (smt sup_right_divisibility sup_right_isotone mult_left_isotone n_isotone star.circ_mult) + also have "... \ x\<^sup>\ \ n((x \ y)\<^sup>\) * L" + by (metis sup_right_isotone mult_assoc mult_left_isotone mult_right_isotone n_L_decreasing n_isotone omega_decompose) + also have "... 
\ (x \ y)\<^sup>\" + by (simp add: Omega_def le_supI1 star_isotone sup_commute) + finally have 7: "x\<^sup>\ * n((y * x\<^sup>\)\<^sup>\) * L \ (x \ y)\<^sup>\" + . + have "x\<^sup>\ * (y * x\<^sup>\)\<^sup>\ \ (x\<^sup>\ * y)\<^sup>\ * x\<^sup>\ \ (x\<^sup>\ * y)\<^sup>\ * n(x\<^sup>\) * L" + by (smt Omega_def sup_right_isotone mult_L_sup_star mult_assoc mult_left_dist_sup mult_left_isotone star.left_plus_below_circ star_slide) + also have "... \ (x\<^sup>\ * y)\<^sup>\ * x\<^sup>\ \ n((x\<^sup>\ * y)\<^sup>\ * x\<^sup>\) * L" + by (simp add: n_mult_omega_L_star) + also have "... \ (x \ y)\<^sup>\" + by (smt Omega_def sup_commute sup_right_isotone mult_left_isotone n_right_upper_bound omega_decompose star.circ_sup) + finally have "n(x\<^sup>\) * L \ x\<^sup>\ * n((y * x\<^sup>\)\<^sup>\) * L \ x\<^sup>\ * (y * x\<^sup>\)\<^sup>\ \ (x \ y)\<^sup>\" + using 6 7 by simp + hence "x\<^sup>\ * (y * x\<^sup>\)\<^sup>\ \ (x \ y)\<^sup>\" + using 5 order.trans by blast + thus ?thesis + using 4 order.antisym by blast +qed + +end + +sublocale n_omega_algebra < nL_omega: left_zero_conway_semiring where circ = Omega + apply unfold_locales + apply (simp add: n_Omega_left_unfold) + apply (simp add: n_Omega_left_slide) + by (simp add: n_Omega_sup_1) + +(* circ_plus_same does not hold in the non-strict model using Omega *) + +context n_omega_algebra +begin + +text \AACP Theorem 8.16\ + +lemma omega_apx_isotone: + assumes "x \ y" + shows "x\<^sup>\ \ y\<^sup>\" +proof - + have 1: "x \ y \ L \ C y \ x \ n(x) * top" + using assms apx_def by auto + have "n(x) * top \ x * (x\<^sup>\ \ n(x\<^sup>\) * top) \ n(x) * top \ x\<^sup>\ \ n(x\<^sup>\) * top" + by (metis le_supI n_split_top sup.cobounded1 sup_assoc mult_assoc mult_left_dist_sup sup_right_isotone omega_unfold) + also have "... 
\ x\<^sup>\ \ n(x\<^sup>\) * top" + by (metis sup_commute sup_right_isotone mult_left_isotone n_below_n_omega sup_assoc sup_idem) + finally have 2: "x\<^sup>\ * n(x) * top \ x\<^sup>\ \ n(x\<^sup>\) * top" + using star_left_induct mult_assoc by auto + have "C (y\<^sup>\) = (C y)\<^sup>\" + by (simp add: C_omega_export) + also have "... \ (x \ n(x) * top)\<^sup>\" + using 1 omega_isotone by blast + also have "... = (x\<^sup>\ * n(x) * top)\<^sup>\ \ (x\<^sup>\ * n(x) * top)\<^sup>\ * x\<^sup>\" + by (simp add: omega_decompose mult_assoc) + also have "... \ x\<^sup>\ * n(x) * top \ (x\<^sup>\ * n(x) * top)\<^sup>\ * x\<^sup>\" + using mult_top_omega sup_left_isotone by blast + also have "... = x\<^sup>\ * n(x) * top \ (1 \ x\<^sup>\ * n(x) * top * (x\<^sup>\ * n(x) * top)\<^sup>\) * x\<^sup>\" + by (simp add: star_left_unfold_equal) + also have "... \ x\<^sup>\ \ x\<^sup>\ * n(x) * top" + by (smt sup_mono sup_least mult_assoc mult_left_one mult_right_dist_sup mult_right_isotone order_refl top_greatest sup.cobounded2) + also have "... \ x\<^sup>\ \ n(x\<^sup>\) * top" + using 2 by simp + finally have 3: "C (y\<^sup>\) \ x\<^sup>\ \ n(x\<^sup>\) * top" + . + have "x\<^sup>\ \ (y \ L)\<^sup>\" + using 1 omega_isotone by simp + also have "... = (y\<^sup>\ * L)\<^sup>\ \ (y\<^sup>\ * L)\<^sup>\ * y\<^sup>\" + by (simp add: omega_decompose) + also have "... = y\<^sup>\ * L * (y\<^sup>\ * L)\<^sup>\ \ (y\<^sup>\ * L)\<^sup>\ * y\<^sup>\" + using omega_unfold by auto + also have "... \ y\<^sup>\ * L \ (y\<^sup>\ * L)\<^sup>\ * y\<^sup>\" + by (metis sup_left_isotone n_L_below_L mult_assoc mult_right_isotone) + also have "... = y\<^sup>\ * L \ (1 \ y\<^sup>\ * L * (y\<^sup>\ * L)\<^sup>\) * y\<^sup>\" + by (simp add: star_left_unfold_equal) + also have "... \ y\<^sup>\ * L \ y\<^sup>\" + by (simp add: mult_L_star_mult_below star_left_unfold_equal sup_commute) + also have "... 
\ y\<^sup>\ * bot \ L \ y\<^sup>\" + using n_L_split_L sup_left_isotone by auto + finally have "x\<^sup>\ \ y\<^sup>\ \ L" + by (simp add: star_bot_below_omega sup.absorb1 sup.left_commute sup_commute) + thus "x\<^sup>\ \ y\<^sup>\" + using 3 by (simp add: apx_def) +qed + +lemma combined_apx_left_isotone: + "x \ y \ n(x\<^sup>\) * L \ x\<^sup>\ * z \ n(y\<^sup>\) * L \ y\<^sup>\ * z" + by (simp add: mult_apx_isotone n_L_apx_isotone star.circ_apx_isotone sup_apx_isotone omega_apx_isotone) + +lemma combined_apx_left_isotone_2: + "x \ y \ (x\<^sup>\ \ L) \ x\<^sup>\ * z \ (y\<^sup>\ \ L) \ y\<^sup>\ * z" + by (metis sup_apx_isotone mult_apx_left_isotone omega_apx_isotone star.circ_apx_isotone meet_L_apx_isotone) + +lemma combined_apx_right_isotone: + "y \ z \ n(x\<^sup>\) * L \ x\<^sup>\ * y \ n(x\<^sup>\) * L \ x\<^sup>\ * z" + by (simp add: mult_apx_isotone sup_apx_left_isotone sup_commute) + +lemma combined_apx_right_isotone_2: + "y \ z \ (x\<^sup>\ \ L) \ x\<^sup>\ * y \ (x\<^sup>\ \ L) \ x\<^sup>\ * z" + by (simp add: mult_apx_right_isotone sup_apx_right_isotone) + +lemma combined_apx_isotone: + "x \ y \ w \ z \ n(x\<^sup>\) * L \ x\<^sup>\ * w \ n(y\<^sup>\) * L \ y\<^sup>\ * z" + by (simp add: mult_apx_isotone n_L_apx_isotone star.circ_apx_isotone sup_apx_isotone omega_apx_isotone) + +lemma combined_apx_isotone_2: + "x \ y \ w \ z \ (x\<^sup>\ \ L) \ x\<^sup>\ * w \ (y\<^sup>\ \ L) \ y\<^sup>\ * z" + by (meson combined_apx_left_isotone_2 combined_apx_right_isotone_2 apx.order.trans) + +lemma n_split_nu_mu: + "C (y\<^sup>\ \ y\<^sup>\ * z) \ y\<^sup>\ * z \ n(y\<^sup>\ \ y\<^sup>\ * z) * top" +proof - + have "C (y\<^sup>\ \ y\<^sup>\ * z) \ C (y\<^sup>\) \ y\<^sup>\ * z" + by (simp add: inf_sup_distrib1 le_supI1 sup_commute) + also have "... \ y\<^sup>\ * bot \ n(y\<^sup>\) * top \ y\<^sup>\ * z" + using n_split_omega_sup_zero sup_left_isotone by blast + also have "... 
\ y\<^sup>\ * z \ n(y\<^sup>\ \ y\<^sup>\ * z) * top" + using le_supI1 mult_left_isotone mult_right_isotone n_left_upper_bound sup_right_isotone by force + finally show ?thesis + . +qed + +lemma n_split_nu_mu_2: + "C (y\<^sup>\ \ y\<^sup>\ * z) \ y\<^sup>\ * z \ ((y\<^sup>\ \ y\<^sup>\ * z) \ L) \ n(y\<^sup>\ \ y\<^sup>\ * z) * top" +proof - + have "C (y\<^sup>\ \ y\<^sup>\ * z) \ C (y\<^sup>\) \ y\<^sup>\ * z" + using inf.sup_left_isotone sup_inf_distrib2 by auto + also have "... \ y\<^sup>\ * bot \ n(y\<^sup>\) * top \ y\<^sup>\ * z" + using n_split_omega_sup_zero sup_left_isotone by blast + also have "... \ y\<^sup>\ * z \ n(y\<^sup>\ \ y\<^sup>\ * z) * top" + using le_supI1 mult_left_isotone mult_right_isotone n_left_upper_bound semiring.add_left_mono by auto + finally show ?thesis + using order_lesseq_imp semiring.add_right_mono sup.cobounded1 by blast +qed + +lemma loop_exists: + "C (\ (\x . y * x \ z)) \ \ (\x . y * x \ z) \ n(\ (\x . y * x \ z)) * top" + using omega_loop_nu star_loop_mu n_split_nu_mu by auto + +lemma loop_exists_2: + "C (\ (\x . y * x \ z)) \ \ (\x . y * x \ z) \ (\ (\x . y * x \ z) \ L) \ n(\ (\x . y * x \ z)) * top" + by (simp add: omega_loop_nu star_loop_mu n_split_nu_mu_2) + +lemma loop_apx_least_fixpoint: + "apx.is_least_fixpoint (\x . y * x \ z) (\ (\x . y * x \ z) \ n(\ (\x . y * x \ z)) * L)" +proof - + have "kappa_mu_nu_L (\x . y * x \ z)" + by (metis affine_apx_isotone loop_exists affine_has_greatest_fixpoint affine_has_least_fixpoint affine_isotone nu_below_mu_nu_L_def nu_below_mu_nu_L_kappa_mu_nu_L) + thus ?thesis + using apx.least_fixpoint_char kappa_mu_nu_L_def by force +qed + +lemma loop_apx_least_fixpoint_2: + "apx.is_least_fixpoint (\x . y * x \ z) (\ (\x . y * x \ z) \ (\ (\x . y * x \ z) \ L))" +proof - + have "kappa_mu_nu (\x . 
y * x \ z)" + by (metis affine_apx_isotone affine_has_greatest_fixpoint affine_has_least_fixpoint affine_isotone loop_exists_2 nu_below_mu_nu_def nu_below_mu_nu_kappa_mu_nu) + thus ?thesis + using apx.least_fixpoint_char kappa_mu_nu_def by force +qed + +lemma loop_has_apx_least_fixpoint: + "apx.has_least_fixpoint (\x . y * x \ z)" + using apx.least_fixpoint_char loop_apx_least_fixpoint by blast + +lemma loop_semantics: + "\ (\x . y * x \ z) = \ (\x . y * x \ z) \ n(\ (\x . y * x \ z)) * L" + using apx.least_fixpoint_char loop_apx_least_fixpoint by force + +lemma loop_semantics_2: + "\ (\x . y * x \ z) = \ (\x . y * x \ z) \ (\ (\x . y * x \ z) \ L)" + using apx.least_fixpoint_char loop_apx_least_fixpoint_2 by force + +text \AACP Theorem 8.15\ + +lemma loop_semantics_kappa_mu_nu: + "\ (\x . y * x \ z) = n(y\<^sup>\) * L \ y\<^sup>\ * z" +proof - + have "\ (\x . y * x \ z) = y\<^sup>\ * z \ n(y\<^sup>\ \ y\<^sup>\ * z) * L" + using apx.least_fixpoint_char omega_loop_nu star_loop_mu loop_apx_least_fixpoint by auto + thus ?thesis + by (smt n_dist_omega_star sup_assoc mult_right_dist_sup sup_commute le_iff_sup n_L_decreasing) +qed + +text \AACP Theorem 8.15\ + +lemma loop_semantics_kappa_mu_nu_2: + "\ (\x . y * x \ z) = (y\<^sup>\ \ L) \ y\<^sup>\ * z" +proof - + have "\ (\x . y * x \ z) = y\<^sup>\ * z \ ((y\<^sup>\ \ y\<^sup>\ * z) \ L)" + using apx.least_fixpoint_char omega_loop_nu star_loop_mu loop_apx_least_fixpoint_2 by auto + thus ?thesis + by (metis sup_absorb2 sup_ge2 sup_inf_distrib1 sup_monoid.add_commute) +qed + +text \AACP Theorem 8.16\ + +lemma loop_semantics_apx_left_isotone: + "w \ y \ \ (\x . w * x \ z) \ \ (\x . y * x \ z)" + by (metis loop_semantics_kappa_mu_nu_2 combined_apx_left_isotone_2) + +text \AACP Theorem 8.16\ + +lemma loop_semantics_apx_right_isotone: + "w \ z \ \ (\x . y * x \ w) \ \ (\x . y * x \ z)" + by (metis loop_semantics_kappa_mu_nu_2 combined_apx_right_isotone_2) + +lemma loop_semantics_apx_isotone: + "v \ y \ w \ z \ \ (\x . 
v * x \<squnion> w) \<sqsubseteq> \<kappa> (\<lambda>x . y * x \<squnion> z)" + using apx_transitive_2 loop_semantics_apx_left_isotone loop_semantics_apx_right_isotone by blast + +end + +end + diff --git a/thys/Correctness_Algebras/N_Omega_Binary_Iterings.thy b/thys/Correctness_Algebras/N_Omega_Binary_Iterings.thy new file mode 100644 --- /dev/null +++ b/thys/Correctness_Algebras/N_Omega_Binary_Iterings.thy @@ -0,0 +1,477 @@ +(* Title: N-Omega Binary Iterings + Author: Walter Guttmann + Maintainer: Walter Guttmann +*) + +section \<open>N-Omega Binary Iterings\<close> + +theory N_Omega_Binary_Iterings + +imports N_Omega_Algebras Binary_Iterings_Strict + +begin + +sublocale extended_binary_itering < left_zero_conway_semiring where circ = "(\<lambda>x . x \<star> 1)" + apply unfold_locales + using while_left_unfold apply force + apply (metis mult_1_right while_one_mult_below while_slide) + by (simp add: while_one_while while_sumstar_2) + +class binary_itering_apx = bounded_binary_itering + n_algebra_apx +begin + +lemma C_while_import: + "C (x \<star> z) = C (C x \<star> z)" +proof - + have 1: "C x * (x \<star> z) \<le> C x \<star> (C x * z)" + using C_mult_propagate while_simulate by force + have "C (x \<star> z) = C z \<squnion> C x * (x \<star> z)" + by (metis inf_sup_distrib1 n_L_T_meet_mult while_left_unfold) + also have "... \<le> C x \<star> z" + using 1 by (metis C_decreasing sup_mono while_right_unfold) + finally have "C (x \<star> z) \<le> C (C x \<star> z)" + by simp + thus ?thesis + by (metis order.antisym inf.boundedI inf.cobounded1 inf.coboundedI2 inf.sup_monoid.add_commute while_absorb_2 while_increasing) +qed + +lemma C_while_preserve: + "C (x \<star> z) = C (x \<star> C z)" +proof - + have "C x * (x \<star> z) \<le> C x \<star> (C x * z)" + using C_mult_propagate while_simulate by auto + also have "... \<le> x \<star> (x * C z)" + using C_decreasing n_L_T_meet_mult_propagate while_isotone by blast + finally have 1: "C x * (x \<star> z) \<le> x \<star> (x * C z)" + . + have "C (x \<star> z) = C z \<squnion> C x * (x \<star> z)" + by (metis inf_sup_distrib1 n_L_T_meet_mult while_left_unfold) + also have "...
\ x \ C z" + using 1 by (meson order.trans le_supI while_increasing while_right_plus_below) + finally have "C (x \ z) \ C (x \ C z)" + by simp + thus ?thesis + by (meson order.antisym inf.boundedI inf.cobounded1 inf.coboundedI2 inf.eq_refl while_isotone) +qed + +lemma C_while_import_preserve: + "C (x \ z) = C (C x \ C z)" + using C_while_import C_while_preserve by auto + +lemma while_L_L: + "L \ L = L" + by (metis n_L_top_L while_mult_star_exchange while_right_top) + +lemma while_L_below_sup: + "L \ x \ x \ L" + by (metis while_left_unfold sup_right_isotone n_L_below_L) + +lemma while_L_split: + "x \ L \ (x \ y) \ L" +proof - + have "x \ L \ (x \ bot) \ L" + by (metis sup_commute sup_bot_left mult_1_right n_L_split_L while_right_unfold while_simulate_left_plus while_zero) + thus ?thesis + by (metis sup_commute sup_right_isotone order_trans while_right_isotone bot_least) +qed + +lemma while_n_while_top_split: + "x \ (n(x \ y) * top) \ (x \ bot) \ n(x \ y) * top" +proof - + have "x * n(x \ y) * top \ x * bot \ n(x * (x \ y)) * top" + by (simp add: n_n_top_split_n_top) + also have "... \ n(x \ y) * top \ x * bot" + by (metis sup_commute sup_right_isotone mult_left_isotone n_isotone while_left_plus_below) + finally have "x \ (n(x \ y) * top) \ n(x \ y) * top \ (x \ (x * bot))" + by (metis mult_assoc mult_1_right while_simulate_left mult_left_zero while_left_top) + also have "... \ (x \ bot) \ n(x \ y) * top" + using sup_left_isotone while_right_plus_below by auto + finally show ?thesis + . +qed + +lemma circ_apx_right_isotone: + assumes "x \ y" + shows "z \ x \ z \ y" +proof - + have 1: "x \ y \ L \ C y \ x \ n(x) * top" + using assms apx_def by auto + hence "z \ x \ (z \ y) \ (z \ L)" + by (metis while_left_dist_sup while_right_isotone) + hence 2: "z \ x \ (z \ y) \ L" + by (meson le_supI order_lesseq_imp sup.cobounded1 while_L_split) + have "z \ (n(z \ x) * top) \ (z \ bot) \ n(z \ x) * top" + by (simp add: while_n_while_top_split) + also have "... 
\ (z \ x) \ n(z \ x) * top" + using sup_left_isotone while_right_isotone by force + finally have 3: "z \ (n(x) * top) \ (z \ x) \ n(z \ x) * top" + by (metis mult_left_isotone n_isotone order_trans while_increasing while_right_isotone) + have "C (z \ y) \ z \ C y" + by (metis C_while_preserve inf.cobounded2) + also have "... \ (z \ x) \ (z \ (n(x) * top))" + using 1 by (metis while_left_dist_sup while_right_isotone) + also have "... \ (z \ x) \ n(z \ x) * top" + using 3 by simp + finally show ?thesis + using 2 apx_def by auto +qed + +end + +class extended_binary_itering_apx = binary_itering_apx + bounded_extended_binary_itering + + assumes n_below_while_zero: "n(x) \ n(x \ bot)" +begin + +lemma circ_apx_right_isotone: + assumes "x \ y" + shows "x \ z \ y \ z" +proof - + have 1: "x \ y \ L \ C y \ x \ n(x) * top" + using assms apx_def by auto + hence "x \ z \ ((y \ 1) * L) \ (y \ z)" + by (metis while_left_isotone while_sumstar_3) + also have "... \ (y \ z) \ (y \ 1) * L" + by (metis while_productstar sup_right_isotone mult_right_isotone n_L_below_L while_slide) + also have "... \ (y \ z) \ L" + by (meson order.trans le_supI sup.cobounded1 while_L_split while_one_mult_below) + finally have 2: "x \ z \ (y \ z) \ L" + . + have "C (y \ z) \ C y \ z" + by (metis C_while_import inf.sup_right_divisibility) + also have "... \ ((x \ 1) * n(x) * top) \ (x \ z)" + using 1 by (metis while_left_isotone mult_assoc while_sumstar_3) + also have "... \ (x \ z) \ (x \ 1) * n(x) * top" + by (metis while_productstar sup_left_top sup_right_isotone mult_assoc mult_left_sub_dist_sup_right while_slide) + also have "... \ (x \ z) \ (x \ (n(x) * top))" + using sup_right_isotone while_one_mult_below mult_assoc by auto + also have "... \ (x \ z) \ (x \ (n(x \ z) * top))" + by (metis n_below_while_zero bot_least while_right_isotone n_isotone mult_left_isotone sup_right_isotone order_trans) + also have "... 
\ (x \ z) \ n(x \ z) * top" + by (metis sup_assoc sup_right_isotone while_n_while_top_split sup_bot_right while_left_dist_sup) + finally show ?thesis + using 2 apx_def by auto +qed + +(* +lemma while_top: "top \ x = L \ top * x" oops +lemma while_one_top: "1 \ x = L \ x" oops +lemma while_unfold_below_1: "x = y * x \ x \ y \ 1" oops + +lemma while_square_1: "x \ 1 = (x * x) \ (x \ 1)" oops +lemma while_absorb_below_one: "y * x \ x \ y \ x \ 1 \ x" oops +lemma while_mult_L: "(x * L) \ z = z \ x * L" oops +lemma tarski_top_omega_below_2: "x * L \ (x * L) \ bot" oops +lemma tarski_top_omega_2: "x * L = (x * L) \ bot" oops +lemma while_separate_right_plus: "y * x \ x * (x \ (1 \ y)) \ 1 \ y \ (x \ z) \ x \ (y \ z)" oops +lemma "y \ (x \ 1) \ x \ (y \ 1) \ (x \ y) \ 1 = x \ (y \ 1)" oops +lemma "y * x \ (1 \ x) * (y \ 1) \ (x \ y) \ 1 = x \ (y \ 1)" oops +*) + +end + +class n_omega_algebra_binary = n_omega_algebra + while + + assumes while_def: "x \ y = n(x\<^sup>\) * L \ x\<^sup>\ * y" +begin + +lemma while_omega_inf_L_star: + "x \ y = (x\<^sup>\ \ L) \ x\<^sup>\ * y" + by (metis loop_semantics_kappa_mu_nu loop_semantics_kappa_mu_nu_2 while_def) + +lemma while_one_mult_while_below_1: + "(y \ 1) * (y \ v) \ y \ v" +proof - + have "(y \ 1) * (y \ v) \ y \ (y \ v)" + by (smt sup_left_isotone mult_assoc mult_right_dist_sup mult_right_isotone n_L_below_L while_def mult_left_one) + also have "... = n(y\<^sup>\) * L \ y\<^sup>\ * n(y\<^sup>\) * L \ y\<^sup>\ * y\<^sup>\ * v" + by (simp add: mult_left_dist_sup sup_assoc while_def mult_assoc) + also have "... = n(y\<^sup>\) * L \ (y\<^sup>\ * y\<^sup>\ * bot \ y\<^sup>\ * n(y\<^sup>\) * L) \ y\<^sup>\ * y\<^sup>\ * v" + by (metis mult_left_dist_sup star.circ_transitive_equal sup_bot_left mult_assoc) + also have "... = n(y\<^sup>\) * L \ (y\<^sup>\ * y\<^sup>\ * bot \ n(y\<^sup>\ * y\<^sup>\) * L) \ y\<^sup>\ * y\<^sup>\ * v" + by (simp add: n_mult_omega_L_star_zero) + also have "... 
= n(y\<^sup>\) * L \ n(y\<^sup>\ * y\<^sup>\) * L \ y\<^sup>\ * y\<^sup>\ * v" + by (smt (z3) mult_left_dist_sup sup.left_commute sup_bot_left sup_commute) + finally show ?thesis + by (simp add: star.circ_transitive_equal star_mult_omega while_def) +qed + +lemma star_below_while: + "x\<^sup>\ * y \ x \ y" + by (simp add: while_def) + +subclass bounded_binary_itering +proof unfold_locales + fix x y z + have "z \ x * ((y * x) \ (y * z)) = x * (y * x)\<^sup>\ * y * z \ x * n((y * x)\<^sup>\) * L \ z" + using mult_left_dist_sup sup_commute while_def mult_assoc by auto + also have "... = x * (y * x)\<^sup>\ * y * z \ n(x * (y * x)\<^sup>\) * L \ z" + by (metis mult_assoc mult_right_isotone bot_least n_mult_omega_L_star_zero sup_relative_same_increasing) + also have "... = (x * y)\<^sup>\ * z \ n(x * (y * x)\<^sup>\) * L" + by (smt sup_assoc sup_commute mult_assoc star.circ_loop_fixpoint star_slide) + also have "... = (x * y) \ z" + by (simp add: omega_slide sup_monoid.add_commute while_def) + finally show "(x * y) \ z = z \ x * ((y * x) \ (y * z))" + by simp +next + fix x y z + have "(x \ y) \ (x \ z) = n((n(x\<^sup>\) * L \ x\<^sup>\ * y)\<^sup>\) * L \ (n(x\<^sup>\) * L \ x\<^sup>\ * y)\<^sup>\ * (x \ z)" + by (simp add: while_def) + also have "... = n((x\<^sup>\ * y)\<^sup>\ \ (x\<^sup>\ * y)\<^sup>\ * n(x\<^sup>\) * L) * L \ ((x\<^sup>\ * y)\<^sup>\ \ (x\<^sup>\ * y)\<^sup>\ * n(x\<^sup>\) * L) * (x \ z)" + using mult_L_sup_omega mult_L_sup_star by force + also have "... = n((x\<^sup>\ * y)\<^sup>\) * L \ n((x\<^sup>\ * y)\<^sup>\ * n(x\<^sup>\) * L) * L \ (x\<^sup>\ * y)\<^sup>\ * (x \ z) \ (x\<^sup>\ * y)\<^sup>\ * n(x\<^sup>\) * L * (x \ z)" + by (simp add: mult_right_dist_sup n_dist_omega_star sup_assoc mult_assoc) + also have "... 
= n((x\<^sup>\ * y)\<^sup>\) * L \ n((x\<^sup>\ * y)\<^sup>\ * n(x\<^sup>\) * L) * L \ (x\<^sup>\ * y)\<^sup>\ * bot \ (x\<^sup>\ * y)\<^sup>\ * (x \ z) \ (x\<^sup>\ * y)\<^sup>\ * n(x\<^sup>\) * L * (x \ z)" + by (smt sup_assoc sup_bot_left mult_left_dist_sup) + also have "... = n((x\<^sup>\ * y)\<^sup>\) * L \ ((x\<^sup>\ * y)\<^sup>\ * n(x\<^sup>\) * L * (x \ z) \ (x\<^sup>\ * y)\<^sup>\ * n(x\<^sup>\) * L \ (x\<^sup>\ * y)\<^sup>\ * (x \ z))" + by (smt n_n_L_split_n_n_L_L sup_commute sup_assoc) + also have "... = n((x\<^sup>\ * y)\<^sup>\) * L \ ((x\<^sup>\ * y)\<^sup>\ * n(x\<^sup>\) * L \ (x\<^sup>\ * y)\<^sup>\ * (x \ z))" + by (smt mult_L_omega omega_sub_vector le_iff_sup) + also have "... = n((x\<^sup>\ * y)\<^sup>\) * L \ (x\<^sup>\ * y)\<^sup>\ * (x \ z)" + using mult_left_sub_dist_sup_left sup_absorb2 while_def mult_assoc by auto + also have "... = (x\<^sup>\ * y)\<^sup>\ * x\<^sup>\ * z \ (x\<^sup>\ * y)\<^sup>\ * n(x\<^sup>\) * L \ n((x\<^sup>\ * y)\<^sup>\) * L" + by (simp add: mult_left_dist_sup sup_commute while_def mult_assoc) + also have "... = (x\<^sup>\ * y)\<^sup>\ * x\<^sup>\ * z \ n((x\<^sup>\ * y)\<^sup>\ * x\<^sup>\) * L \ n((x\<^sup>\ * y)\<^sup>\) * L" + by (metis sup_bot_right mult_left_dist_sup sup_assoc n_mult_omega_L_star_zero) + also have "... 
= (x \ y) \ z" + using n_dist_omega_star omega_decompose semiring.combine_common_factor star.circ_sup_9 sup_commute while_def by force + finally show "(x \ y) \ z = (x \ y) \ (x \ z)" + by simp +next + fix x y z + show "x \ (y \ z) = (x \ y) \ (x \ z)" + using semiring.distrib_left sup_assoc sup_commute while_def by force +next + fix x y z + show "(x \ y) * z \ x \ (y * z)" + by (smt sup_left_isotone mult_assoc mult_right_dist_sup mult_right_isotone n_L_below_L while_def) +next + fix v w x y z + show "x * z \ z * (y \ 1) \ w \ x \ (z * v) \ z * (y \ v) \ (x \ (w * (y \ v)))" + proof + assume 1: "x * z \ z * (y \ 1) \ w" + have "z * v \ x * (z * (y \ v) \ x\<^sup>\ * (w * (y \ v))) \ z * v \ x * z * (y \ v) \ x\<^sup>\ * (w * (y \ v))" + by (metis sup_assoc sup_right_isotone mult_assoc mult_left_dist_sup mult_left_isotone star.left_plus_below_circ) + also have "... \ z * v \ z * (y \ 1) * (y \ v) \ w * (y \ v) \ x\<^sup>\ * (w * (y \ v))" + using 1 by (metis sup_assoc sup_left_isotone sup_right_isotone mult_left_isotone mult_right_dist_sup) + also have "... \ z * v \ z * (y \ v) \ x\<^sup>\ * (w * (y \ v))" + by (smt (verit, ccfv_threshold) sup_ge2 le_iff_sup mult_assoc mult_left_dist_sup star.circ_loop_fixpoint while_one_mult_while_below_1 le_supE le_supI) + also have "... 
= z * (y \ v) \ x\<^sup>\ * (w * (y \ v))" + by (metis le_iff_sup le_supE mult_right_isotone star.circ_loop_fixpoint star_below_while) + finally have "x\<^sup>\ * z * v \ z * (y \ v) \ x\<^sup>\ * (w * (y \ v))" + using star_left_induct mult_assoc by auto + thus "x \ (z * v) \ z * (y \ v) \ (x \ (w * (y \ v)))" + by (smt sup_assoc sup_commute sup_right_isotone mult_assoc while_def) + qed +next + fix v w x y z + show "z * x \ y * (y \ z) \ w \ z * (x \ v) \ y \ (z * v \ w * (x \ v))" + proof + assume "z * x \ y * (y \ z) \ w" + hence 1: "z * x \ y * y\<^sup>\ * z \ (y * n(y\<^sup>\) * L \ w)" + by (simp add: mult_left_dist_sup sup.left_commute sup_commute while_def mult_assoc) + hence "z * x\<^sup>\ \ y\<^sup>\ * (z \ (y * n(y\<^sup>\) * L \ w) * x\<^sup>\)" + by (simp add: star_circ_simulate_right_plus) + also have "... = y\<^sup>\ * z \ y\<^sup>\ * y * n(y\<^sup>\) * L \ y\<^sup>\ * w * x\<^sup>\" + by (simp add: L_mult_star semiring.distrib_left semiring.distrib_right sup_left_commute sup_monoid.add_commute mult_assoc) + also have "... = y\<^sup>\ * z \ n(y\<^sup>\ * y * y\<^sup>\) * L \ y\<^sup>\ * w * x\<^sup>\" + by (metis sup_relative_same_increasing mult_isotone n_mult_omega_L_star_zero star.left_plus_below_circ star.right_plus_circ bot_least) + also have "... = n(y\<^sup>\) * L \ y\<^sup>\ * z \ y\<^sup>\ * w * x\<^sup>\" + using omega_unfold star_mult_omega sup_commute mult_assoc by force + finally have "z * x\<^sup>\ * v \ n(y\<^sup>\) * L * v \ y\<^sup>\ * z * v \ y\<^sup>\ * w * x\<^sup>\ * v" + by (smt le_iff_sup mult_right_dist_sup) + also have "... \ n(y\<^sup>\) * L \ y\<^sup>\ * (z * v \ w * x\<^sup>\ * v)" + by (metis n_L_below_L mult_assoc mult_right_isotone sup_left_isotone mult_left_dist_sup sup_assoc) + also have "... 
\ n(y\<^sup>\) * L \ y\<^sup>\ * (z * v \ w * (x \ v))" + using mult_right_isotone semiring.add_left_mono mult_assoc star_below_while by auto + finally have 2: "z * x\<^sup>\ * v \ y \ (z * v \ w * (x \ v))" + by (simp add: while_def) + have 3: "y\<^sup>\ * y * y\<^sup>\ * bot \ y\<^sup>\ * w * x\<^sup>\" + by (metis sup_commute sup_bot_left mult_assoc mult_left_sub_dist_sup_left star.circ_loop_fixpoint star.circ_transitive_equal) + have "z * x\<^sup>\ \ y * y\<^sup>\ * z * x\<^sup>\ \ (y * n(y\<^sup>\) * L \ w) * x\<^sup>\" + using 1 by (metis mult_assoc mult_left_isotone mult_right_dist_sup omega_unfold) + hence "z * x\<^sup>\ \ y\<^sup>\ \ y\<^sup>\ * y * n(y\<^sup>\) * L * x\<^sup>\ \ y\<^sup>\ * w * x\<^sup>\" + by (smt sup_assoc sup_commute left_plus_omega mult_assoc mult_left_dist_sup mult_right_dist_sup omega_induct star.left_plus_circ) + also have "... \ y\<^sup>\ \ y\<^sup>\ * y * n(y\<^sup>\) * L \ y\<^sup>\ * w * x\<^sup>\" + by (metis sup_left_isotone sup_right_isotone mult_assoc mult_right_isotone n_L_below_L) + also have "... = y\<^sup>\ \ n(y\<^sup>\ * y * y\<^sup>\) * L \ y\<^sup>\ * w * x\<^sup>\" + using 3 by (smt sup_assoc sup_commute sup_relative_same_increasing n_mult_omega_L_star_zero) + also have "... = y\<^sup>\ \ y\<^sup>\ * w * x\<^sup>\" + by (metis mult_assoc omega_unfold star_mult_omega sup_commute le_iff_sup n_L_decreasing) + finally have "n(z * x\<^sup>\) * L \ n(y\<^sup>\) * L \ n(y\<^sup>\ * w * x\<^sup>\) * L" + by (metis mult_assoc mult_left_isotone mult_right_dist_sup n_dist_omega_star n_isotone) + also have "... \ n(y\<^sup>\) * L \ y\<^sup>\ * (w * (n(x\<^sup>\) * L \ x\<^sup>\ * bot))" + by (smt sup_commute sup_right_isotone mult_assoc mult_left_dist_sup n_mult_omega_L_below_zero) + also have "... \ n(y\<^sup>\) * L \ y\<^sup>\ * (w * (n(x\<^sup>\) * L \ x\<^sup>\ * v))" + by (metis sup_right_isotone mult_right_isotone bot_least) + also have "... 
\ n(y\<^sup>\) * L \ y\<^sup>\ * (z * v \ w * (n(x\<^sup>\) * L \ x\<^sup>\ * v))" + using mult_left_sub_dist_sup_right sup_right_isotone by auto + finally have 4: "n(z * x\<^sup>\) * L \ y \ (z * v \ w * (x \ v))" + using while_def by auto + have "z * (x \ v) = z * n(x\<^sup>\) * L \ z * x\<^sup>\ * v" + by (simp add: mult_left_dist_sup while_def mult_assoc) + also have "... = n(z * x\<^sup>\) * L \ z * x\<^sup>\ * v" + by (metis sup_commute sup_relative_same_increasing mult_right_isotone n_mult_omega_L_star_zero bot_least) + finally show "z * (x \ v) \ y \ (z * v \ w * (x \ v))" + using 2 4 by simp + qed +qed + +lemma while_top: + "top \ x = L \ top * x" + by (metis n_top_L star.circ_top star_omega_top while_def) + +lemma while_one_top: + "1 \ x = L \ x" + by (smt mult_left_one n_top_L omega_one star_one while_def) + +lemma while_finite_associative: + "x\<^sup>\ = bot \ (x \ y) * z = x \ (y * z)" + by (metis sup_bot_left mult_assoc n_zero_L_zero while_def) + +lemma while_while_one: + "y \ (x \ 1) = n(y\<^sup>\) * L \ y\<^sup>\ * n(x\<^sup>\) * L \ y\<^sup>\ * x\<^sup>\" + by (simp add: mult_left_dist_sup sup_assoc while_def mult_assoc) + +text \AACP Theorem 8.17\ + +subclass bounded_extended_binary_itering +proof unfold_locales + fix w x y z + have "w * (x \ y * z) = n(w * n(x\<^sup>\) * L) * L \ w * x\<^sup>\ * y * z" + by (smt sup_assoc sup_commute sup_bot_left mult_assoc mult_left_dist_sup n_n_L_split_n_n_L_L while_def) + also have "... \ n((w * n(x\<^sup>\) * L)\<^sup>\) * L \ w * x\<^sup>\ * y * z" + by (simp add: mult_L_omega) + also have "... \ n((w * (x \ y))\<^sup>\) * L \ w * x\<^sup>\ * y * z" + by (smt sup_left_isotone sup_ge1 mult_assoc mult_left_isotone mult_right_isotone n_isotone omega_isotone while_def) + also have "... \ n((w * (x \ y))\<^sup>\) * L \ w * (x \ y) * z" + by (metis star_below_while mult_assoc mult_left_isotone mult_right_isotone sup_right_isotone) + also have "... 
\ n((w * (x \ y))\<^sup>\) * L \ (w * (x \ y))\<^sup>\ * (w * (x \ y) * z)" + using sup.boundedI sup.cobounded1 while_def while_increasing by auto + finally show "w * (x \ y * z) \ (w * (x \ y)) \ (w * (x \ y) * z)" + using while_def by auto +qed + +subclass extended_binary_itering_apx + apply unfold_locales + by (metis n_below_n_omega n_left_upper_bound n_n_L order_trans while_def) + +lemma while_simulate_4_plus: + assumes "y * x \ x * (x \ (1 \ y))" + shows "y * x * x\<^sup>\ \ x * (x \ (1 \ y))" +proof - + have "x * (x \ (1 \ y)) = x * n(x\<^sup>\) * L \ x * x\<^sup>\ * (1 \ y)" + by (simp add: mult_left_dist_sup while_def mult_assoc) + also have "... = n(x * x\<^sup>\) * L \ x * x\<^sup>\ * (1 \ y)" + by (smt n_mult_omega_L_star_zero sup_relative_same_increasing sup_commute sup_bot_right mult_left_sub_dist_sup_right) + finally have 1: "x * (x \ (1 \ y)) = n(x\<^sup>\) * L \ x * x\<^sup>\ \ x * x\<^sup>\ * y" + using mult_left_dist_sup omega_unfold sup_assoc by force + hence "x * x\<^sup>\ * y * x \ x * x\<^sup>\ * n(x\<^sup>\) * L \ x * x\<^sup>\ * x\<^sup>\ * x \ x * x\<^sup>\ * x * x\<^sup>\ * y" + by (metis assms mult_assoc mult_right_isotone mult_left_dist_sup star_plus) + also have "... = n(x * x\<^sup>\ * x\<^sup>\) * L \ x * x\<^sup>\ * x\<^sup>\ * x \ x * x\<^sup>\ * x * x\<^sup>\ * y" + by (smt (z3) sup_commute n_mult_omega_L_star omega_unfold semiring.distrib_left star_plus mult_assoc) + also have "... = n(x\<^sup>\) * L \ x * x\<^sup>\ * x \ x * x * x\<^sup>\ * y" + using omega_unfold star.circ_plus_same star.circ_transitive_equal star_mult_omega mult_assoc by auto + also have "... \ n(x\<^sup>\) * L \ x * x\<^sup>\ \ x * x\<^sup>\ * y" + by (smt sup_assoc sup_ge2 le_iff_sup mult_assoc mult_right_dist_sup star.circ_increasing star.circ_plus_same star.circ_transitive_equal) + finally have 2: "x * x\<^sup>\ * y * x \ n(x\<^sup>\) * L \ x * x\<^sup>\ \ x * x\<^sup>\ * y" + . 
+ have "(n(x\<^sup>\) * L \ x * x\<^sup>\ \ x * x\<^sup>\ * y) * x \ n(x\<^sup>\) * L \ x * x\<^sup>\ * x \ x * x\<^sup>\ * y * x" + by (metis mult_right_dist_sup n_L_below_L mult_assoc mult_right_isotone sup_left_isotone) + also have "... \ n(x\<^sup>\) * L \ x * x\<^sup>\ \ x * x\<^sup>\ * y * x" + by (smt sup_commute sup_left_isotone mult_assoc mult_right_isotone star.left_plus_below_circ star_plus) + also have "... \ n(x\<^sup>\) * L \ x * x\<^sup>\ \ x * x\<^sup>\ * y" + using 2 by simp + finally show ?thesis + using 1 assms star_right_induct by force +qed + +lemma while_simulate_4_omega: + assumes "y * x \ x * (x \ (1 \ y))" + shows "y * x\<^sup>\ \ x\<^sup>\" +proof - + have "x * (x \ (1 \ y)) = x * n(x\<^sup>\) * L \ x * x\<^sup>\ * (1 \ y)" + using mult_left_dist_sup while_def mult_assoc by auto + also have "... = n(x * x\<^sup>\) * L \ x * x\<^sup>\ * (1 \ y)" + by (smt (z3) mult_1_right mult_left_sub_dist_sup_left n_mult_omega_L_star sup_commute sup_relative_same_increasing) + finally have "x * (x \ (1 \ y)) = n(x\<^sup>\) * L \ x * x\<^sup>\ \ x * x\<^sup>\ * y" + using mult_left_dist_sup omega_unfold sup_assoc by force + hence "y * x\<^sup>\ \ n(x\<^sup>\) * L * x\<^sup>\ \ x * x\<^sup>\ * x\<^sup>\ \ x * x\<^sup>\ * y * x\<^sup>\" + by (smt assms le_iff_sup mult_assoc mult_right_dist_sup omega_unfold) + also have "... 
\ x * x\<^sup>\ * (y * x\<^sup>\) \ x\<^sup>\" + by (metis sup_left_isotone mult_L_omega omega_sub_vector mult_assoc omega_unfold star_mult_omega n_L_decreasing le_iff_sup sup_commute) + finally have "y * x\<^sup>\ \ (x * x\<^sup>\)\<^sup>\ \ (x * x\<^sup>\)\<^sup>\ * x\<^sup>\" + by (simp add: omega_induct sup_monoid.add_commute) + thus ?thesis + by (metis sup_idem left_plus_omega star_mult_omega) +qed + +lemma while_square_1: + "x \ 1 = (x * x) \ (x \ 1)" + by (metis mult_1_right omega_square star_square_2 while_def) + +lemma while_absorb_below_one: + "y * x \ x \ y \ x \ 1 \ x" + by (metis star_left_induct_mult sup_mono n_galois n_sub_nL while_def while_one_top) + +lemma while_mult_L: + "(x * L) \ z = z \ x * L" + by (metis sup_bot_right mult_left_zero while_denest_5 while_one_top while_productstar while_sumstar) + +lemma tarski_top_omega_below_2: + "x * L \ (x * L) \ bot" + by (simp add: while_mult_L) + +lemma tarski_top_omega_2: + "x * L = (x * L) \ bot" + by (simp add: while_mult_L) + +(* +lemma while_sub_mult_one: "x * (1 \ y) \ 1 \ x" nitpick [expect=genuine,card=3] oops +lemma while_unfold_below: "x = z \ y * x \ x \ y \ z" nitpick [expect=genuine,card=2] oops +lemma while_loop_is_greatest_postfixpoint: "is_greatest_postfixpoint (\x . y * x \ z) (y \ z)" nitpick [expect=genuine,card=2] oops +lemma while_loop_is_greatest_fixpoint: "is_greatest_fixpoint (\x . 
y * x \ z) (y \ z)" nitpick [expect=genuine,card=2] oops +lemma while_denest_3: "(x \ w) \ x\<^sup>\ = (x \ w)\<^sup>\" nitpick [expect=genuine,card=2] oops +lemma while_mult_top: "(x * top) \ z = z \ x * top" nitpick [expect=genuine,card=2] oops +lemma tarski_below_top_omega: "x \ (x * L)\<^sup>\" nitpick [expect=genuine,card=2] oops +lemma tarski_mult_omega_omega: "(x * y\<^sup>\)\<^sup>\ = x * y\<^sup>\" nitpick [expect=genuine,card=3] oops +lemma tarski_below_top_omega_2: "x \ (x * L) \ bot" nitpick [expect=genuine,card=2] oops +lemma "1 = (x * bot) \ 1" nitpick [expect=genuine,card=3] oops +lemma tarski: "x = bot \ top * x * top = top" nitpick [expect=genuine,card=3] oops +lemma "(x \ y) \ z = ((x \ 1) * y) \ ((x \ 1) * z)" nitpick [expect=genuine,card=2] oops +lemma while_top_2: "top \ z = top * z" nitpick [expect=genuine,card=2] oops +lemma while_mult_top_2: "(x * top) \ z = z \ x * top * z" nitpick [expect=genuine,card=2] oops +lemma while_one_mult: "(x \ 1) * x = x \ x" nitpick [expect=genuine,card=4] oops +lemma "(x \ 1) * y = x \ y" nitpick [expect=genuine,card=2] oops +lemma while_associative: "(x \ y) * z = x \ (y * z)" nitpick [expect=genuine,card=2] oops +lemma while_back_loop_is_fixpoint: "is_fixpoint (\x . 
x * y \ z) (z * (y \ 1))" nitpick [expect=genuine,card=4] oops +lemma "1 \ x * bot = x \ 1" nitpick [expect=genuine,card=3] oops +lemma "x = x * (x \ 1)" nitpick [expect=genuine,card=3] oops +lemma "x * (x \ 1) = x \ 1" nitpick [expect=genuine,card=2] oops +lemma "x \ 1 = x \ (1 \ 1)" nitpick [expect=genuine,card=3] oops +lemma "(x \ y) \ 1 = (x \ (y \ 1)) \ 1" nitpick [expect=genuine,card=3] oops +lemma "z \ y * x = x \ y \ z \ x" nitpick [expect=genuine,card=2] oops +lemma "y * x = x \ y \ x \ x" nitpick [expect=genuine,card=2] oops +lemma "z \ x * y = x \ z * (y \ 1) \ x" nitpick [expect=genuine,card=3] oops +lemma "x * y = x \ x * (y \ 1) \ x" nitpick [expect=genuine,card=3] oops +lemma "x * z = z * y \ x \ z \ z * (y \ 1)" nitpick [expect=genuine,card=2] oops + +lemma while_unfold_below_1: "x = y * x \ x \ y \ 1" nitpick [expect=genuine,card=3] oops +lemma "x\<^sup>\ \ x\<^sup>\ * x\<^sup>\" oops +lemma tarski_omega_idempotent: "x\<^sup>\\<^sup>\ = x\<^sup>\" oops +*) + +end + +class n_omega_algebra_binary_strict = n_omega_algebra_binary + circ + + assumes L_left_zero: "L * x = L" + assumes circ_def: "x\<^sup>\ = n(x\<^sup>\) * L \ x\<^sup>\" +begin + +subclass strict_binary_itering + apply unfold_locales + apply (metis while_def mult_assoc L_left_zero mult_right_dist_sup) + by (metis circ_def while_def mult_1_right) + +end + +end + diff --git a/thys/Correctness_Algebras/N_Relation_Algebras.thy b/thys/Correctness_Algebras/N_Relation_Algebras.thy new file mode 100644 --- /dev/null +++ b/thys/Correctness_Algebras/N_Relation_Algebras.thy @@ -0,0 +1,182 @@ +(* Title: N-Relation-Algebras + Author: Walter Guttmann + Maintainer: Walter Guttmann +*) + +section \N-Relation-Algebras\ + +theory N_Relation_Algebras + +imports Stone_Relation_Algebras.Relation_Algebras N_Omega_Algebras + +begin + +context bounded_distrib_allegory +begin + +subclass lattice_ordered_pre_left_semiring .. 
+
+end
+
+text \<open>Theorem 37\<close>
+
+sublocale relation_algebra < n_algebra where sup = sup and bot = bot and top = top and inf = inf and n = N and L = top
+  apply unfold_locales
+  using N_comp_top comp_inf.semiring.distrib_left inf.sup_monoid.add_commute inf_vector_comp apply simp
+  apply (metis N_comp compl_sup double_compl mult_assoc mult_right_dist_sup top_mult_top N_comp_N)
+  apply (metis brouwer.p_antitone inf.sup_monoid.add_commute inf.sup_right_isotone mult_left_isotone p_antitone_sup)
+  apply simp
+  using N_vector_top apply force
+  apply simp
+  apply (simp add: brouwer.p_antitone_iff top_right_mult_increasing)
+  apply simp
+  apply (metis N_comp_top conv_complement_sub double_compl le_supI2 le_iff_sup mult_assoc mult_left_isotone schroeder_3)
+  by simp
+
+sublocale relation_algebra < n_algebra_apx where sup = sup and bot = bot and top = top and inf = inf and n = N and L = top and apx = greater_eq
+  apply unfold_locales
+  using n_less_eq_char by force
+
+no_notation
+  inverse_divide (infixl "'/" 70)
+
+notation
+  divide (infixl "'/" 70)
+
+class left_residuated_relation_algebra = relation_algebra + inverse +
+  assumes lres_def: "x / y = -(-x * y\<^sup>T)"
+begin
+
+text \<open>Theorem 32.1\<close>
+
+subclass residuated_pre_left_semiring
+  apply unfold_locales
+  by (metis compl_le_swap1 lres_def schroeder_4)
+
+end
+
+context left_residuated_relation_algebra
+begin
+
+text \<open>Theorem 32.3\<close>
+
+lemma lres_mult_lres_lres:
+  "x / (z * y) = (x / y) / z"
+  by (metis conv_dist_comp double_compl lres_def mult_assoc)
+
+text \<open>Theorem 32.5\<close>
+
+lemma lres_dist_inf:
+  "(x \<sqinter> y) / z = (x / z) \<sqinter> (y / z)"
+  by (metis compl_inf compl_sup lres_def mult_right_dist_sup)
+
+text \<open>Theorem 32.6\<close>
+
+lemma lres_add_export_vector:
+  assumes "vector x"
+  shows "(x \<squnion> y) / z = x \<squnion> (y / z)"
+proof -
+  have "(x \<squnion> y) / z = -((-x \<sqinter> -y) * z\<^sup>T)"
+    by (simp add: lres_def)
+  also have "... = -(-x \<sqinter> (-y * z\<^sup>T))"
+    using assms vector_complement_closed vector_inf_comp by auto
+  also have "... = x \<squnion> (y / z)"
+    by (simp add: lres_def)
+  finally show ?thesis
+    .
+qed
+
+text \<open>Theorem 32.7\<close>
+
+lemma lres_top_vector:
+  "vector (x / top)"
+  using equivalence_top_closed lres_def vector_complement_closed vector_mult_closed vector_top_closed by auto
+
+text \<open>Theorem 32.10\<close>
+
+lemma lres_top_export_inf_mult:
+  "((x / top) \<sqinter> y) * z = (x / top) \<sqinter> (y * z)"
+  by (simp add: vector_inf_comp lres_top_vector)
+
+lemma N_lres:
+  "N(x) = x / top \<sqinter> 1"
+  using lres_def by auto
+
+end
+
+class complete_relation_algebra = relation_algebra + complete_lattice
+begin
+
+definition mu :: "('a \<Rightarrow> 'a) \<Rightarrow> 'a" where "mu f \<equiv> Inf { y . f y \<le> y }"
+definition nu :: "('a \<Rightarrow> 'a) \<Rightarrow> 'a" where "nu f \<equiv> Sup { y . y \<le> f y }"
+
+lemma mu_lower_bound:
+  "f x \<le> x \<Longrightarrow> mu f \<le> x"
+  by (auto simp add: mu_def intro: Inf_lower)
+
+lemma mu_greatest_lower_bound:
+  "(\<And>y . f y \<le> y \<Longrightarrow> x \<le> y) \<Longrightarrow> x \<le> mu f"
+  using lfp_def lfp_greatest mu_def by auto
+
+lemma mu_unfold_1:
+  "isotone f \<Longrightarrow> f (mu f) \<le> mu f"
+  by (metis mu_greatest_lower_bound order_trans mu_lower_bound isotone_def)
+
+lemma mu_unfold_2:
+  "isotone f \<Longrightarrow> mu f \<le> f (mu f)"
+  by (simp add: mu_lower_bound mu_unfold_1 ord.isotone_def)
+
+lemma mu_unfold:
+  "isotone f \<Longrightarrow> mu f = f (mu f)"
+  by (simp add: order.antisym mu_unfold_1 mu_unfold_2)
+
+lemma mu_const:
+  "mu (\<lambda>x . y) = y"
+  by (simp add: isotone_def mu_unfold)
+
+lemma mu_lpfp:
+  "isotone f \<Longrightarrow> is_least_prefixpoint f (mu f)"
+  by (simp add: is_least_prefixpoint_def mu_lower_bound mu_unfold_1)
+
+lemma mu_lfp:
+  "isotone f \<Longrightarrow> is_least_fixpoint f (mu f)"
+  by (metis is_least_fixpoint_def mu_lower_bound mu_unfold order_refl)
+
+lemma mu_pmu:
+  "isotone f \<Longrightarrow> p\<mu> f = mu f"
+  using least_prefixpoint_same mu_lpfp by force
+
+lemma mu_mu:
+  "isotone f \<Longrightarrow> \<mu> f = mu f"
+  using least_fixpoint_same mu_lfp by fastforce
+
+end
+
+class omega_relation_algebra = relation_algebra + star + omega +
+  assumes ra_star_left_unfold : "1 \<squnion> y * y\<^sup>\<star> \<le> y\<^sup>\<star>"
+  assumes ra_star_left_induct : "z \<squnion> y * x \<le> x \<Longrightarrow> y\<^sup>\<star> * z \<le> x"
+  assumes ra_star_right_induct: "z \<squnion> x * y \<le> x \<Longrightarrow> z * y\<^sup>\<star> \<le> x"
+  assumes ra_omega_unfold: "y\<^sup>\<omega> = y * y\<^sup>\<omega>"
+  assumes ra_omega_induct: "x \<le> z \<squnion> y * x \<Longrightarrow> x \<le> y\<^sup>\<omega> \<squnion> y\<^sup>\<star> * z"
+begin
+
+subclass bounded_omega_algebra
+  apply unfold_locales
+  using ra_star_left_unfold apply blast
+  using ra_star_left_induct apply blast
+  using ra_star_right_induct apply blast
+  using ra_omega_unfold apply blast
+  using ra_omega_induct by blast
+
+end
+
+text \<open>Theorem 38\<close>
+
+sublocale omega_relation_algebra < n_omega_algebra where sup = sup and bot = bot and top = top and inf = inf and n = N and L = top and apx = greater_eq and Omega = "\<lambda>x . N(x\<^sup>\<omega>) * top \<squnion> x\<^sup>\<star>"
+  apply unfold_locales
+  apply simp
+  using n_split_omega_mult omega_vector star_mult_omega apply force
+  by simp
+
+end
+
diff --git a/thys/Correctness_Algebras/N_Semirings.thy b/thys/Correctness_Algebras/N_Semirings.thy
new file mode 100644
--- /dev/null
+++ b/thys/Correctness_Algebras/N_Semirings.thy
@@ -0,0 +1,776 @@
+(* Title:      N-Semirings
+   Author:     Walter Guttmann
+   Maintainer: Walter Guttmann
+*)
+
+section \<open>N-Semirings\<close>
+
+theory N_Semirings
+
+imports Test_Iterings Omega_Algebras
+
+begin
+
+class n_semiring = bounded_idempotent_left_zero_semiring + n + L +
+  assumes n_bot         : "n(bot) = bot"
+  assumes n_top         : "n(top) = 1"
+  assumes n_dist_sup    : "n(x \<squnion> y) = n(x) \<squnion> n(y)"
+  assumes n_export      : "n(n(x) * y) = n(x) * n(y)"
+  assumes n_sub_mult_bot: "n(x) = n(x * bot) * n(x)"
+  assumes n_L_split     : "x * n(y) * L = x * bot \<squnion> n(x * y) * L"
+  assumes n_split       : "x \<le> x * bot \<squnion> n(x * L) * top"
+begin
+
+lemma n_sub_one:
+  "n(x) \<le> 1"
+  by (metis sup_left_top sup_ge2 n_dist_sup n_top)
+
+text \<open>Theorem 15\<close>
+
+lemma n_isotone:
+  "x \<le> y \<Longrightarrow> n(x) \<le> n(y)"
+  by (metis le_iff_sup n_dist_sup)
+
+lemma n_mult_idempotent:
+  "n(x) * n(x) = n(x)"
+  by (metis mult_assoc mult_1_right n_export n_sub_mult_bot n_top)
+
+text \<open>Theorem 15.3\<close>
+
+lemma n_mult_bot:
+  "n(x) = n(x * bot)"
+  by (metis sup_commute sup_left_top sup_bot_right mult_left_dist_sup mult_1_right n_dist_sup n_sub_mult_bot n_top)
+
+lemma n_mult_left_upper_bound:
+  "n(x) \<le> n(x * y)"
+  by (metis mult_right_isotone n_isotone n_mult_bot bot_least)
+
+lemma n_mult_right_bot:
+  "n(x) * bot = bot"
+  by (metis sup_left_top sup_bot_left mult_left_one mult_1_right n_export n_dist_sup n_sub_mult_bot n_top n_bot)
+
+text \<open>Theorem 15.9\<close>
+
+lemma n_mult_n:
+  "n(x * n(y)) = n(x)"
+  by (metis mult_assoc n_mult_right_bot n_mult_bot)
+
+lemma n_mult_left_absorb_sup:
+  "n(x) * (n(x) \<squnion> n(y)) = n(x)"
+  by (metis sup_left_top mult_left_dist_sup mult_1_right n_dist_sup n_mult_idempotent n_top)
+
+lemma
n_mult_right_absorb_sup:
+  "(n(x) \<squnion> n(y)) * n(y) = n(y)"
+  by (metis sup_commute sup_left_top mult_left_one mult_right_dist_sup n_dist_sup n_mult_idempotent n_top)
+
+lemma n_sup_left_absorb_mult:
+  "n(x) \<squnion> n(x) * n(y) = n(x)"
+  using mult_left_dist_sup n_mult_idempotent n_mult_left_absorb_sup by auto
+
+lemma n_sup_right_absorb_mult:
+  "n(x) * n(y) \<squnion> n(y) = n(y)"
+  using mult_right_dist_sup n_mult_idempotent n_mult_right_absorb_sup by auto
+
+lemma n_mult_commutative:
+  "n(x) * n(y) = n(y) * n(x)"
+  by (smt sup_commute mult_left_dist_sup mult_right_dist_sup n_sup_left_absorb_mult n_sup_right_absorb_mult n_export n_mult_idempotent)
+
+lemma n_sup_left_dist_mult:
+  "n(x) \<squnion> n(y) * n(z) = (n(x) \<squnion> n(y)) * (n(x) \<squnion> n(z))"
+  by (metis sup_assoc mult_left_dist_sup mult_right_dist_sup n_sup_right_absorb_mult n_mult_commutative n_mult_left_absorb_sup)
+
+lemma n_sup_right_dist_mult:
+  "n(x) * n(y) \<squnion> n(z) = (n(x) \<squnion> n(z)) * (n(y) \<squnion> n(z))"
+  by (simp add: sup_commute n_sup_left_dist_mult)
+
+lemma n_order:
+  "n(x) \<le> n(y) \<longleftrightarrow> n(x) * n(y) = n(x)"
+  by (metis le_iff_sup n_sup_right_absorb_mult n_mult_left_absorb_sup)
+
+lemma n_mult_left_lower_bound:
+  "n(x) * n(y) \<le> n(x)"
+  by (simp add: sup.orderI n_sup_left_absorb_mult)
+
+lemma n_mult_right_lower_bound:
+  "n(x) * n(y) \<le> n(y)"
+  by (simp add: le_iff_sup n_sup_right_absorb_mult)
+
+lemma n_mult_least_upper_bound:
+  "n(x) \<le> n(y) \<and> n(x) \<le> n(z) \<longleftrightarrow> n(x) \<le> n(y) * n(z)"
+  by (metis order.trans mult_left_isotone n_mult_commutative n_mult_right_lower_bound n_order)
+
+lemma n_mult_left_divisibility:
+  "n(x) \<le> n(y) \<longleftrightarrow> (\<exists>z . n(x) = n(y) * n(z))"
+  by (metis n_mult_commutative n_mult_left_lower_bound n_order)
+
+lemma n_mult_right_divisibility:
+  "n(x) \<le> n(y) \<longleftrightarrow> (\<exists>z . n(x) = n(z) * n(y))"
+  by (simp add: n_mult_commutative n_mult_left_divisibility)
+
+text \<open>Theorem 15.1\<close>
+
+lemma n_one:
+  "n(1) = bot"
+  by (metis mult_left_one n_mult_bot n_bot)
+
+lemma n_split_equal:
+  "x \<squnion> n(x * L) * top = x * bot \<squnion> n(x * L) * top"
+  using n_split order_trans sup.cobounded1 sup_same_context zero_right_mult_decreasing by blast
+
+lemma n_split_top:
+  "x * top \<le> x * bot \<squnion> n(x * L) * top"
+  by (metis mult_left_isotone n_split vector_bot_closed vector_mult_closed vector_sup_closed vector_top_closed)
+
+text \<open>Theorem 15.2\<close>
+
+lemma n_L:
+  "n(L) = 1"
+  by (metis sup_bot_left order.antisym mult_left_one n_export n_isotone n_mult_commutative n_split_top n_sub_one n_top)
+
+text \<open>Theorem 15.5\<close>
+
+lemma n_split_L:
+  "x * L = x * bot \<squnion> n(x * L) * L"
+  by (metis mult_1_right n_L n_L_split)
+
+lemma n_n_L:
+  "n(n(x) * L) = n(x)"
+  by (simp add: n_export n_L)
+
+lemma n_L_decreasing:
+  "n(x) * L \<le> x"
+  by (metis mult_left_zero n_L_split order_trans sup.orderI zero_right_mult_decreasing mult_assoc n_mult_bot)
+
+text \<open>Theorem 15.10\<close>
+
+lemma n_galois:
+  "n(x) \<le> n(y) \<longleftrightarrow> n(x) * L \<le> y"
+  by (metis order.trans mult_left_isotone n_L_decreasing n_isotone n_n_L)
+
+text \<open>Theorem 15.6\<close>
+
+lemma split_L:
+  "x * L \<le> x * bot \<squnion> L"
+  by (metis sup_commute sup_left_isotone n_galois n_L n_split_L n_sub_one)
+
+text \<open>Theorem 15.7\<close>
+
+lemma L_left_zero:
+  "L * x = L"
+  by (metis order.antisym mult.left_neutral mult_left_zero zero_right_mult_decreasing n_L n_L_decreasing n_mult_bot mult.assoc)
+
+text \<open>Theorem 15.8\<close>
+
+lemma n_mult:
+  "n(x * n(y) * L) = n(x * y)"
+  using n_L_split n_dist_sup sup.absorb2 n_mult_left_upper_bound n_mult_bot n_n_L by auto
+
+lemma n_mult_top:
+  "n(x * n(y) * top) = n(x * y)"
+  by (metis mult_1_right n_mult n_top)
+
+text \<open>Theorem 15.4\<close>
+
+lemma n_top_L:
+  "n(x * top) = n(x * L)"
+  by (metis mult_1_right n_L n_mult_top)
+
+lemma n_top_split:
+  "x * n(y) * top \<le> x * bot \<squnion> n(x * y) * top"
+  by (metis mult_assoc n_mult n_mult_right_bot n_split_top)
+
+lemma n_mult_right_upper_bound:
+  "n(x * y) \<le> n(z) \<longleftrightarrow> n(x) \<le> n(z) \<and> x * n(y) * L \<le> x * bot \<squnion> n(z) * L"
+  apply (rule iffI)
+  apply (metis sup_right_isotone order.eq_iff mult_isotone n_L_split n_mult_left_upper_bound order_trans)
+  by (smt (verit, ccfv_threshold) n_dist_sup n_export sup.absorb_iff2 n_mult n_mult_commutative n_mult_bot n_n_L)
+
+lemma n_preserves_equation:
+  "n(y) * x \<le> x * n(y) \<longleftrightarrow> n(y) * x = n(y) * x * n(y)"
+  using eq_refl test_preserves_equation n_mult_idempotent n_sub_one by auto
+
+definition ni :: "'a \<Rightarrow> 'a"
+  where "ni x = n(x) * L"
+
+lemma ni_bot:
+  "ni(bot) = bot"
+  by (simp add: n_bot ni_def)
+
+lemma ni_one:
+  "ni(1) = bot"
+  by (simp add: n_one ni_def)
+
+lemma ni_L:
+  "ni(L) = L"
+  by (simp add: n_L ni_def)
+
+lemma ni_top:
+  "ni(top) = L"
+  by (simp add: n_top ni_def)
+
+lemma ni_dist_sup:
+  "ni(x \<squnion> y) = ni(x) \<squnion> ni(y)"
+  by (simp add: mult_right_dist_sup n_dist_sup ni_def)
+
+lemma ni_mult_bot:
+  "ni(x) = ni(x * bot)"
+  using n_mult_bot ni_def by auto
+
+lemma ni_split:
+  "x * ni(y) = x * bot \<squnion> ni(x * y)"
+  using n_L_split mult_assoc ni_def by auto
+
+lemma ni_decreasing:
+  "ni(x) \<le> x"
+  by (simp add: n_L_decreasing ni_def)
+
+lemma ni_isotone:
+  "x \<le> y \<Longrightarrow> ni(x) \<le> ni(y)"
+  using mult_left_isotone n_isotone ni_def by auto
+
+lemma ni_mult_left_upper_bound:
+  "ni(x) \<le> ni(x * y)"
+  using mult_left_isotone n_mult_left_upper_bound ni_def by force
+
+lemma ni_idempotent:
+  "ni(ni(x)) = ni(x)"
+  by (simp add: n_n_L ni_def)
+
+lemma ni_below_L:
+  "ni(x) \<le> L"
+  using n_L n_galois n_sub_one ni_def by auto
+
+lemma ni_left_zero:
+  "ni(x) * y = ni(x)"
+  by (simp add: L_left_zero mult_assoc ni_def)
+
+lemma ni_split_L:
+  "x * L = x * bot \<squnion> ni(x * L)"
+  using n_split_L ni_def by auto
+
+lemma ni_top_L:
+  "ni(x * top) = ni(x * L)"
+  by (simp add: n_top_L ni_def)
+
+lemma ni_galois:
+  "ni(x) \<le> ni(y) \<longleftrightarrow> ni(x) \<le> y"
+  by (metis n_galois n_n_L ni_def)
+
+lemma ni_mult:
+  "ni(x * ni(y)) = ni(x * y)"
+  using mult_assoc n_mult ni_def by auto
+
+lemma ni_n_order:
+  "ni(x) \<le> ni(y) \<longleftrightarrow> n(x) \<le> n(y)"
+  using n_galois ni_def ni_galois by auto
+
+lemma ni_n_equal:
+  "ni(x) = ni(y) \<longleftrightarrow> n(x) = n(y)"
+  by (metis n_n_L ni_def)
+
+lemma ni_mult_right_upper_bound:
+  "ni(x * y) \<le> ni(z) \<longleftrightarrow> ni(x) \<le> ni(z) \<and> x * ni(y) \<le> x * bot \<squnion> ni(z)"
+  using mult_assoc n_mult_right_upper_bound ni_def ni_n_order by auto
+
+lemma n_ni:
+  "n(ni(x)) = n(x)"
+  by (simp add: n_n_L ni_def)
+
+lemma ni_n:
+  "ni(n(x)) = bot"
+  by (metis n_mult_right_bot ni_mult_bot ni_bot)
+
+lemma ni_n_galois:
+  "n(x) \<le> n(y) \<longleftrightarrow> ni(x) \<le> y"
+  by (simp add: n_galois ni_def)
+
+lemma n_mult_ni:
+  "n(x * ni(y)) = n(x * y)"
+  using ni_mult ni_n_equal by auto
+
+lemma ni_mult_n:
+  "ni(x * n(y)) = ni(x)"
+  by (simp add: n_mult_n ni_def)
+
+lemma ni_export:
+  "ni(n(x) * y) = n(x) * ni(y)"
+  by (simp add: n_mult_right_bot ni_split)
+
+lemma ni_mult_top:
+  "ni(x * n(y) * top) = ni(x * y)"
+  by (simp add: n_mult_top ni_def)
+
+lemma ni_n_bot:
+  "ni(x) = bot \<longleftrightarrow> n(x) = bot"
+  using n_bot ni_n_equal ni_bot by force
+
+lemma ni_n_L:
+  "ni(x) = L \<longleftrightarrow> n(x) = 1"
+  using n_L ni_L ni_n_equal by force
+
+(* independence of axioms, checked in n_semiring without the respective axiom:
+lemma n_bot         : "n(bot) = bot" nitpick [expect=genuine,card=2] oops
+lemma n_top         : "n(top) = 1" nitpick [expect=genuine,card=3] oops
+lemma n_dist_sup    : "n(x \<squnion> y) = n(x) \<squnion> n(y)" nitpick [expect=genuine,card=5] oops
+lemma n_export      : "n(n(x) * y) = n(x) * n(y)" nitpick [expect=genuine,card=6] oops
+lemma n_sub_mult_bot: "n(x) = n(x * bot) * n(x)" nitpick [expect=genuine,card=2] oops
+lemma n_L_split     : "x * n(y) * L = x * bot \<squnion> n(x * y) * L" nitpick [expect=genuine,card=4] oops
+lemma n_split       : "x \<le> x * bot \<squnion> n(x * L) * top" nitpick [expect=genuine,card=3] oops
+*)
+
+end
+
+typedef (overloaded) 'a nImage = "{ x::'a::n_semiring . (\<exists>y::'a . x = n(y)) }"
+  by auto
+
+lemma simp_nImage[simp]:
+  "\<exists>y .
Rep_nImage x = n(y)"
+  using Rep_nImage by simp
+
+setup_lifting type_definition_nImage
+
+text \<open>Theorem 15\<close>
+
+instantiation nImage :: (n_semiring) bounded_idempotent_semiring
+begin
+
+lift_definition sup_nImage :: "'a nImage \<Rightarrow> 'a nImage \<Rightarrow> 'a nImage" is sup
+  by (metis n_dist_sup)
+
+lift_definition times_nImage :: "'a nImage \<Rightarrow> 'a nImage \<Rightarrow> 'a nImage" is times
+  by (metis n_export)
+
+lift_definition bot_nImage :: "'a nImage" is bot
+  by (metis n_bot)
+
+lift_definition one_nImage :: "'a nImage" is 1
+  using n_L by auto
+
+lift_definition top_nImage :: "'a nImage" is 1
+  using n_L by auto
+
+lift_definition less_eq_nImage :: "'a nImage \<Rightarrow> 'a nImage \<Rightarrow> bool" is less_eq .
+
+lift_definition less_nImage :: "'a nImage \<Rightarrow> 'a nImage \<Rightarrow> bool" is less .
+
+instance
+  apply intro_classes
+  apply (simp add: less_eq_nImage.rep_eq less_le_not_le less_nImage.rep_eq)
+  apply (simp add: less_eq_nImage.rep_eq)
+  using less_eq_nImage.rep_eq apply force
+  apply (simp add: less_eq_nImage.rep_eq Rep_nImage_inject)
+  apply (simp add: sup_nImage.rep_eq less_eq_nImage.rep_eq)
+  apply (simp add: less_eq_nImage.rep_eq sup_nImage.rep_eq)
+  apply (simp add: sup_nImage.rep_eq less_eq_nImage.rep_eq)
+  apply (simp add: bot_nImage.rep_eq less_eq_nImage.rep_eq)
+  apply (simp add: sup_nImage.rep_eq times_nImage.rep_eq less_eq_nImage.rep_eq mult_left_dist_sup)
+  apply (metis (mono_tags, lifting) sup_nImage.rep_eq times_nImage.rep_eq Rep_nImage_inverse mult_right_dist_sup)
+  apply (smt (z3) times_nImage.rep_eq Rep_nImage_inverse bot_nImage.rep_eq mult_left_zero)
+  using Rep_nImage_inject one_nImage.rep_eq times_nImage.rep_eq apply fastforce
+  apply (simp add: one_nImage.rep_eq times_nImage.rep_eq less_eq_nImage.rep_eq)
+  apply (smt (verit, del_insts) sup_nImage.rep_eq Rep_nImage Rep_nImage_inject mem_Collect_eq n_sub_one sup.absorb2 top_nImage.rep_eq)
+  apply (simp add: less_eq_nImage.rep_eq mult.assoc times_nImage.rep_eq)
+  using Rep_nImage_inject mult.assoc times_nImage.rep_eq apply fastforce
+  using Rep_nImage_inject one_nImage.rep_eq times_nImage.rep_eq apply fastforce
+  apply (metis (mono_tags, lifting) sup_nImage.rep_eq times_nImage.rep_eq Rep_nImage_inject mult_left_dist_sup)
+  by (smt (z3) Rep_nImage_inject bot_nImage.rep_eq n_mult_right_bot simp_nImage times_nImage.rep_eq)
+
+end
+
+text \<open>Theorem 15\<close>
+
+instantiation nImage :: (n_semiring) bounded_distrib_lattice
+begin
+
+lift_definition inf_nImage :: "'a nImage \<Rightarrow> 'a nImage \<Rightarrow> 'a nImage" is times
+  by (metis n_export)
+
+instance
+  apply intro_classes
+  apply (metis (mono_tags) inf_nImage.rep_eq less_eq_nImage.rep_eq n_mult_left_lower_bound simp_nImage)
+  apply (metis (mono_tags) inf_nImage.rep_eq less_eq_nImage.rep_eq n_mult_right_lower_bound simp_nImage)
+  apply (smt (z3) inf_nImage_def le_iff_sup less_eq_nImage.rep_eq mult_right_dist_sup n_mult_left_absorb_sup simp_nImage times_nImage.rep_eq times_nImage_def)
+  apply simp
+  by (smt (z3) Rep_nImage_inject inf_nImage.rep_eq n_sup_right_dist_mult simp_nImage sup.commute sup_nImage.rep_eq)
+
+end
+
+class n_itering = bounded_itering + n_semiring
+begin
+
+lemma mult_L_circ:
+  "(x * L)\<^sup>\<circ> = 1 \<squnion> x * L"
+  by (metis L_left_zero circ_mult mult_assoc)
+
+lemma mult_L_circ_mult:
+  "(x * L)\<^sup>\<circ> * y = y \<squnion> x * L"
+  by (metis L_left_zero mult_L_circ mult_assoc mult_left_one mult_right_dist_sup)
+
+lemma circ_L:
+  "L\<^sup>\<circ> = L \<squnion> 1"
+  by (metis L_left_zero sup_commute circ_left_unfold)
+
+lemma circ_n_L:
+  "x\<^sup>\<circ> * n(x) * L = x\<^sup>\<circ> * bot"
+  by (metis sup_bot_left circ_left_unfold circ_plus_same mult_left_zero n_L_split n_dist_sup n_mult_bot n_one ni_def ni_split)
+
+lemma n_circ_left_unfold:
+  "n(x\<^sup>\<circ>) = n(x * x\<^sup>\<circ>)"
+  by (metis circ_n_L circ_plus_same n_mult n_mult_bot)
+
+lemma ni_circ:
+  "ni(x)\<^sup>\<circ> = 1 \<squnion> ni(x)"
+  by (simp add: mult_L_circ ni_def)
+
+lemma circ_ni:
+  "x\<^sup>\<circ> * ni(x) = x\<^sup>\<circ> * bot"
+  using circ_n_L ni_def mult_assoc by auto
+
+lemma ni_circ_left_unfold:
+  "ni(x\<^sup>\<circ>) = ni(x * x\<^sup>\<circ>)"
+  by (simp add:
ni_def n_circ_left_unfold) + +lemma n_circ_import: + "n(y) * x \ x * n(y) \ n(y) * x\<^sup>\ = n(y) * (n(y) * x)\<^sup>\" + by (simp add: circ_import n_mult_idempotent n_sub_one) + +end + +class n_omega_itering = left_omega_conway_semiring + n_itering + + assumes circ_circ: "x\<^sup>\\<^sup>\ = L \ x\<^sup>\" +begin + +lemma L_below_one_circ: + "L \ 1\<^sup>\" + by (metis sup_left_divisibility circ_circ circ_one) + +lemma circ_below_L_sup_star: + "x\<^sup>\ \ L \ x\<^sup>\" + by (metis circ_circ circ_increasing) + +lemma L_sup_circ_sup_star: + "L \ x\<^sup>\ = L \ x\<^sup>\" + by (metis circ_circ circ_star star_circ) + +lemma circ_one_L: + "1\<^sup>\ = L \ 1" + using circ_circ circ_one star_one by auto + +lemma one_circ_zero: + "L = 1\<^sup>\ * bot" + by (metis L_left_zero circ_L circ_ni circ_one_L circ_plus_same ni_L) + +lemma circ_not_simulate: + "(\x y z . x * z \ z * y \ x\<^sup>\ * z \ z * y\<^sup>\) \ 1 = bot" + by (metis L_left_zero circ_one_L order.eq_iff mult_left_one mult_left_zero mult_right_sub_dist_sup_left n_L n_bot bot_least) + +lemma star_circ_L: + "x\<^sup>\\<^sup>\ = L \ x\<^sup>\" + by (simp add: circ_circ star_circ) + +lemma circ_circ_2: + "x\<^sup>\\<^sup>\ = L \ x\<^sup>\" + by (simp add: L_sup_circ_sup_star circ_circ) + +lemma circ_sup_6: + "L \ (x \ y)\<^sup>\ = (x\<^sup>\ * y\<^sup>\)\<^sup>\" + by (metis circ_circ_2 sup_assoc sup_commute circ_sup_1 circ_circ_sup circ_decompose_4) + +lemma circ_sup_7: + "(x\<^sup>\ * y\<^sup>\)\<^sup>\ = L \ (x \ y)\<^sup>\" + using L_sup_circ_sup_star circ_sup_6 by auto + +end + +class n_omega_algebra_2 = bounded_left_zero_omega_algebra + n_semiring + Omega + + assumes Omega_def: "x\<^sup>\ = n(x\<^sup>\) * L \ x\<^sup>\" +begin + +lemma mult_L_star: + "(x * L)\<^sup>\ = 1 \ x * L" + by (simp add: L_left_zero transitive_star mult_assoc) + +lemma mult_L_omega: + "(x * L)\<^sup>\ = x * L" + by (metis L_left_zero omega_slide) + +lemma mult_L_sup_star: + "(x * L \ y)\<^sup>\ = y\<^sup>\ \ y\<^sup>\ * x * L" + 
by (metis L_left_zero star.mult_zero_sup_circ_2 sup_commute mult_assoc) + +lemma mult_L_sup_omega: + "(x * L \ y)\<^sup>\ = y\<^sup>\ \ y\<^sup>\ * x * L" + by (metis L_left_zero mult_bot_add_omega sup_commute mult_assoc) + +lemma mult_L_sup_circ: + "(x * L \ y)\<^sup>\ = n(y\<^sup>\) * L \ y\<^sup>\ \ y\<^sup>\ * x * L" + by (smt sup_assoc sup_commute Omega_def le_iff_sup mult_L_sup_omega mult_L_sup_star mult_right_dist_sup n_L_decreasing n_dist_sup) + +lemma circ_sup_n: + "(x\<^sup>\ * y)\<^sup>\ * x\<^sup>\ = n((x\<^sup>\ * y)\<^sup>\) * L \ ((x\<^sup>\ * y)\<^sup>\ * x\<^sup>\ \ (x\<^sup>\ * y)\<^sup>\ * n(x\<^sup>\) * L)" + by (smt L_left_zero sup_assoc sup_commute Omega_def mult_L_sup_circ mult_assoc mult_left_dist_sup mult_right_dist_sup) + +text \Theorem 20.6\ + +lemma n_omega_induct: + "n(y) \ n(x * y \ z) \ n(y) \ n(x\<^sup>\ \ x\<^sup>\ * z)" + by (smt sup_commute mult_assoc n_dist_sup n_galois n_mult omega_induct) + +lemma n_Omega_left_unfold: + "1 \ x * x\<^sup>\ = x\<^sup>\" +proof - + have "1 \ x * x\<^sup>\ = 1 \ x * n(x\<^sup>\) * L \ x * x\<^sup>\" + by (simp add: Omega_def semiring.distrib_left sup_assoc mult_assoc) + also have "... = n(x * x\<^sup>\) * L \ (1 \ x * x\<^sup>\)" + by (metis sup_assoc sup_commute sup_bot_left mult_left_dist_sup n_L_split) + also have "... = n(x\<^sup>\) * L \ x\<^sup>\" + using omega_unfold star_left_unfold_equal by auto + also have "... = x\<^sup>\" + by (simp add: Omega_def) + finally show ?thesis + . +qed + +lemma n_Omega_circ_sup: + "(x \ y)\<^sup>\ = (x\<^sup>\ * y)\<^sup>\ * x\<^sup>\" +proof - + have "(x\<^sup>\ * y)\<^sup>\ * x\<^sup>\ = n((x\<^sup>\ * y)\<^sup>\) * L \ ((x\<^sup>\ * y)\<^sup>\ * x\<^sup>\ \ (x\<^sup>\ * y)\<^sup>\ * n(x\<^sup>\) * L)" + by (simp add: circ_sup_n) + also have "... 
= n((x\<^sup>\ * y)\<^sup>\) * L \ n((x\<^sup>\ * y)\<^sup>\ * x\<^sup>\) * L \ (x\<^sup>\ * y)\<^sup>\ * bot \ (x\<^sup>\ * y)\<^sup>\ * x\<^sup>\" + using n_L_split sup.left_commute sup_commute by auto + also have "... = n((x\<^sup>\ * y)\<^sup>\ \ (x\<^sup>\ * y)\<^sup>\ * x\<^sup>\) * L \ (x\<^sup>\ * y)\<^sup>\ * x\<^sup>\" + by (smt sup_assoc sup_bot_left mult_left_dist_sup mult_right_dist_sup n_dist_sup) + also have "... = (x \ y)\<^sup>\" + by (simp add: Omega_def omega_decompose star.circ_sup_9) + finally show ?thesis + .. +qed + +lemma n_Omega_circ_simulate_right_sup: + assumes "z * x \ y * y\<^sup>\ * z \ w" + shows "z * x\<^sup>\ \ y\<^sup>\ * (z \ w * x\<^sup>\)" +proof - + have "z * x \ y * y\<^sup>\ * z \ w" + by (simp add: assms) + also have "... = y * n(y\<^sup>\) * L \ y * y\<^sup>\ * z \ w" + using L_left_zero Omega_def mult_right_dist_sup semiring.distrib_left mult_assoc by auto + finally have 1: "z * x \ n(y\<^sup>\) * L \ y * y\<^sup>\ * z \ w" + by (metis sup_assoc sup_commute sup_bot_left mult_assoc mult_left_dist_sup n_L_split omega_unfold) + hence "(n(y\<^sup>\) * L \ y\<^sup>\ * z \ y\<^sup>\ * w * n(x\<^sup>\) * L \ y\<^sup>\ * w * x\<^sup>\) * x \ n(y\<^sup>\) * L \ y\<^sup>\ * (n(y\<^sup>\) * L \ y * y\<^sup>\ * z \ w) \ y\<^sup>\ * w * n(x\<^sup>\) * L \ y\<^sup>\ * w * x\<^sup>\" + by (smt L_left_zero sup_assoc sup_ge1 sup_ge2 le_iff_sup mult_assoc mult_left_dist_sup mult_right_dist_sup star.circ_back_loop_fixpoint) + also have "... = n(y\<^sup>\) * L \ y\<^sup>\ * n(y\<^sup>\) * L \ y\<^sup>\ * y * y\<^sup>\ * z \ y\<^sup>\ * w \ y\<^sup>\ * w * n(x\<^sup>\) * L \ y\<^sup>\ * w * x\<^sup>\" + using semiring.distrib_left sup_assoc mult_assoc by auto + also have "... 
= n(y\<^sup>\) * L \ y\<^sup>\ * n(y\<^sup>\) * L \ y\<^sup>\ * y * y\<^sup>\ * z \ y\<^sup>\ * w * n(x\<^sup>\) * L \ y\<^sup>\ * w * x\<^sup>\" + by (smt (verit, ccfv_SIG) le_supI1 order.refl semiring.add_mono star.circ_back_loop_prefixpoint sup.bounded_iff sup.coboundedI1 sup.mono sup_left_divisibility sup_right_divisibility sup_same_context) + also have "... = n(y\<^sup>\) * L \ y\<^sup>\ * y * y\<^sup>\ * z \ y\<^sup>\ * w * n(x\<^sup>\) * L \ y\<^sup>\ * w * x\<^sup>\" + by (smt sup_assoc sup_commute sup_idem mult_assoc mult_left_dist_sup n_L_split star_mult_omega) + also have "... \ n(y\<^sup>\) * L \ y\<^sup>\ * z \ y\<^sup>\ * w * n(x\<^sup>\) * L \ y\<^sup>\ * w * x\<^sup>\" + by (meson mult_left_isotone order_refl semiring.add_left_mono star.circ_mult_upper_bound star.right_plus_below_circ sup_left_isotone) + finally have 2: "z * x\<^sup>\ \ n(y\<^sup>\) * L \ y\<^sup>\ * z \ y\<^sup>\ * w * n(x\<^sup>\) * L \ y\<^sup>\ * w * x\<^sup>\" + by (smt le_supI1 le_sup_iff sup_ge1 star.circ_loop_fixpoint star_right_induct) + have "z * x * x\<^sup>\ \ n(y\<^sup>\) * L \ y * y\<^sup>\ * z * x\<^sup>\ \ w * x\<^sup>\" + using 1 by (smt (verit, del_insts) L_left_zero mult_assoc mult_left_isotone mult_right_dist_sup) + hence "n(z * x * x\<^sup>\) \ n(y * y\<^sup>\ * z * x\<^sup>\ \ n(y\<^sup>\) * L \ w * x\<^sup>\)" + by (simp add: n_isotone sup_commute) + hence "n(z * x\<^sup>\) \ n(y\<^sup>\ \ y\<^sup>\ * w * x\<^sup>\)" + by (smt (verit, del_insts) sup_assoc sup_commute left_plus_omega le_iff_sup mult_assoc mult_left_dist_sup n_L_decreasing n_omega_induct omega_unfold star.left_plus_circ star_mult_omega) + hence "n(z * x\<^sup>\) * L \ n(y\<^sup>\) * L \ y\<^sup>\ * w * n(x\<^sup>\) * L" + by (metis n_dist_sup n_galois n_mult n_n_L) + hence "z * n(x\<^sup>\) * L \ z * bot \ n(y\<^sup>\) * L \ y\<^sup>\ * w * n(x\<^sup>\) * L" + using n_L_split semiring.add_left_mono sup_assoc by auto + also have "... 
\ n(y\<^sup>\) * L \ y\<^sup>\ * z \ y\<^sup>\ * w * n(x\<^sup>\) * L" + by (smt (z3) order.trans mult_1_left mult_right_sub_dist_sup_left semiring.add_right_mono star_left_unfold_equal sup_commute zero_right_mult_decreasing) + finally have "z * n(x\<^sup>\) * L \ n(y\<^sup>\) * L \ y\<^sup>\ * z \ y\<^sup>\ * w * n(x\<^sup>\) * L \ y\<^sup>\ * w * x\<^sup>\" + using le_supI1 by blast + thus ?thesis + using 2 by (smt L_left_zero Omega_def sup_assoc le_iff_sup mult_assoc mult_left_dist_sup mult_right_dist_sup) +qed + +lemma n_Omega_circ_simulate_left_sup: + assumes "x * z \ z * y\<^sup>\ \ w" + shows "x\<^sup>\ * z \ (z \ x\<^sup>\ * w) * y\<^sup>\" +proof - + have "x * (z * n(y\<^sup>\) * L \ z * y\<^sup>\ \ n(x\<^sup>\) * L \ x\<^sup>\ * w * n(y\<^sup>\) * L \ x\<^sup>\ * w * y\<^sup>\) = x * z * n(y\<^sup>\) * L \ x * z * y\<^sup>\ \ n(x\<^sup>\) * L \ x * x\<^sup>\ * w * n(y\<^sup>\) * L \ x * x\<^sup>\ * w * y\<^sup>\" + by (smt sup_assoc sup_commute mult_assoc mult_left_dist_sup n_L_split omega_unfold) + also have "... \ (z * n(y\<^sup>\) * L \ z * y\<^sup>\ \ w) * n(y\<^sup>\) * L \ (z * n(y\<^sup>\) * L \ z * y\<^sup>\ \ w) * y\<^sup>\ \ n(x\<^sup>\) * L \ x\<^sup>\ * w * n(y\<^sup>\) * L \ x\<^sup>\ * w * y\<^sup>\" + by (smt assms Omega_def sup_assoc sup_ge2 le_iff_sup mult_assoc mult_left_dist_sup mult_right_dist_sup star.circ_loop_fixpoint) + also have "... = z * n(y\<^sup>\) * L \ z * y\<^sup>\ * n(y\<^sup>\) * L \ w * n(y\<^sup>\) * L \ z * y\<^sup>\ \ w * y\<^sup>\ \ n(x\<^sup>\) * L \ x\<^sup>\ * w * n(y\<^sup>\) * L \ x\<^sup>\ * w * y\<^sup>\" + by (smt L_left_zero sup_assoc sup_commute sup_idem mult_assoc mult_right_dist_sup star.circ_transitive_equal) + also have "... 
= z * n(y\<^sup>\) * L \ w * n(y\<^sup>\) * L \ z * y\<^sup>\ \ w * y\<^sup>\ \ n(x\<^sup>\) * L \ x\<^sup>\ * w * n(y\<^sup>\) * L \ x\<^sup>\ * w * y\<^sup>\" + by (smt sup_assoc sup_commute sup_idem le_iff_sup mult_assoc n_L_split star_mult_omega zero_right_mult_decreasing) + finally have "x * (z * n(y\<^sup>\) * L \ z * y\<^sup>\ \ n(x\<^sup>\) * L \ x\<^sup>\ * w * n(y\<^sup>\) * L \ x\<^sup>\ * w * y\<^sup>\) \ z * n(y\<^sup>\) * L \ z * y\<^sup>\ \ n(x\<^sup>\) * L \ x\<^sup>\ * w * n(y\<^sup>\) * L \ x\<^sup>\ * w * y\<^sup>\" + by (smt sup_assoc sup_commute sup_idem mult_assoc star.circ_loop_fixpoint) + thus "x\<^sup>\ * z \ (z \ x\<^sup>\ * w) * y\<^sup>\" + by (smt (verit, del_insts) L_left_zero Omega_def sup_assoc le_supI1 le_sup_iff sup_ge1 mult_assoc mult_left_dist_sup mult_right_dist_sup star.circ_back_loop_fixpoint star_left_induct) +qed + +end + +text \Theorem 2.6 and Theorem 19\ + +sublocale n_omega_algebra_2 < nL_omega: itering where circ = Omega + apply unfold_locales + apply (simp add: n_Omega_circ_sup) + apply (smt L_left_zero sup_assoc sup_commute sup_bot_left Omega_def mult_assoc mult_left_dist_sup mult_right_dist_sup n_L_split omega_slide star.circ_mult) + apply (simp add: n_Omega_circ_simulate_right_sup) + using n_Omega_circ_simulate_left_sup by auto + +sublocale n_omega_algebra_2 < nL_omega: n_omega_itering where circ = Omega + apply unfold_locales + by (smt Omega_def sup_assoc sup_commute le_iff_sup mult_L_sup_star mult_left_one n_L_split n_top ni_below_L ni_def star_involutive star_mult_omega star_omega_top zero_right_mult_decreasing) + +sublocale n_omega_algebra_2 < nL_omega: left_zero_kleene_conway_semiring where circ = Omega .. + +sublocale n_omega_algebra_2 < nL_star: left_omega_conway_semiring where circ = star .. 
+
+context n_omega_algebra_2
+begin
+
+lemma circ_sup_8:
+  "n((x\<^sup>\<star> * y)\<^sup>\<star> * x\<^sup>\<omega>) * L \<le> (x\<^sup>\<Omega> * y)\<^sup>\<Omega> * x\<^sup>\<Omega>"
+  by (metis sup_ge1 nL_omega.circ_sup_4 Omega_def mult_left_isotone n_isotone omega_sum_unfold_3 order_trans)
+
+lemma n_split_omega_omega:
+  "x\<^sup>\<omega> \<le> x\<^sup>\<omega> * bot \<squnion> n(x\<^sup>\<omega>) * top"
+  by (metis n_split n_top_L omega_vector)
+
+text \<open>Theorem 20.1\<close>
+
+lemma n_below_n_star:
+  "n(x) \<le> n(x\<^sup>\<star>)"
+  by (simp add: n_isotone star.circ_increasing)
+
+text \<open>Theorem 20.2\<close>
+
+lemma n_star_below_n_omega:
+  "n(x\<^sup>\<star>) \<le> n(x\<^sup>\<omega>)"
+  by (metis n_mult_left_upper_bound star_mult_omega)
+
+lemma n_below_n_omega:
+  "n(x) \<le> n(x\<^sup>\<omega>)"
+  using order.trans n_below_n_star n_star_below_n_omega by blast
+
+text \<open>Theorem 20.4\<close>
+
+lemma star_n_L:
+  "x\<^sup>\<star> * n(x) * L = x\<^sup>\<star> * bot"
+  by (metis sup_bot_left mult_left_zero n_L_split n_dist_sup n_mult_bot n_one ni_def ni_split star_left_unfold_equal star_plus)
+
+lemma star_L_split:
+  assumes "y \<le> z"
+      and "x * z * L \<le> x * bot \<squnion> z * L"
+    shows "x\<^sup>\<star> * y * L \<le> x\<^sup>\<star> * bot \<squnion> z * L"
+proof -
+  have "x * (x\<^sup>\<star> * bot \<squnion> z * L) \<le> x\<^sup>\<star> * bot \<squnion> x * z * L"
+    by (metis sup_bot_right order.eq_iff mult_assoc mult_left_dist_sup star.circ_loop_fixpoint)
+  also have "... \<le> x\<^sup>\<star> * bot \<squnion> x * bot \<squnion> z * L"
+    using assms(2) semiring.add_left_mono sup_assoc by auto
+  also have "... = x\<^sup>\<star> * bot \<squnion> z * L"
+    using mult_left_isotone star.circ_increasing sup.absorb_iff2 sup_commute by auto
+  finally have "y * L \<squnion> x * (x\<^sup>\<star> * bot \<squnion> z * L) \<le> x\<^sup>\<star> * bot \<squnion> z * L"
+    by (metis assms(1) le_sup_iff sup_ge2 mult_left_isotone order_trans)
+  thus ?thesis
+    by (simp add: star_left_induct mult_assoc)
+qed
+
+lemma star_L_split_same:
+  "x * y * L \<le> x * bot \<squnion> y * L \<Longrightarrow> x\<^sup>\<star> * y * L = x\<^sup>\<star> * bot \<squnion> y * L"
+  apply (rule order.antisym)
+  apply (simp add: star_L_split)
+  by (metis bot_least le_supI mult_isotone nL_star.star_below_circ star.circ_loop_fixpoint sup.cobounded2 mult_assoc)
+
+lemma star_n_L_split_equal:
+  "n(x * y) \<le> n(y) \<Longrightarrow> x\<^sup>\<star> * n(y) * L = x\<^sup>\<star> * bot \<squnion> n(y) * L"
+  by (simp add: n_mult_right_upper_bound star_L_split_same)
+
+lemma n_star_mult:
+  "n(x * y) \<le> n(y) \<Longrightarrow> n(x\<^sup>\<star> * y) = n(x\<^sup>\<star>) \<squnion> n(y)"
+  by (metis n_dist_sup n_mult n_mult_bot n_n_L star_n_L_split_equal)
+
+text \<open>Theorem 20.3\<close>
+
+lemma n_omega_mult:
+  "n(x\<^sup>\<omega> * y) = n(x\<^sup>\<omega>)"
+  by (simp add: n_isotone n_mult_left_upper_bound omega_sub_vector order.eq_iff)
+
+lemma n_star_left_unfold:
+  "n(x\<^sup>\<star>) = n(x * x\<^sup>\<star>)"
+  by (metis n_mult n_mult_bot star.circ_plus_same star_n_L)
+
+lemma ni_star_below_ni_omega:
+  "ni(x\<^sup>\<star>) \<le> ni(x\<^sup>\<omega>)"
+  by (simp add: ni_n_order n_star_below_n_omega)
+
+lemma ni_below_ni_omega:
+  "ni(x) \<le> ni(x\<^sup>\<omega>)"
+  by (simp add: ni_n_order n_below_n_omega)
+
+lemma ni_star:
+  "ni(x)\<^sup>\<star> = 1 \<squnion> ni(x)"
+  by (simp add: mult_L_star ni_def)
+
+lemma ni_omega:
+  "ni(x)\<^sup>\<omega> = ni(x)"
+  using mult_L_omega ni_def by auto
+
+lemma ni_omega_induct:
+  "ni(y) \<le> ni(x * y \<squnion> z) \<Longrightarrow> ni(y) \<le> ni(x\<^sup>\<omega> \<squnion> x\<^sup>\<star> * z)"
+  using n_omega_induct ni_n_order by blast
+
+lemma star_ni:
+  "x\<^sup>\<star> * ni(x) = x\<^sup>\<star> * bot"
+  using ni_def mult_assoc star_n_L by auto
+
+lemma star_ni_split_equal:
+  "ni(x * y) \<le> ni(y) \<Longrightarrow> x\<^sup>\<star> * ni(y) = x\<^sup>\<star> * bot \<squnion> ni(y)"
+  using ni_def ni_mult_right_upper_bound mult_assoc star_L_split_same by auto
+
+lemma ni_star_mult:
+  "ni(x * y) \<le> ni(y) \<Longrightarrow> ni(x\<^sup>\<star> * y) = ni(x\<^sup>\<star>) \<squnion> ni(y)"
+  using mult_right_dist_sup ni_def ni_n_order n_star_mult by auto
+
+lemma ni_omega_mult:
+  "ni(x\<^sup>\<omega> * y) = ni(x\<^sup>\<omega>)"
+  by (simp add: ni_def n_omega_mult)
+
+lemma ni_star_left_unfold:
+  "ni(x\<^sup>\<star>) = ni(x * x\<^sup>\<star>)"
+  by (simp add: ni_def n_star_left_unfold)
+
+lemma n_star_import:
+  assumes "n(y) * x \<le> x * n(y)"
+    shows "n(y) * x\<^sup>\<star> = n(y) * (n(y) * x)\<^sup>\<star>"
+proof (rule order.antisym)
+  have "n(y) * (n(y) * x)\<^sup>\<star> * x \<le> n(y) * (n(y) * x)\<^sup>\<star>"
+    by (smt assms mult_assoc mult_right_dist_sup mult_right_sub_dist_sup_left n_mult_idempotent n_preserves_equation star.circ_back_loop_fixpoint)
+  thus "n(y) * x\<^sup>\<star> \<le> n(y) * (n(y) * x)\<^sup>\<star>"
+    using assms eq_refl n_mult_idempotent n_sub_one star.circ_import by auto
+next
+  show "n(y) * (n(y) * x)\<^sup>\<star> \<le> n(y) * x\<^sup>\<star>"
+    by (simp add: assms n_mult_idempotent n_sub_one star.circ_import)
+qed
+
+lemma n_omega_export:
+  "n(y) * x \<le> x * n(y) \<Longrightarrow> n(y) * x\<^sup>\<omega> = (n(y) * x)\<^sup>\<omega>"
+  apply (rule order.antisym)
+  apply (simp add: n_preserves_equation omega_simulation)
+  by (metis mult_right_isotone mult_1_right n_sub_one omega_isotone omega_slide)
+
+lemma n_omega_import:
+  "n(y) * x \<le> x * n(y) \<Longrightarrow> n(y) * x\<^sup>\<omega> = n(y) * (n(y) * x)\<^sup>\<omega>"
+  by (simp add: n_mult_idempotent omega_import)
+
+text \<open>Theorem 20.5\<close>
+
+lemma star_n_omega_top:
+  "x\<^sup>\<star> * n(x\<^sup>\<omega>) * top = x\<^sup>\<star> * bot \<squnion> n(x\<^sup>\<omega>) * top"
+  by (smt (verit, del_insts) le_supI le_sup_iff sup_right_divisibility order.antisym mult_assoc nL_star.circ_mult_omega nL_star.star_zero_below_circ_mult n_top_split star.circ_loop_fixpoint)
+
+(*
+lemma n_star_induct_sup: "n(z \<squnion> x * y) \<le> n(y) \<Longrightarrow> n(x\<^sup>\<star> * z) \<le> n(y)" oops
+*)
+
+end
+
+end
+
diff --git a/thys/Correctness_Algebras/N_Semirings_Boolean.thy b/thys/Correctness_Algebras/N_Semirings_Boolean.thy
new file
--- /dev/null
+++ 
b/thys/Correctness_Algebras/N_Semirings_Boolean.thy @@ -0,0 +1,659 @@
+(* Title: Boolean N-Semirings
+ Author: Walter Guttmann
+ Maintainer: Walter Guttmann
+*)
+
+section \<open>Boolean N-Semirings\<close>
+
+theory N_Semirings_Boolean
+
+imports N_Semirings
+
+begin
+
+class an =
+  fixes an :: "'a \<Rightarrow> 'a"
+
+class an_semiring = bounded_idempotent_left_zero_semiring + L + n + an + uminus +
+  assumes an_complement: "an(x) \<squnion> n(x) = 1"
+  assumes an_dist_sup : "an(x \<squnion> y) = an(x) * an(y)"
+  assumes an_export : "an(an(x) * y) = n(x) \<squnion> an(y)"
+  assumes an_mult_zero : "an(x) = an(x * bot)"
+  assumes an_L_split : "x * n(y) * L = x * bot \<squnion> n(x * y) * L"
+  assumes an_split : "an(x * L) * x \<le> x * bot"
+  assumes an_uminus : "-x = an(x * L)"
+begin
+
+text \<open>Theorem 21\<close>
+
+lemma n_an_def:
+  "n(x) = an(an(x) * L)"
+  by (metis an_dist_sup an_export an_split bot_least mult_right_isotone semiring.add_nonneg_eq_0_iff sup.orderE top_greatest vector_bot_closed)
+
+text \<open>Theorem 21\<close>
+
+lemma an_complement_bot:
+  "an(x) * n(x) = bot"
+  by (metis an_dist_sup an_split bot_least le_iff_sup mult_left_zero sup_commute n_an_def)
+
+text \<open>Theorem 21\<close>
+
+lemma an_n_def:
+  "an(x) = n(an(x) * L)"
+  by (smt (verit, ccfv_threshold) an_complement_bot an_complement mult.right_neutral mult_left_dist_sup mult_right_dist_sup sup_commute n_an_def)
+
+lemma an_case_split_left:
+  "an(z) * x \<le> y \<and> n(z) * x \<le> y \<longleftrightarrow> x \<le> y"
+  by (metis le_sup_iff an_complement mult_left_one mult_right_dist_sup)
+
+lemma an_case_split_right:
+  "x * an(z) \<le> y \<and> x * n(z) \<le> y \<longleftrightarrow> x \<le> y"
+  by (metis le_sup_iff an_complement mult_1_right mult_left_dist_sup)
+
+lemma split_sub:
+  "x * y \<le> z \<squnion> x * top"
+  by (simp add: le_supI2 mult_right_isotone)
+
+text \<open>Theorem 21\<close>
+
+subclass n_semiring
+  apply unfold_locales
+  apply (metis an_dist_sup an_split mult_left_zero sup.absorb2 sup_bot_left sup_commute n_an_def)
+  apply (metis sup_left_top an_complement an_dist_sup an_export mult_assoc n_an_def)
+  apply (metis an_dist_sup an_export mult_assoc n_an_def)
+ 
apply (metis an_dist_sup an_export an_n_def mult_right_dist_sup n_an_def) + apply (metis sup_idem an_dist_sup an_mult_zero n_an_def) + apply (simp add: an_L_split) + by (meson an_case_split_left an_split le_supI1 split_sub) + +lemma n_complement_bot: + "n(x) * an(x) = bot" + by (metis an_complement_bot an_n_def n_an_def) + +lemma an_bot: + "an(bot) = 1" + by (metis sup_bot_right an_complement n_bot) + +lemma an_one: + "an(1) = 1" + by (metis sup_bot_right an_complement n_one) + +lemma an_L: + "an(L) = bot" + using an_one n_one n_an_def by auto + +lemma an_top: + "an(top) = bot" + by (metis mult_left_one n_complement_bot n_top) + +lemma an_export_n: + "an(n(x) * y) = an(x) \ an(y)" + by (metis an_export an_n_def n_an_def) + +lemma n_export_an: + "n(an(x) * y) = an(x) * n(y)" + by (metis an_n_def n_export) + +lemma n_an_mult_commutative: + "n(x) * an(y) = an(y) * n(x)" + by (metis sup_commute an_dist_sup n_an_def) + +lemma an_mult_commutative: + "an(x) * an(y) = an(y) * an(x)" + by (metis sup_commute an_dist_sup) + +lemma an_mult_idempotent: + "an(x) * an(x) = an(x)" + by (metis sup_idem an_dist_sup) + +lemma an_sub_one: + "an(x) \ 1" + using an_complement sup.cobounded1 by fastforce + +text \Theorem 21\ + +lemma an_antitone: + "x \ y \ an(y) \ an(x)" + by (metis an_n_def an_dist_sup n_order sup.absorb1) + +lemma an_mult_left_upper_bound: + "an(x * y) \ an(x)" + by (metis an_antitone an_mult_zero mult_right_isotone bot_least) + +lemma an_mult_right_zero: + "an(x) * bot = bot" + by (metis an_n_def n_mult_right_bot) + +lemma n_mult_an: + "n(x * an(y)) = n(x)" + by (metis an_n_def n_mult_n) + +lemma an_mult_n: + "an(x * n(y)) = an(x)" + by (metis an_n_def n_an_def n_mult_n) + +lemma an_mult_an: + "an(x * an(y)) = an(x)" + by (metis an_mult_n an_n_def) + +lemma an_mult_left_absorb_sup: + "an(x) * (an(x) \ an(y)) = an(x)" + by (metis an_n_def n_mult_left_absorb_sup) + +lemma an_mult_right_absorb_sup: + "(an(x) \ an(y)) * an(y) = an(y)" + by (metis an_n_def 
n_mult_right_absorb_sup) + +lemma an_sup_left_absorb_mult: + "an(x) \ an(x) * an(y) = an(x)" + using an_case_split_right sup_absorb1 by blast + +lemma an_sup_right_absorb_mult: + "an(x) * an(y) \ an(y) = an(y)" + using an_case_split_left sup_absorb2 by blast + +lemma an_sup_left_dist_mult: + "an(x) \ an(y) * an(z) = (an(x) \ an(y)) * (an(x) \ an(z))" + by (metis an_n_def n_sup_left_dist_mult) + +lemma an_sup_right_dist_mult: + "an(x) * an(y) \ an(z) = (an(x) \ an(z)) * (an(y) \ an(z))" + by (simp add: an_sup_left_dist_mult sup_commute) + +lemma an_n_order: + "an(x) \ an(y) \ n(y) \ n(x)" + by (smt (verit) an_n_def an_dist_sup le_iff_sup n_dist_sup n_mult_right_absorb_sup sup.orderE n_an_def) + +lemma an_order: + "an(x) \ an(y) \ an(x) * an(y) = an(x)" + by (metis an_n_def n_order) + +lemma an_mult_left_lower_bound: + "an(x) * an(y) \ an(x)" + using an_case_split_right by blast + +lemma an_mult_right_lower_bound: + "an(x) * an(y) \ an(y)" + by (simp add: an_sup_right_absorb_mult le_iff_sup) + +lemma an_n_mult_left_lower_bound: + "an(x) * n(y) \ an(x)" + using an_case_split_right by blast + +lemma an_n_mult_right_lower_bound: + "an(x) * n(y) \ n(y)" + using an_case_split_left by auto + +lemma n_an_mult_left_lower_bound: + "n(x) * an(y) \ n(x)" + using an_case_split_right by auto + +lemma n_an_mult_right_lower_bound: + "n(x) * an(y) \ an(y)" + using an_case_split_left by blast + +lemma an_mult_least_upper_bound: + "an(x) \ an(y) \ an(x) \ an(z) \ an(x) \ an(y) * an(z)" + by (metis an_mult_idempotent an_mult_left_lower_bound an_mult_right_lower_bound order.trans mult_isotone) + +lemma an_mult_left_divisibility: + "an(x) \ an(y) \ (\z . an(x) = an(y) * an(z))" + by (metis an_mult_commutative an_mult_left_lower_bound an_order) + +lemma an_mult_right_divisibility: + "an(x) \ an(y) \ (\z . 
an(x) = an(z) * an(y))" + by (simp add: an_mult_commutative an_mult_left_divisibility) + +lemma an_split_top: + "an(x * L) * x * top \ x * bot" + by (metis an_split mult_assoc mult_left_isotone mult_left_zero) + +lemma an_n_L: + "an(n(x) * L) = an(x)" + using an_n_def n_an_def by auto + +lemma an_galois: + "an(y) \ an(x) \ n(x) * L \ y" + by (simp add: an_n_order n_galois) + +lemma an_mult: + "an(x * n(y) * L) = an(x * y)" + by (metis an_n_L n_mult) + +lemma n_mult_top: + "an(x * n(y) * top) = an(x * y)" + by (metis an_n_L n_mult_top) + +lemma an_n_equal: + "an(x) = an(y) \ n(x) = n(y)" + by (metis an_n_L n_an_def) + +lemma an_top_L: + "an(x * top) = an(x * L)" + by (simp add: an_n_equal n_top_L) + +lemma an_case_split_left_equal: + "an(z) * x = an(z) * y \ n(z) * x = n(z) * y \ x = y" + using an_complement case_split_left_equal by blast + +lemma an_case_split_right_equal: + "x * an(z) = y * an(z) \ x * n(z) = y * n(z) \ x = y" + using an_complement case_split_right_equal by blast + +lemma an_equal_complement: + "n(x) \ an(y) = 1 \ n(x) * an(y) = bot \ an(x) = an(y)" + by (metis sup_commute an_complement an_dist_sup mult_left_one mult_right_dist_sup n_complement_bot) + +lemma n_equal_complement: + "n(x) \ an(y) = 1 \ n(x) * an(y) = bot \ n(x) = n(y)" + by (simp add: an_equal_complement an_n_equal) + +lemma an_shunting: + "an(z) * x \ y \ x \ y \ n(z) * top" + apply (rule iffI) + apply (meson an_case_split_left le_supI1 split_sub) + by (metis sup_bot_right an_case_split_left an_complement_bot mult_assoc mult_left_dist_sup mult_left_zero mult_right_isotone order_refl order_trans) + +lemma an_shunting_an: + "an(z) * an(x) \ an(y) \ an(x) \ n(z) \ an(y)" + apply (rule iffI) + apply (smt sup_ge1 sup_ge2 an_case_split_left n_an_mult_left_lower_bound order_trans) + by (metis sup_bot_left sup_ge2 an_case_split_left an_complement_bot mult_left_dist_sup mult_right_isotone order_trans) + +lemma an_L_zero: + "an(x * L) * x = an(x * L) * x * bot" + by (metis an_complement_bot 
n_split_equal sup_monoid.add_0_right vector_bot_closed mult_assoc n_export_an) + +lemma n_plus_complement_intro_n: + "n(x) \ an(x) * n(y) = n(x) \ n(y)" + by (metis sup_commute an_complement an_n_def mult_1_right n_sup_right_dist_mult n_an_mult_commutative) + +lemma n_plus_complement_intro_an: + "n(x) \ an(x) * an(y) = n(x) \ an(y)" + by (metis an_n_def n_plus_complement_intro_n) + +lemma an_plus_complement_intro_n: + "an(x) \ n(x) * n(y) = an(x) \ n(y)" + by (metis an_n_def n_an_def n_plus_complement_intro_n) + +lemma an_plus_complement_intro_an: + "an(x) \ n(x) * an(y) = an(x) \ an(y)" + by (metis an_n_def an_plus_complement_intro_n) + +lemma n_mult_complement_intro_n: + "n(x) * (an(x) \ n(y)) = n(x) * n(y)" + by (simp add: mult_left_dist_sup n_complement_bot) + +lemma n_mult_complement_intro_an: + "n(x) * (an(x) \ an(y)) = n(x) * an(y)" + by (simp add: semiring.distrib_left n_complement_bot) + +lemma an_mult_complement_intro_n: + "an(x) * (n(x) \ n(y)) = an(x) * n(y)" + by (simp add: an_complement_bot mult_left_dist_sup) + +lemma an_mult_complement_intro_an: + "an(x) * (n(x) \ an(y)) = an(x) * an(y)" + by (simp add: an_complement_bot semiring.distrib_left) + +lemma an_preserves_equation: + "an(y) * x \ x * an(y) \ an(y) * x = an(y) * x * an(y)" + by (metis an_n_def n_preserves_equation) + +lemma wnf_lemma_1: + "(n(p * L) * n(q * L) \ an(p * L) * an(r * L)) * n(p * L) = n(p * L) * n(q * L)" + by (smt sup_commute an_n_def n_sup_left_absorb_mult n_sup_right_dist_mult n_export n_mult_commutative n_mult_complement_intro_n) + +lemma wnf_lemma_2: + "(n(p * L) * n(q * L) \ an(r * L) * an(q * L)) * n(q * L) = n(p * L) * n(q * L)" + by (metis an_mult_commutative n_mult_commutative wnf_lemma_1) + +lemma wnf_lemma_3: + "(n(p * L) * n(r * L) \ an(p * L) * an(q * L)) * an(p * L) = an(p * L) * an(q * L)" + by (metis an_n_def sup_commute wnf_lemma_1 n_an_def) + +lemma wnf_lemma_4: + "(n(r * L) * n(q * L) \ an(p * L) * an(q * L)) * an(q * L) = an(p * L) * an(q * L)" + by (metis 
an_mult_commutative n_mult_commutative wnf_lemma_3) + +lemma wnf_lemma_5: + "n(p \ q) * (n(q) * x \ an(q) * y) = n(q) * x \ an(q) * n(p) * y" + by (smt sup_bot_right mult_assoc mult_left_dist_sup n_an_mult_commutative n_complement_bot n_dist_sup n_mult_right_absorb_sup) + +definition ani :: "'a \ 'a" + where "ani x \ an(x) * L" + +lemma ani_bot: + "ani(bot) = L" + using an_bot ani_def by auto + +lemma ani_one: + "ani(1) = L" + using an_one ani_def by auto + +lemma ani_L: + "ani(L) = bot" + by (simp add: an_L ani_def) + +lemma ani_top: + "ani(top) = bot" + by (simp add: an_top ani_def) + +lemma ani_complement: + "ani(x) \ ni(x) = L" + by (metis an_complement ani_def mult_right_dist_sup n_top ni_def ni_top) + +lemma ani_mult_zero: + "ani(x) = ani(x * bot)" + using ani_def an_mult_zero by auto + +lemma ani_antitone: + "y \ x \ ani(x) \ ani(y)" + by (simp add: an_antitone ani_def mult_left_isotone) + +lemma ani_mult_left_upper_bound: + "ani(x * y) \ ani(x)" + by (simp add: an_mult_left_upper_bound ani_def mult_left_isotone) + +lemma ani_involutive: + "ani(ani(x)) = ni(x)" + by (simp add: ani_def ni_def n_an_def) + +lemma ani_below_L: + "ani(x) \ L" + using an_case_split_left ani_def by auto + +lemma ani_left_zero: + "ani(x) * y = ani(x)" + by (simp add: ani_def L_left_zero mult_assoc) + +lemma ani_top_L: + "ani(x * top) = ani(x * L)" + by (simp add: an_top_L ani_def) + +lemma ani_ni_order: + "ani(x) \ ani(y) \ ni(y) \ ni(x)" + by (metis an_n_L ani_antitone ani_def ani_involutive ni_def) + +lemma ani_ni_equal: + "ani(x) = ani(y) \ ni(x) = ni(y)" + by (metis ani_ni_order order.antisym order_refl) + +lemma ni_ani: + "ni(ani(x)) = ani(x)" + using an_n_def ani_def ni_def by auto + +lemma ani_ni: + "ani(ni(x)) = ani(x)" + by (simp add: an_n_L ani_def ni_def) + +lemma ani_mult: + "ani(x * ni(y)) = ani(x * y)" + using ani_ni_equal ni_mult by blast + +lemma ani_an_order: + "ani(x) \ ani(y) \ an(x) \ an(y)" + using an_galois ani_ni_order ni_def ni_galois by auto + +lemma 
ani_an_equal: + "ani(x) = ani(y) \ an(x) = an(y)" + by (metis an_n_def ani_def) + +lemma n_mult_ani: + "n(x) * ani(x) = bot" + by (metis an_L ani_L ani_def mult_assoc n_complement_bot) + +lemma an_mult_ni: + "an(x) * ni(x) = bot" + by (metis an_n_def ani_def n_an_def n_mult_ani ni_def) + +lemma n_mult_ni: + "n(x) * ni(x) = ni(x)" + by (metis n_export n_order ni_def ni_export order_refl) + +lemma an_mult_ani: + "an(x) * ani(x) = ani(x)" + by (metis an_n_def ani_def n_mult_ni ni_def) + +lemma ani_ni_meet: + "x \ ani(y) \ x \ ni(y) \ x = bot" + by (metis an_case_split_left an_mult_ni bot_unique mult_right_isotone n_mult_ani) + +lemma ani_galois: + "ani(x) \ y \ ni(x \ y) = L" + apply (rule iffI) + apply (smt (z3) an_L an_mult_commutative an_mult_right_zero ani_def an_dist_sup ni_L ni_n_equal sup.absorb1 mult_assoc n_an_def n_complement_bot) + by (metis an_L an_galois an_mult_ni an_n_def an_shunting_an ani_def an_dist_sup an_export idempotent_bot_closed n_bot transitive_bot_closed) + +lemma an_ani: + "an(ani(x)) = n(x)" + by (simp add: ani_def n_an_def) + +lemma n_ani: + "n(ani(x)) = an(x)" + using an_n_def ani_def by auto + +lemma an_ni: + "an(ni(x)) = an(x)" + by (simp add: an_n_L ni_def) + +lemma ani_an: + "ani(an(x)) = L" + by (metis an_mult_right_zero an_mult_zero an_bot ani_def mult_left_one) + +lemma ani_n: + "ani(n(x)) = L" + by (simp add: ani_an n_an_def) + +lemma ni_an: + "ni(an(x)) = bot" + using an_L ani_an ani_def ni_n_bot n_an_def by force + +lemma ani_mult_n: + "ani(x * n(y)) = ani(x)" + by (simp add: an_mult_n ani_def) + +lemma ani_mult_an: + "ani(x * an(y)) = ani(x)" + by (simp add: an_mult_an ani_def) + +lemma ani_export_n: + "ani(n(x) * y) = ani(x) \ ani(y)" + by (simp add: an_export_n ani_def mult_right_dist_sup) + +lemma ani_export_an: + "ani(an(x) * y) = ni(x) \ ani(y)" + by (simp add: ani_def an_export ni_def semiring.distrib_right) + +lemma ni_export_an: + "ni(an(x) * y) = an(x) * ni(y)" + by (simp add: an_mult_right_zero ni_split) + +lemma 
ani_mult_top: + "ani(x * n(y) * top) = ani(x * y)" + using ani_def n_mult_top by auto + +lemma ani_an_bot: + "ani(x) = bot \ an(x) = bot" + using an_L ani_L ani_an_equal by force + +lemma ani_an_L: + "ani(x) = L \ an(x) = 1" + using an_bot ani_an_equal ani_bot by force + +text \Theorem 21\ + +subclass tests + apply unfold_locales + apply (simp add: mult_assoc) + apply (simp add: an_mult_commutative an_uminus) + apply (smt an_sup_left_dist_mult an_export_n an_n_L an_uminus n_an_def n_complement_bot n_export) + apply (metis an_dist_sup an_n_def an_uminus n_an_def) + using an_complement_bot an_uminus n_an_def apply fastforce + apply (simp add: an_bot an_uminus) + using an_export_n an_mult an_uminus n_an_def apply fastforce + using an_order an_uminus apply force + by (simp add: less_le_not_le) + +end + +class an_itering = n_itering + an_semiring + while + + assumes while_circ_def: "p \ y = (p * y)\<^sup>\ * -p" +begin + +subclass test_itering + apply unfold_locales + by (rule while_circ_def) + +lemma an_circ_left_unfold: + "an(x\<^sup>\) = an(x * x\<^sup>\)" + by (metis an_dist_sup an_one circ_left_unfold mult_left_one) + +lemma an_circ_x_n_circ: + "an(x\<^sup>\) * x * n(x\<^sup>\) \ x * bot" + by (metis an_circ_left_unfold an_mult an_split mult_assoc n_mult_right_bot) + +lemma an_circ_invariant: + "an(x\<^sup>\) * x \ x * an(x\<^sup>\)" +proof - + have 1: "an(x\<^sup>\) * x * an(x\<^sup>\) \ x * an(x\<^sup>\)" + by (metis an_case_split_left mult_assoc order_refl) + have "an(x\<^sup>\) * x * n(x\<^sup>\) \ x * an(x\<^sup>\)" + by (metis an_circ_x_n_circ order_trans mult_right_isotone bot_least) + thus ?thesis + using 1 an_case_split_right by blast +qed + +lemma ani_circ: + "ani(x)\<^sup>\ = 1 \ ani(x)" + by (simp add: ani_def mult_L_circ) + +lemma ani_circ_left_unfold: + "ani(x\<^sup>\) = ani(x * x\<^sup>\)" + by (simp add: an_circ_left_unfold ani_def) + +lemma an_circ_import: + "an(y) * x \ x * an(y) \ an(y) * x\<^sup>\ = an(y) * (an(y) * x)\<^sup>\" + by (metis 
an_n_def n_circ_import) + +lemma preserves_L: + "preserves L (-p)" + using L_left_zero preserves_equation_test mult_assoc by force + +end + +class an_omega_algebra = n_omega_algebra_2 + an_semiring + while + + assumes while_Omega_def: "p \ y = (p * y)\<^sup>\ * -p" +begin + +lemma an_split_omega_omega: + "an(x\<^sup>\) * x\<^sup>\ \ x\<^sup>\ * bot" + by (meson an_antitone an_split mult_left_isotone omega_sub_vector order_trans) + +lemma an_omega_below_an_star: + "an(x\<^sup>\) \ an(x\<^sup>\)" + by (simp add: an_n_order n_star_below_n_omega) + +lemma an_omega_below_an: + "an(x\<^sup>\) \ an(x)" + by (simp add: an_n_order n_below_n_omega) + +lemma an_omega_induct: + "an(x * y \ z) \ an(y) \ an(x\<^sup>\ \ x\<^sup>\ * z) \ an(y)" + by (simp add: an_n_order n_omega_induct) + +lemma an_star_mult: + "an(y) \ an(x * y) \ an(x\<^sup>\ * y) = an(x\<^sup>\) * an(y)" + by (metis an_dist_sup an_n_L an_n_order n_dist_sup n_star_mult) + +lemma an_omega_mult: + "an(x\<^sup>\ * y) = an(x\<^sup>\)" + by (simp add: an_n_equal n_omega_mult) + +lemma an_star_left_unfold: + "an(x\<^sup>\) = an(x * x\<^sup>\)" + by (simp add: an_n_equal n_star_left_unfold) + +lemma an_star_x_n_star: + "an(x\<^sup>\) * x * n(x\<^sup>\) \ x * bot" + by (metis an_n_L an_split n_mult n_mult_right_bot n_star_left_unfold mult_assoc) + +lemma an_star_invariant: + "an(x\<^sup>\) * x \ x * an(x\<^sup>\)" +proof - + have 1: "an(x\<^sup>\) * x * an(x\<^sup>\) \ x * an(x\<^sup>\)" + using an_case_split_left mult_assoc by auto + have "an(x\<^sup>\) * x * n(x\<^sup>\) \ x * an(x\<^sup>\)" + by (metis an_star_x_n_star order_trans mult_right_isotone bot_least) + thus ?thesis + using 1 an_case_split_right by auto +qed + +lemma n_an_star_unfold_invariant: + "n(an(x\<^sup>\) * x\<^sup>\) \ an(x) * n(x * an(x\<^sup>\) * x\<^sup>\)" +proof - + have "n(an(x\<^sup>\) * x\<^sup>\) \ an(x)" + using an_star_left_unfold an_case_split_right an_mult_left_upper_bound n_export_an by fastforce + thus ?thesis + by (smt 
an_star_invariant le_iff_sup mult_assoc mult_right_dist_sup n_isotone n_order omega_unfold) +qed + +lemma ani_omega_below_ani_star: + "ani(x\<^sup>\) \ ani(x\<^sup>\)" + by (simp add: an_omega_below_an_star ani_an_order) + +lemma ani_omega_below_ani: + "ani(x\<^sup>\) \ ani(x)" + by (simp add: an_omega_below_an ani_an_order) + +lemma ani_star: + "ani(x)\<^sup>\ = 1 \ ani(x)" + by (simp add: ani_def mult_L_star) + +lemma ani_omega: + "ani(x)\<^sup>\ = ani(x) * L" + by (simp add: L_left_zero ani_def mult_L_omega mult_assoc) + +lemma ani_omega_induct: + "ani(x * y \ z) \ ani(y) \ ani(x\<^sup>\ \ x\<^sup>\ * z) \ ani(y)" + by (simp add: an_omega_induct ani_an_order) + +lemma ani_omega_mult: + "ani(x\<^sup>\ * y) = ani(x\<^sup>\)" + by (simp add: an_omega_mult ani_def) + +lemma ani_star_left_unfold: + "ani(x\<^sup>\) = ani(x * x\<^sup>\)" + by (simp add: an_star_left_unfold ani_def) + +lemma an_star_import: + "an(y) * x \ x * an(y) \ an(y) * x\<^sup>\ = an(y) * (an(y) * x)\<^sup>\" + by (metis an_n_def n_star_import) + +lemma an_omega_export: + "an(y) * x \ x * an(y) \ an(y) * x\<^sup>\ = (an(y) * x)\<^sup>\" + by (metis an_n_def n_omega_export) + +lemma an_omega_import: + "an(y) * x \ x * an(y) \ an(y) * x\<^sup>\ = an(y) * (an(y) * x)\<^sup>\" + by (simp add: an_mult_idempotent omega_import) + +end + +text \Theorem 22\ + +sublocale an_omega_algebra < nL_omega: an_itering where circ = Omega + apply unfold_locales + by (rule while_Omega_def) + +context an_omega_algebra +begin + +lemma preserves_star: + "nL_omega.preserves x (-p) \ nL_omega.preserves (x\<^sup>\) (-p)" + by (simp add: nL_omega.preserves_def star.circ_simulate) + +end + +end + diff --git a/thys/Correctness_Algebras/N_Semirings_Modal.thy b/thys/Correctness_Algebras/N_Semirings_Modal.thy new file mode 100644 --- /dev/null +++ b/thys/Correctness_Algebras/N_Semirings_Modal.thy @@ -0,0 +1,823 @@ +(* Title: Modal N-Semirings + Author: Walter Guttmann + Maintainer: Walter Guttmann +*) + +section \Modal 
N-Semirings\ + +theory N_Semirings_Modal + +imports N_Semirings_Boolean + +begin + +class n_diamond_semiring = n_semiring + diamond + + assumes ndiamond_def: "|x>y = n(x * y * L)" +begin + +lemma diamond_x_bot: + "|x>bot = n(x)" + using n_mult_bot ndiamond_def mult_assoc by auto + +lemma diamond_x_1: + "|x>1 = n(x * L)" + by (simp add: ndiamond_def) + +lemma diamond_x_L: + "|x>L = n(x * L)" + by (simp add: L_left_zero ndiamond_def mult_assoc) + +lemma diamond_x_top: + "|x>top = n(x * L)" + by (metis mult_assoc n_top_L ndiamond_def top_mult_top) + +lemma diamond_x_n: + "|x>n(y) = n(x * y)" + by (simp add: n_mult ndiamond_def) + +lemma diamond_bot_y: + "|bot>y = bot" + by (simp add: n_bot ndiamond_def) + +lemma diamond_1_y: + "|1>y = n(y * L)" + by (simp add: ndiamond_def) + +lemma diamond_1_n: + "|1>n(y) = n(y)" + by (simp add: diamond_1_y n_n_L) + +lemma diamond_L_y: + "|L>y = 1" + by (simp add: L_left_zero n_L ndiamond_def) + +lemma diamond_top_y: + "|top>y = 1" + by (metis sup_left_top sup_right_top diamond_L_y mult_right_dist_sup n_dist_sup n_top ndiamond_def) + +lemma diamond_n_y: + "|n(x)>y = n(x) * n(y * L)" + by (simp add: n_export ndiamond_def mult_assoc) + +lemma diamond_n_bot: + "|n(x)>bot = bot" + by (simp add: n_bot n_mult_right_bot ndiamond_def) + +lemma diamond_n_1: + "|n(x)>1 = n(x)" + using diamond_1_n diamond_1_y diamond_x_1 by auto + +lemma diamond_n_n: + "|n(x)>n(y) = n(x) * n(y)" + by (simp add: diamond_x_n n_export) + +lemma diamond_n_n_same: + "|n(x)>n(x) = n(x)" + by (simp add: diamond_n_n n_mult_idempotent) + +text \Theorem 23.1\ + +lemma diamond_left_dist_sup: + "|x \ y>z = |x>z \ |y>z" + by (simp add: mult_right_dist_sup n_dist_sup ndiamond_def) + +text \Theorem 23.2\ + +lemma diamond_right_dist_sup: + "|x>(y \ z) = |x>y \ |x>z" + by (simp add: mult_left_dist_sup n_dist_sup ndiamond_def semiring.distrib_right) + +text \Theorem 23.3\ + +lemma diamond_associative: + "|x * y>z = |x>(y * z)" + by (simp add: ndiamond_def mult_assoc) + +text 
\Theorem 23.3\ + +lemma diamond_left_mult: + "|x * y>z = |x>|y>z" + using n_mult_ni ndiamond_def ni_def mult_assoc by auto + +lemma diamond_right_mult: + "|x>(y * z) = |x>|y>z" + using diamond_associative diamond_left_mult by force + +lemma diamond_n_export: + "|n(x) * y>z = n(x) * |y>z" + by (simp add: n_export ndiamond_def mult_assoc) + +lemma diamond_diamond_export: + "||x>y>z = |x>y * |z>1" + using diamond_n_y ndiamond_def by auto + +lemma diamond_left_isotone: + "x \ y \ |x>z \ |y>z" + by (metis diamond_left_dist_sup le_iff_sup) + +lemma diamond_right_isotone: + "y \ z \ |x>y \ |x>z" + by (metis diamond_right_dist_sup le_iff_sup) + +lemma diamond_isotone: + "w \ y \ x \ z \ |w>x \ |y>z" + by (meson diamond_left_isotone diamond_right_isotone order_trans) + +definition ndiamond_L :: "'a \ 'a \ 'a" ("\ _ \ _" [50,90] 95) + where "\x\y \ n(x * y) * L" + +lemma ndiamond_to_L: + "\x\y = |x>n(y) * L" + by (simp add: diamond_x_n ndiamond_L_def) + +lemma ndiamond_from_L: + "|x>y = n(\x\(y * L))" + by (simp add: n_n_L ndiamond_def mult_assoc ndiamond_L_def) + +lemma diamond_L_ni: + "\x\y = ni(x * y)" + by (simp add: ni_def ndiamond_L_def) + +lemma diamond_L_associative: + "\x * y\z = \x\(y * z)" + by (simp add: diamond_L_ni mult_assoc) + +lemma diamond_L_left_mult: + "\x * y\z = \x\\y\z" + using diamond_L_associative diamond_L_ni ni_mult by auto + +lemma diamond_L_right_mult: + "\x\(y * z) = \x\\y\z" + using diamond_L_associative diamond_L_left_mult by auto + +lemma diamond_L_left_dist_sup: + "\x \ y\z = \x\z \ \y\z" + by (simp add: diamond_L_ni mult_right_dist_sup ni_dist_sup) + +lemma diamond_L_x_ni: + "\x\ni(y) = ni(x * y)" + using n_mult_ni ni_def ndiamond_L_def by auto + +lemma diamond_L_left_isotone: + "x \ y \ \x\z \ \y\z" + using mult_left_isotone ni_def ni_isotone ndiamond_L_def by auto + +lemma diamond_L_right_isotone: + "y \ z \ \x\y \ \x\z" + using mult_right_isotone ni_def ni_isotone ndiamond_L_def by auto + +lemma diamond_L_isotone: + "w \ y \ x \ z \ \w\x 
\ \y\z" + using diamond_L_ni mult_isotone ni_isotone by force + +end + +class n_box_semiring = n_diamond_semiring + an_semiring + box + + assumes nbox_def: "|x]y = an(x * an(y * L) * L)" +begin + +text \Theorem 23.8\ + +lemma box_diamond: + "|x]y = an( |x>an(y * L) * L)" + by (simp add: an_n_L nbox_def ndiamond_def) + +text \Theorem 23.4\ + +lemma diamond_box: + "|x>y = an( |x]an(y * L) * L)" + using n_an_def n_mult nbox_def ndiamond_def mult_assoc by force + +lemma box_x_bot: + "|x]bot = an(x * L)" + by (simp add: an_bot nbox_def) + +lemma box_x_1: + "|x]1 = an(x)" + using an_L an_mult_an nbox_def mult_assoc by auto + +lemma box_x_L: + "|x]L = an(x)" + using box_x_1 L_left_zero nbox_def by auto + +lemma box_x_top: + "|x]top = an(x)" + by (metis box_diamond box_x_1 box_x_bot diamond_top_y) + +lemma box_x_n: + "|x]n(y) = an(x * an(y) * L)" + by (simp add: an_n_L nbox_def) + +lemma box_x_an: + "|x]an(y) = an(x * y)" + using an_mult n_an_def nbox_def by auto + +lemma box_bot_y: + "|bot]y = 1" + by (simp add: an_bot nbox_def) + +lemma box_1_y: + "|1]y = n(y * L)" + by (simp add: n_an_def nbox_def) + +lemma box_1_n: + "|1]n(y) = n(y)" + using box_1_y diamond_1_n diamond_1_y by auto + +lemma box_1_an: + "|1]an(y) = an(y)" + by (simp add: box_x_an) + +lemma box_L_y: + "|L]y = bot" + by (simp add: L_left_zero an_L nbox_def) + +lemma box_top_y: + "|top]y = bot" + by (simp add: box_diamond an_L diamond_top_y) + +lemma box_n_y: + "|n(x)]y = an(x) \ n(y * L)" + using an_export_n n_an_def nbox_def mult_assoc by auto + +lemma box_an_y: + "|an(x)]y = n(x) \ n(y * L)" + by (metis an_n_def box_n_y n_an_def) + +lemma box_n_bot: + "|n(x)]bot = an(x)" + by (simp add: box_x_bot an_n_L) + +lemma box_an_bot: + "|an(x)]bot = n(x)" + by (simp add: box_x_bot n_an_def) + +lemma box_n_1: + "|n(x)]1 = 1" + using box_x_1 ani_an_L ani_n by auto + +lemma box_an_1: + "|an(x)]1 = 1" + using box_x_1 ani_an ani_an_L by fastforce + +lemma box_n_n: + "|n(x)]n(y) = an(x) \ n(y)" + using box_1_n box_1_y 
box_n_y by auto + +lemma box_an_n: + "|an(x)]n(y) = n(x) \ n(y)" + using box_x_n an_dist_sup n_an_def n_dist_sup by auto + +lemma box_n_an: + "|n(x)]an(y) = an(x) \ an(y)" + by (simp add: box_x_an an_export_n) + +lemma box_an_an: + "|an(x)]an(y) = n(x) \ an(y)" + by (simp add: box_x_an an_export) + +lemma box_n_n_same: + "|n(x)]n(x) = 1" + by (simp add: box_n_n an_complement) + +lemma box_an_an_same: + "|an(x)]an(x) = 1" + using box_an_bot an_bot an_complement_bot nbox_def by auto + +text \Theorem 23.5\ + +lemma box_left_dist_sup: + "|x \ y]z = |x]z * |y]z" + using an_dist_sup nbox_def semiring.distrib_right by auto + +lemma box_right_dist_sup: + "|x](y \ z) = an(x * an(y * L) * an(z * L) * L)" + by (simp add: an_dist_sup mult_right_dist_sup nbox_def mult_assoc) + +lemma box_associative: + "|x * y]z = an(x * y * an(z * L) * L)" + by (simp add: nbox_def) + +text \Theorem 23.7\ + +lemma box_left_mult: + "|x * y]z = |x]|y]z" + using box_x_an nbox_def mult_assoc by auto + +lemma box_right_mult: + "|x](y * z) = an(x * an(y * z * L) * L)" + by (simp add: nbox_def) + +text \Theorem 23.6\ + +lemma box_right_mult_n_n: + "|x](n(y) * n(z)) = |x]n(y) * |x]n(z)" + by (smt an_dist_sup an_export_n an_n_L mult_assoc mult_left_dist_sup mult_right_dist_sup nbox_def) + +lemma box_right_mult_an_n: + "|x](an(y) * n(z)) = |x]an(y) * |x]n(z)" + by (metis an_n_def box_right_mult_n_n) + +lemma box_right_mult_n_an: + "|x](n(y) * an(z)) = |x]n(y) * |x]an(z)" + by (simp add: box_right_mult_an_n box_x_an box_x_n an_mult_commutative n_an_mult_commutative) + +lemma box_right_mult_an_an: + "|x](an(y) * an(z)) = |x]an(y) * |x]an(z)" + by (metis an_dist_sup box_x_an mult_left_dist_sup) + +lemma box_n_export: + "|n(x) * y]z = an(x) \ |y]z" + using box_left_mult box_n_an nbox_def by auto + +lemma box_an_export: + "|an(x) * y]z = n(x) \ |y]z" + using box_an_an box_left_mult nbox_def by auto + +lemma box_left_antitone: + "y \ x \ |x]z \ |y]z" + by (smt an_mult_commutative an_order box_diamond 
box_left_dist_sup le_iff_sup) + +lemma box_right_isotone: + "y \ z \ |x]y \ |x]z" + by (metis an_antitone mult_left_isotone mult_right_isotone nbox_def) + +lemma box_antitone_isotone: + "y \ w \ x \ z \ |w]x \ |y]z" + by (meson box_left_antitone box_right_isotone order.trans) + +definition nbox_L :: "'a \ 'a \ 'a" ("\ _ \ _" [50,90] 95) + where "\x\y \ an(x * an(y) * L) * L" + +lemma nbox_to_L: + "\x\y = |x]n(y) * L" + by (simp add: box_x_n nbox_L_def) + +lemma nbox_from_L: + "|x]y = n(\x\(y * L))" + using an_n_def nbox_def nbox_L_def by auto + +lemma diamond_x_an: + "|x>an(y) = n(x * an(y) * L)" + by (simp add: ndiamond_def) + +lemma diamond_1_an: + "|1>an(y) = an(y)" + using box_1_an box_1_y diamond_1_y by auto + +lemma diamond_an_y: + "|an(x)>y = an(x) * n(y * L)" + by (simp add: n_export_an ndiamond_def mult_assoc) + +lemma diamond_an_bot: + "|an(x)>bot = bot" + by (simp add: an_mult_right_zero n_bot ndiamond_def) + +lemma diamond_an_1: + "|an(x)>1 = an(x)" + using an_n_def diamond_x_1 by auto + +lemma diamond_an_n: + "|an(x)>n(y) = an(x) * n(y)" + by (simp add: diamond_x_n n_export_an) + +lemma diamond_n_an: + "|n(x)>an(y) = n(x) * an(y)" + using an_n_def diamond_n_y by auto + +lemma diamond_an_an: + "|an(x)>an(y) = an(x) * an(y)" + using diamond_an_y an_n_def by auto + +lemma diamond_an_an_same: + "|an(x)>an(x) = an(x)" + by (simp add: diamond_an_an an_mult_idempotent) + +lemma diamond_an_export: + "|an(x) * y>z = an(x) * |y>z" + using diamond_an_an diamond_box diamond_left_mult by auto + +lemma box_ani: + "|x]y = an(x * ani(y * L))" + by (simp add: ani_def nbox_def mult_assoc) + +lemma box_x_n_ani: + "|x]n(y) = an(x * ani(y))" + by (simp add: box_x_n ani_def mult_assoc) + +lemma box_L_ani: + "\x\y = ani(x * ani(y))" + using box_x_n_ani ani_def nbox_to_L by auto + +lemma box_L_left_mult: + "\x * y\z = \x\\y\z" + using an_mult n_an_def mult_assoc nbox_L_def by auto + +lemma diamond_x_an_ani: + "|x>an(y) = n(x * ani(y))" + by (simp add: ani_def ndiamond_def 
mult_assoc) + +lemma box_L_left_antitone: + "y \ x \ \x\z \ \y\z" + by (simp add: box_L_ani ani_antitone mult_left_isotone) + +lemma box_L_right_isotone: + "y \ z \ \x\y \ \x\z" + using ani_antitone ani_def mult_right_isotone mult_assoc nbox_L_def by auto + +lemma box_L_antitone_isotone: + "y \ w \ x \ z \ \w\x \ \y\z" + using ani_antitone ani_def mult_isotone mult_assoc nbox_L_def by force + +end + +class n_box_omega_algebra = n_box_semiring + an_omega_algebra +begin + +lemma diamond_omega: + "|x\<^sup>\>y = |x\<^sup>\>z" + by (simp add: n_omega_mult ndiamond_def mult_assoc) + +lemma box_omega: + "|x\<^sup>\]y = |x\<^sup>\]z" + by (metis box_diamond diamond_omega) + +lemma an_box_omega_induct: + "|x]an(y) * n(z * L) \ an(y) \ |x\<^sup>\ \ x\<^sup>\]z \ an(y)" + by (smt an_dist_sup an_omega_induct an_omega_mult box_left_dist_sup box_x_an mult_assoc n_an_def nbox_def) + +lemma n_box_omega_induct: + "|x]n(y) * n(z * L) \ n(y) \ |x\<^sup>\ \ x\<^sup>\]z \ n(y)" + by (simp add: an_box_omega_induct n_an_def) + +lemma an_box_omega_induct_an: + "|x]an(y) * an(z) \ an(y) \ |x\<^sup>\ \ x\<^sup>\]an(z) \ an(y)" + using an_box_omega_induct an_n_def by auto + +text \Theorem 23.13\ + +lemma n_box_omega_induct_n: + "|x]n(y) * n(z) \ n(y) \ |x\<^sup>\ \ x\<^sup>\]n(z) \ n(y)" + using an_box_omega_induct_an n_an_def by force + +lemma n_diamond_omega_induct: + "n(y) \ |x>n(y) \ n(z * L) \ n(y) \ |x\<^sup>\ \ x\<^sup>\>z" + using diamond_x_n mult_right_dist_sup n_dist_sup n_omega_induct n_omega_mult ndiamond_def mult_assoc by force + +lemma an_diamond_omega_induct: + "an(y) \ |x>an(y) \ n(z * L) \ an(y) \ |x\<^sup>\ \ x\<^sup>\>z" + by (metis n_diamond_omega_induct an_n_def) + +text \Theorem 23.9\ + +lemma n_diamond_omega_induct_n: + "n(y) \ |x>n(y) \ n(z) \ n(y) \ |x\<^sup>\ \ x\<^sup>\>n(z)" + using box_1_n box_1_y n_diamond_omega_induct by auto + +lemma an_diamond_omega_induct_an: + "an(y) \ |x>an(y) \ an(z) \ an(y) \ |x\<^sup>\ \ x\<^sup>\>an(z)" + using an_diamond_omega_induct 
an_n_def by auto + +lemma box_segerberg_an: + "|x\<^sup>\ \ x\<^sup>\]an(y) = an(y) * |x\<^sup>\ \ x\<^sup>\](n(y) \ |x]an(y))" +proof (rule order.antisym) + have "|x\<^sup>\ \ x\<^sup>\]an(y) \ |x\<^sup>\ \ x\<^sup>\]|x]an(y)" + by (smt box_left_dist_sup box_left_mult box_omega sup_right_isotone box_left_antitone mult_right_dist_sup star.right_plus_below_circ) + hence "|x\<^sup>\ \ x\<^sup>\]an(y) \ |x\<^sup>\ \ x\<^sup>\](n(y) \ |x]an(y))" + using box_right_isotone order_lesseq_imp sup.cobounded2 by blast + thus"|x\<^sup>\ \ x\<^sup>\]an(y) \ an(y) * |x\<^sup>\ \ x\<^sup>\](n(y) \ |x]an(y))" + by (metis le_sup_iff box_1_an box_left_antitone order_refl star_left_unfold_equal an_mult_least_upper_bound nbox_def) +next + have "an(y) * |x](n(y) \ |x\<^sup>\ \ x\<^sup>\]an(y)) * (n(y) \ |x]an(y)) = |x]( |x\<^sup>\ \ x\<^sup>\]an(y) * an(y)) * an(y)" + by (smt sup_bot_left an_export an_mult_commutative box_right_mult_an_an mult_assoc mult_right_dist_sup n_complement_bot nbox_def) + hence 1: "an(y) * |x](n(y) \ |x\<^sup>\ \ x\<^sup>\]an(y)) * (n(y) \ |x]an(y)) \ n(y) \ |x\<^sup>\ \ x\<^sup>\]an(y)" + by (smt sup_assoc sup_commute sup_ge2 box_1_an box_left_dist_sup box_left_mult mult_left_dist_sup omega_unfold star_left_unfold_equal star.circ_plus_one) + have "n(y) * |x](n(y) \ |x\<^sup>\ \ x\<^sup>\]an(y)) * (n(y) \ |x]an(y)) \ n(y) \ |x\<^sup>\ \ x\<^sup>\]an(y)" + by (smt sup_ge1 an_n_def mult_left_isotone n_an_mult_left_lower_bound n_mult_left_absorb_sup nbox_def order_trans) + thus "an(y) * |x\<^sup>\ \ x\<^sup>\](n(y) \ |x]an(y)) \ |x\<^sup>\ \ x\<^sup>\]an(y)" + using 1 by (smt an_case_split_left an_shunting_an mult_assoc n_box_omega_induct_n n_dist_sup nbox_def nbox_from_L) +qed + +text \Theorem 23.16\ + +lemma box_segerberg_n: + "|x\<^sup>\ \ x\<^sup>\]n(y) = n(y) * |x\<^sup>\ \ x\<^sup>\](an(y) \ |x]n(y))" + using box_segerberg_an an_n_def n_an_def by force + +lemma diamond_segerberg_an: + "|x\<^sup>\ \ x\<^sup>\>an(y) = an(y) \ |x\<^sup>\ \ x\<^sup>\>(n(y) * 
|x>an(y))" + by (smt an_export an_n_L box_diamond box_segerberg_an diamond_box mult_assoc n_an_def) + +text \Theorem 23.12\ + +lemma diamond_segerberg_n: + "|x\<^sup>\ \ x\<^sup>\>n(y) = n(y) \ |x\<^sup>\ \ x\<^sup>\>(an(y) * |x>n(y))" + using diamond_segerberg_an an_n_L n_an_def by auto + +text \Theorem 23.11\ + +lemma diamond_star_unfold_n: + "|x\<^sup>\>n(y) = n(y) \ |an(y) * x>|x\<^sup>\>n(y)" +proof - + have "|x\<^sup>\>n(y) = n(y) \ n(y) * |x * x\<^sup>\>n(y) \ |an(y) * x * x\<^sup>\>n(y)" + by (smt sup_assoc sup_commute sup_bot_right an_complement an_complement_bot diamond_an_n diamond_left_dist_sup diamond_n_export diamond_n_n_same mult_assoc mult_left_one mult_right_dist_sup star_left_unfold_equal) + thus ?thesis + by (metis diamond_left_mult diamond_x_n n_sup_left_absorb_mult) +qed + +lemma diamond_star_unfold_an: + "|x\<^sup>\>an(y) = an(y) \ |n(y) * x>|x\<^sup>\>an(y)" + by (metis an_n_def diamond_star_unfold_n n_an_def) + +text \Theorem 23.15\ + +lemma box_star_unfold_n: + "|x\<^sup>\]n(y) = n(y) * |n(y) * x]|x\<^sup>\]n(y)" + by (smt an_export an_n_L box_diamond diamond_box diamond_star_unfold_an n_an_def n_export) + +lemma box_star_unfold_an: + "|x\<^sup>\]an(y) = an(y) * |an(y) * x]|x\<^sup>\]an(y)" + by (metis an_n_def box_star_unfold_n) + +text \Theorem 23.10\ + +lemma diamond_omega_unfold_n: + "|x\<^sup>\ \ x\<^sup>\>n(y) = n(y) \ |an(y) * x>|x\<^sup>\ \ x\<^sup>\>n(y)" + by (smt sup_assoc sup_commute diamond_an_export diamond_left_dist_sup diamond_right_dist_sup diamond_star_unfold_n diamond_x_n n_omega_mult n_plus_complement_intro_n omega_unfold) + +lemma diamond_omega_unfold_an: + "|x\<^sup>\ \ x\<^sup>\>an(y) = an(y) \ |n(y) * x>|x\<^sup>\ \ x\<^sup>\>an(y)" + by (metis an_n_def diamond_omega_unfold_n n_an_def) + +text \Theorem 23.14\ + +lemma box_omega_unfold_n: + "|x\<^sup>\ \ x\<^sup>\]n(y) = n(y) * |n(y) * x]|x\<^sup>\ \ x\<^sup>\]n(y)" + by (smt an_export an_n_L box_diamond diamond_box diamond_omega_unfold_an n_an_def n_export) + +lemma 
box_omega_unfold_an: + "|x\<^sup>\ \ x\<^sup>\]an(y) = an(y) * |an(y) * x]|x\<^sup>\ \ x\<^sup>\]an(y)" + by (metis an_n_def box_omega_unfold_n) + +lemma box_cut_iteration_an: + "|x\<^sup>\ \ x\<^sup>\]an(y) = |(an(y) * x)\<^sup>\ \ (an(y) * x)\<^sup>\]an(y)" + apply (rule order.antisym) + apply (meson semiring.add_mono an_case_split_left box_left_antitone omega_isotone order_refl star.circ_isotone) + by (smt (z3) an_box_omega_induct_an an_mult_commutative box_omega_unfold_an nbox_def order_refl) + +lemma box_cut_iteration_n: + "|x\<^sup>\ \ x\<^sup>\]n(y) = |(n(y) * x)\<^sup>\ \ (n(y) * x)\<^sup>\]n(y)" + using box_cut_iteration_an n_an_def by auto + +lemma diamond_cut_iteration_an: + "|x\<^sup>\ \ x\<^sup>\>an(y) = |(n(y) * x)\<^sup>\ \ (n(y) * x)\<^sup>\>an(y)" + using box_cut_iteration_n diamond_box n_an_def by auto + +lemma diamond_cut_iteration_n: + "|x\<^sup>\ \ x\<^sup>\>n(y) = |(an(y) * x)\<^sup>\ \ (an(y) * x)\<^sup>\>n(y)" + using box_cut_iteration_an an_n_L diamond_box by auto + +lemma ni_diamond_omega_induct: + "ni(y) \ \x\ni(y) \ ni(z) \ ni(y) \ \x\<^sup>\ \ x\<^sup>\\z" + by (metis diamond_L_left_dist_sup diamond_L_x_ni diamond_L_ni ni_dist_sup ni_omega_induct ni_omega_mult) + +lemma ani_diamond_omega_induct: + "ani(y) \ \x\ani(y) \ ni(z) \ ani(y) \ \x\<^sup>\ \ x\<^sup>\\z" + by (metis ni_ani ni_diamond_omega_induct) + +lemma n_diamond_omega_L: + "|n(x\<^sup>\) * L>y = |x\<^sup>\>y" + using L_left_zero mult_1_right n_L n_export n_omega_mult ndiamond_def mult_assoc by auto + +lemma n_diamond_loop: + "|x\<^sup>\>y = |x\<^sup>\ \ x\<^sup>\>y" + by (metis Omega_def diamond_left_dist_sup n_diamond_omega_L) + +text \Theorem 24.1\ + +lemma cut_iteration_loop: + "|x\<^sup>\>n(y) = |(an(y) * x)\<^sup>\>n(y)" + using diamond_cut_iteration_n n_diamond_loop by auto + +lemma cut_iteration_while_loop: + "|x\<^sup>\>n(y) = |(an(y) * x)\<^sup>\ * n(y)>n(y)" + using cut_iteration_loop diamond_left_mult diamond_n_n_same by auto + +text \Theorem 24.1\ + +lemma 
cut_iteration_while_loop_2: + "|x\<^sup>\>n(y) = |an(y) \ x>n(y)" + by (metis cut_iteration_while_loop an_uminus n_an_def while_Omega_def) + +lemma modal_while: + assumes "-q * -p * L \ x * -p * L \ -p \ -q \ -r" + shows "-p \ |n((-q * x)\<^sup>\) * L \ (-q * x)\<^sup>\ * --q>(-r)" +proof - + have 1: "--q * -p \ |-q * x>(-p) \ --q * -r" + using assms mult_right_isotone sup.coboundedI2 tests_dual.sup_complement_intro by auto + have "-q * -p = n(-q * -q * -p * L)" + using an_uminus n_export_an mult_assoc mult_1_right n_L tests_dual.sup_idempotent by auto + also have "... \ n(-q * x * -p * L)" + by (metis assms n_isotone mult_right_isotone mult_assoc) + also have "... \ |-q * x>(-p) \ --q * -r" + by (simp add: ndiamond_def) + finally have "-p \ |-q * x>(-p) \ --q * -r" + using 1 by (smt sup_assoc le_iff_sup tests_dual.inf_cases sub_comm) + thus ?thesis + by (smt L_left_zero an_diamond_omega_induct_an an_uminus diamond_left_dist_sup mult_assoc n_n_L n_omega_mult ndiamond_def sub_mult_closed) +qed + +lemma modal_while_loop: + "-q * -p * L \ x * -p * L \ -p \ -q \ -r \ -p \ |(-q * x)\<^sup>\ * --q>(-r)" + by (metis L_left_zero Omega_def modal_while mult_assoc mult_right_dist_sup) + +text \Theorem 24.2\ + +lemma modal_while_loop_2: + "-q * -p * L \ x * -p * L \ -p \ -q \ -r \ -p \ |-q \ x>(-r)" + by (simp add: while_Omega_def modal_while_loop) + +lemma modal_while_2: + assumes "-p * L \ x * -p * L" + shows "-p \ |n((-q * x)\<^sup>\) * L \ (-q * x)\<^sup>\ * --q>(--q)" +proof - + have "-p \ |-q * x>(-p) \ --q" + by (smt (verit, del_insts) assms an_uminus tests_dual.double_negation n_an_def n_isotone ndiamond_def diamond_an_export sup_assoc sup_commute le_iff_sup tests_dual.inf_complement_intro) + thus ?thesis + by (smt L_left_zero an_diamond_omega_induct_an an_uminus diamond_left_dist_sup mult_assoc tests_dual.sup_idempotent n_n_L n_omega_mult ndiamond_def) +qed + +end + +class n_modal_omega_algebra = n_box_omega_algebra + + assumes n_star_induct: "n(x * y) \ n(y) \ 
n(x\<^sup>\ * y) \ n(y)" +begin + +lemma n_star_induct_sup: + "n(z \ x * y) \ n(y) \ n(x\<^sup>\ * z) \ n(y)" + by (metis an_dist_sup an_mult_least_upper_bound an_n_order n_mult_right_upper_bound n_star_induct star_L_split) + +lemma n_star_induct_star: + "n(x * y) \ n(y) \ n(x\<^sup>\) \ n(y)" + using n_star_induct n_star_mult by auto + +lemma n_star_induct_iff: + "n(x * y) \ n(y) \ n(x\<^sup>\ * y) \ n(y)" + by (metis mult_left_isotone n_isotone n_star_induct order_trans star.circ_increasing) + +lemma n_star_bot: + "n(x) = bot \ n(x\<^sup>\) = bot" + by (metis sup_bot_right le_iff_sup mult_1_right n_one n_star_induct_iff) + +lemma n_diamond_star_induct: + "|x>n(y) \ n(y) \ |x\<^sup>\>n(y) \ n(y)" + by (simp add: diamond_x_n n_star_induct) + +lemma n_diamond_star_induct_sup: + "|x>n(y) \ n(z) \ n(y) \ |x\<^sup>\>n(z) \ n(y)" + by (simp add: diamond_x_n n_dist_sup n_star_induct_sup) + +lemma n_diamond_star_induct_iff: + "|x>n(y) \ n(y) \ |x\<^sup>\>n(y) \ n(y)" + using diamond_x_n n_star_induct_iff by auto + +lemma an_star_induct: + "an(y) \ an(x * y) \ an(y) \ an(x\<^sup>\ * y)" + using an_n_order n_star_induct by auto + +lemma an_star_induct_sup: + "an(y) \ an(z \ x * y) \ an(y) \ an(x\<^sup>\ * z)" + using an_n_order n_star_induct_sup by auto + +lemma an_star_induct_star: + "an(y) \ an(x * y) \ an(y) \ an(x\<^sup>\)" + by (simp add: an_n_order n_star_induct_star) + +lemma an_star_induct_iff: + "an(y) \ an(x * y) \ an(y) \ an(x\<^sup>\ * y)" + using an_n_order n_star_induct_iff by auto + +lemma an_star_one: + "an(x) = 1 \ an(x\<^sup>\) = 1" + by (metis an_n_equal an_bot n_star_bot n_bot) + +lemma an_box_star_induct: + "an(y) \ |x]an(y) \ an(y) \ |x\<^sup>\]an(y)" + by (simp add: an_star_induct box_x_an) + +lemma an_box_star_induct_sup: + "an(y) \ |x]an(y) * an(z) \ an(y) \ |x\<^sup>\]an(z)" + by (simp add: an_star_induct_sup an_dist_sup an_mult_commutative box_x_an) + +lemma an_box_star_induct_iff: + "an(y) \ |x]an(y) \ an(y) \ |x\<^sup>\]an(y)" + using 
an_star_induct_iff box_x_an by auto + +lemma box_star_segerberg_an: + "|x\<^sup>\]an(y) = an(y) * |x\<^sup>\](n(y) \ |x]an(y))" +proof (rule order.antisym) + show "|x\<^sup>\]an(y) \ an(y) * |x\<^sup>\](n(y) \ |x]an(y))" + by (smt (verit) sup_ge2 box_1_an box_left_dist_sup box_left_mult box_right_isotone mult_right_isotone star.circ_right_unfold) +next + have "an(y) * |x\<^sup>\](n(y) \ |x]an(y)) \ an(y) * |x]an(y)" + by (metis sup_bot_left an_complement_bot box_an_an box_left_antitone box_x_an mult_left_dist_sup mult_left_one mult_right_isotone star.circ_reflexive) + thus "an(y) * |x\<^sup>\](n(y) \ |x]an(y)) \ |x\<^sup>\]an(y)" + by (smt an_box_star_induct_sup an_case_split_left an_dist_sup an_mult_least_upper_bound box_left_antitone box_left_mult box_right_mult_an_an star.left_plus_below_circ nbox_def) +qed + +lemma box_star_segerberg_n: + "|x\<^sup>\]n(y) = n(y) * |x\<^sup>\](an(y) \ |x]n(y))" + using box_star_segerberg_an an_n_def n_an_def by auto + +lemma diamond_segerberg_an: + "|x\<^sup>\>an(y) = an(y) \ |x\<^sup>\>(n(y) * |x>an(y))" + by (smt an_export an_n_L box_diamond box_star_segerberg_an diamond_box mult_assoc n_an_def) + +lemma diamond_star_segerberg_n: + "|x\<^sup>\>n(y) = n(y) \ |x\<^sup>\>(an(y) * |x>n(y))" + using an_n_def diamond_segerberg_an n_an_def by auto + +lemma box_cut_star_iteration_an: + "|x\<^sup>\]an(y) = |(an(y) * x)\<^sup>\]an(y)" + by (smt an_box_star_induct_sup an_mult_commutative an_mult_complement_intro_an order.antisym box_an_export box_star_unfold_an nbox_def order_refl) + +lemma box_cut_star_iteration_n: + "|x\<^sup>\]n(y) = |(n(y) * x)\<^sup>\]n(y)" + using box_cut_star_iteration_an n_an_def by auto + +lemma diamond_cut_star_iteration_an: + "|x\<^sup>\>an(y) = |(n(y) * x)\<^sup>\>an(y)" + using box_cut_star_iteration_an diamond_box n_an_def by auto + +lemma diamond_cut_star_iteration_n: + "|x\<^sup>\>n(y) = |(an(y) * x)\<^sup>\>n(y)" + using box_cut_star_iteration_an an_n_L diamond_box by auto + +lemma ni_star_induct: + 
"ni(x * y) \ ni(y) \ ni(x\<^sup>\ * y) \ ni(y)" + using n_star_induct ni_n_order by auto + +lemma ni_star_induct_sup: + "ni(z \ x * y) \ ni(y) \ ni(x\<^sup>\ * z) \ ni(y)" + by (simp add: ni_n_order n_star_induct_sup) + +lemma ni_star_induct_star: + "ni(x * y) \ ni(y) \ ni(x\<^sup>\) \ ni(y)" + using ni_n_order n_star_induct_star by auto + +lemma ni_star_induct_iff: + "ni(x * y) \ ni(y) \ ni(x\<^sup>\ * y) \ ni(y)" + using ni_n_order n_star_induct_iff by auto + +lemma ni_star_bot: + "ni(x) = bot \ ni(x\<^sup>\) = bot" + using ni_n_bot n_star_bot by auto + +lemma ni_diamond_star_induct: + "\x\ni(y) \ ni(y) \ \x\<^sup>\\ni(y) \ ni(y)" + by (simp add: diamond_L_x_ni ni_star_induct) + +lemma ni_diamond_star_induct_sup: + "\x\ni(y) \ ni(z) \ ni(y) \ \x\<^sup>\\ni(z) \ ni(y)" + by (simp add: diamond_L_x_ni ni_dist_sup ni_star_induct_sup) + +lemma ni_diamond_star_induct_iff: + "\x\ni(y) \ ni(y) \ \x\<^sup>\\ni(y) \ ni(y)" + using diamond_L_x_ni ni_star_induct_iff by auto + +lemma ani_star_induct: + "ani(y) \ ani(x * y) \ ani(y) \ ani(x\<^sup>\ * y)" + using an_star_induct ani_an_order by blast + +lemma ani_star_induct_sup: + "ani(y) \ ani(z \ x * y) \ ani(y) \ ani(x\<^sup>\ * z)" + by (simp add: an_star_induct_sup ani_an_order) + +lemma ani_star_induct_star: + "ani(y) \ ani(x * y) \ ani(y) \ ani(x\<^sup>\)" + using an_star_induct_star ani_an_order by auto + +lemma ani_star_induct_iff: + "ani(y) \ ani(x * y) \ ani(y) \ ani(x\<^sup>\ * y)" + using an_star_induct_iff ani_an_order by auto + +lemma ani_star_L: + "ani(x) = L \ ani(x\<^sup>\) = L" + using an_star_one ani_an_L by auto + +lemma ani_box_star_induct: + "ani(y) \ \x\ani(y) \ ani(y) \ \x\<^sup>\\ani(y)" + by (metis an_ani ani_def ani_star_induct_iff n_ani box_L_ani) + +lemma ani_box_star_induct_iff: + "ani(y) \ \x\ani(y) \ ani(y) \ \x\<^sup>\\ani(y)" + using ani_box_star_induct box_L_left_antitone order_lesseq_imp star.circ_increasing by blast + +lemma ani_box_star_induct_sup: + "ani(y) \ \x\ani(y) \ ani(y) \ ani(z) \ 
ani(y) \ \x\<^sup>\\ani(z)" + by (meson ani_box_star_induct_iff box_L_right_isotone order_trans) + +end + +end + diff --git a/thys/Correctness_Algebras/Omega_Algebras.thy b/thys/Correctness_Algebras/Omega_Algebras.thy new file mode 100644 --- /dev/null +++ b/thys/Correctness_Algebras/Omega_Algebras.thy @@ -0,0 +1,499 @@ +(* Title: Omega Algebras + Author: Walter Guttmann + Maintainer: Walter Guttmann +*) + +section \Omega Algebras\ + +theory Omega_Algebras + +imports Stone_Kleene_Relation_Algebras.Kleene_Algebras + +begin + +class omega = + fixes omega :: "'a \ 'a" ("_\<^sup>\" [100] 100) + +class left_omega_algebra = left_kleene_algebra + omega + + assumes omega_unfold: "y\<^sup>\ = y * y\<^sup>\" + assumes omega_induct: "x \ z \ y * x \ x \ y\<^sup>\ \ y\<^sup>\ * z" +begin + +text \Many lemmas in this class are taken from Georg Struth's Isabelle theories.\ + +lemma star_bot_below_omega: + "x\<^sup>\ * bot \ x\<^sup>\" + using omega_unfold star_left_induct_equal by auto + +lemma star_bot_below_omega_bot: + "x\<^sup>\ * bot \ x\<^sup>\ * bot" + by (metis omega_unfold star_left_induct_equal sup_monoid.add_0_left mult_assoc) + +lemma omega_induct_mult: + "y \ x * y \ y \ x\<^sup>\" + by (metis bot_least omega_induct sup.absorb1 sup.absorb2 star_bot_below_omega) + +lemma omega_sub_dist: + "x\<^sup>\ \ (x \ y)\<^sup>\" + by (metis eq_refl mult_isotone omega_unfold sup.cobounded1 omega_induct_mult) + +lemma omega_isotone: + "x \ y \ x\<^sup>\ \ y\<^sup>\" + using sup_left_divisibility omega_sub_dist by fastforce + +lemma omega_induct_equal: + "y = z \ x * y \ y \ x\<^sup>\ \ x\<^sup>\ * z" + by (simp add: omega_induct) + +lemma omega_bot: + "bot\<^sup>\ = bot" + by (metis mult_left_zero omega_unfold) + +lemma omega_one_greatest: + "x \ 1\<^sup>\" + by (simp add: omega_induct_mult) + +lemma star_mult_omega: + "x\<^sup>\ * x\<^sup>\ = x\<^sup>\" + by (metis order.antisym omega_unfold star.circ_loop_fixpoint star_left_induct_mult_equal sup.cobounded2) + +lemma 
omega_sub_vector: + "x\<^sup>\ * y \ x\<^sup>\" + by (metis mult_semi_associative omega_unfold omega_induct_mult) + +lemma omega_simulation: + "z * x \ y * z \ z * x\<^sup>\ \ y\<^sup>\" + by (smt (verit, ccfv_threshold) mult_isotone omega_unfold order_lesseq_imp mult_assoc omega_induct_mult) + +lemma omega_omega: + "x\<^sup>\\<^sup>\ \ x\<^sup>\" + by (metis omega_unfold omega_sub_vector) + +lemma left_plus_omega: + "(x * x\<^sup>\)\<^sup>\ = x\<^sup>\" + by (metis order.antisym mult_assoc omega_induct_mult omega_unfold order_refl star.left_plus_circ star_mult_omega) + +lemma omega_slide: + "x * (y * x)\<^sup>\ = (x * y)\<^sup>\" + by (metis order.antisym mult_assoc mult_right_isotone omega_simulation omega_unfold order_refl) + +lemma omega_simulation_2: + "y * x \ x * y \ (x * y)\<^sup>\ \ x\<^sup>\" + by (metis mult_right_isotone sup.absorb2 omega_induct_mult omega_slide omega_sub_dist) + +lemma wagner: + "(x \ y)\<^sup>\ = x * (x \ y)\<^sup>\ \ z \ (x \ y)\<^sup>\ = x\<^sup>\ \ x\<^sup>\ * z" + by (smt (verit, ccfv_SIG) order.refl star_left_induct sup.absorb2 sup_assoc sup_commute omega_induct_equal omega_sub_dist) + +lemma right_plus_omega: + "(x\<^sup>\ * x)\<^sup>\ = x\<^sup>\" + by (metis left_plus_omega omega_slide star_mult_omega) + +lemma omega_sub_dist_1: + "(x * y\<^sup>\)\<^sup>\ \ (x \ y)\<^sup>\" + by (metis left_plus_omega mult_isotone star.circ_sub_dist sup.cobounded1 sup_monoid.add_commute omega_isotone) + +lemma omega_sub_dist_2: + "(x\<^sup>\ * y)\<^sup>\ \ (x \ y)\<^sup>\" + by (metis mult_isotone star.circ_sub_dist sup.cobounded2 omega_isotone right_plus_omega) + +lemma omega_star: + "(x\<^sup>\)\<^sup>\ = 1 \ x\<^sup>\" + by (metis antisym_conv star.circ_mult_increasing star_left_unfold_equal omega_sub_vector) + +lemma omega_mult_omega_star: + "x\<^sup>\ * x\<^sup>\\<^sup>\ = x\<^sup>\" + by (simp add: order.antisym star.circ_mult_increasing omega_sub_vector) + +lemma omega_sum_unfold_1: + "(x \ y)\<^sup>\ = x\<^sup>\ \ x\<^sup>\ * y * (x \ 
y)\<^sup>\" + by (metis mult_right_dist_sup omega_unfold mult_assoc wagner) + +lemma omega_sum_unfold_2: + "(x \ y)\<^sup>\ \ (x\<^sup>\ * y)\<^sup>\ \ (x\<^sup>\ * y)\<^sup>\ * x\<^sup>\" + using omega_induct_equal omega_sum_unfold_1 by auto + +lemma omega_sum_unfold_3: + "(x\<^sup>\ * y)\<^sup>\ * x\<^sup>\ \ (x \ y)\<^sup>\" + using star_left_induct_equal omega_sum_unfold_1 by auto + +lemma omega_decompose: + "(x \ y)\<^sup>\ = (x\<^sup>\ * y)\<^sup>\ \ (x\<^sup>\ * y)\<^sup>\ * x\<^sup>\" + by (metis sup.absorb1 sup_same_context omega_sub_dist_2 omega_sum_unfold_2 omega_sum_unfold_3) + +lemma omega_loop_fixpoint: + "y * (y\<^sup>\ \ y\<^sup>\ * z) \ z = y\<^sup>\ \ y\<^sup>\ * z" + apply (rule order.antisym) + apply (smt (verit, ccfv_threshold) eq_refl mult_isotone mult_left_sub_dist_sup omega_induct omega_unfold star.circ_loop_fixpoint sup_assoc sup_commute sup_right_isotone) + by (smt (z3) mult_left_sub_dist_sup omega_unfold star.circ_loop_fixpoint sup.left_commute sup_commute sup_right_isotone) + +lemma omega_loop_greatest_fixpoint: + "y * x \ z = x \ x \ y\<^sup>\ \ y\<^sup>\ * z" + by (simp add: sup_commute omega_induct_equal) + +lemma omega_square: + "x\<^sup>\ = (x * x)\<^sup>\" + using order.antisym omega_unfold order_refl mult_assoc omega_induct_mult omega_simulation_2 by auto + +lemma mult_bot_omega: + "(x * bot)\<^sup>\ = x * bot" + by (metis mult_left_zero omega_slide) + +lemma mult_bot_add_omega: + "(x \ y * bot)\<^sup>\ = x\<^sup>\ \ x\<^sup>\ * y * bot" + by (metis mult_left_zero sup_commute mult_assoc mult_bot_omega omega_decompose omega_loop_fixpoint) + +lemma omega_mult_star: + "x\<^sup>\ * x\<^sup>\ = x\<^sup>\" + by (meson antisym_conv star.circ_back_loop_prefixpoint sup.boundedE omega_sub_vector) + +lemma omega_loop_is_greatest_fixpoint: + "is_greatest_fixpoint (\x . y * x \ z) (y\<^sup>\ \ y\<^sup>\ * z)" + by (simp add: is_greatest_fixpoint_def omega_loop_fixpoint omega_loop_greatest_fixpoint) + +lemma omega_loop_nu: + "\ (\x . 
y * x \ z) = y\<^sup>\ \ y\<^sup>\ * z" + by (metis greatest_fixpoint_same omega_loop_is_greatest_fixpoint) + +lemma omega_loop_bot_is_greatest_fixpoint: + "is_greatest_fixpoint (\x . y * x) (y\<^sup>\)" + using is_greatest_fixpoint_def omega_unfold omega_induct_mult by auto + +lemma omega_loop_bot_nu: + "\ (\x . y * x) = y\<^sup>\" + by (metis greatest_fixpoint_same omega_loop_bot_is_greatest_fixpoint) + +lemma affine_has_greatest_fixpoint: + "has_greatest_fixpoint (\x . y * x \ z)" + using has_greatest_fixpoint_def omega_loop_is_greatest_fixpoint by blast + +lemma omega_separate_unfold: + "(x\<^sup>\ * y)\<^sup>\ = y\<^sup>\ \ y\<^sup>\ * x * (x\<^sup>\ * y)\<^sup>\" + by (metis star.circ_loop_fixpoint sup_commute mult_assoc omega_slide omega_sum_unfold_1) + +lemma omega_bot_left_slide: + "(x * y)\<^sup>\ * ((x * y)\<^sup>\ * bot \ 1) * x \ x * (y * x)\<^sup>\ * ((y * x)\<^sup>\ * bot \ 1)" +proof - + have "x \ x * (y * x) * (y * x)\<^sup>\ * ((y * x)\<^sup>\ * bot \ 1) \ x * (y * x)\<^sup>\ * ((y * x)\<^sup>\ * bot \ 1)" + by (metis sup_commute mult_assoc mult_right_isotone star.circ_back_loop_prefixpoint star.mult_zero_sup_circ star.mult_zero_circ le_supE le_supI order.refl star.circ_increasing star.circ_mult_upper_bound) + hence "((x * y)\<^sup>\ * bot \ 1) * x \ x * y * (x * (y * x)\<^sup>\ * ((y * x)\<^sup>\ * bot \ 1)) \ x * (y * x)\<^sup>\ * ((y * x)\<^sup>\ * bot \ 1)" + by (smt (z3) sup.absorb_iff2 sup_assoc mult_assoc mult_left_one mult_left_sub_dist_sup_left mult_left_zero mult_right_dist_sup omega_slide star_mult_omega) + thus ?thesis + by (simp add: star_left_induct mult_assoc) +qed + +lemma omega_bot_add_1: + "(x \ y)\<^sup>\ * ((x \ y)\<^sup>\ * bot \ 1) = x\<^sup>\ * (x\<^sup>\ * bot \ 1) * (y * x\<^sup>\ * (x\<^sup>\ * bot \ 1))\<^sup>\ * ((y * x\<^sup>\ * (x\<^sup>\ * bot \ 1))\<^sup>\ * bot \ 1)" +proof (rule order.antisym) + have 1: "(x \ y) * x\<^sup>\ * (x\<^sup>\ * bot \ 1) * (y * x\<^sup>\ * (x\<^sup>\ * bot \ 1))\<^sup>\ * ((y * x\<^sup>\ 
* (x\<^sup>\ * bot \ 1))\<^sup>\ * bot \ 1) \ x\<^sup>\ * (x\<^sup>\ * bot \ 1) * (y * x\<^sup>\ * (x\<^sup>\ * bot \ 1))\<^sup>\ * ((y * x\<^sup>\ * (x\<^sup>\ * bot \ 1))\<^sup>\ * bot \ 1)" + by (smt (z3) eq_refl star.circ_mult_upper_bound star.circ_sub_dist_1 star.mult_zero_circ star.mult_zero_sup_circ star_sup_1 sup_assoc sup_commute mult_assoc) + have 2: "1 \ x\<^sup>\ * (x\<^sup>\ * bot \ 1) * (y * x\<^sup>\ * (x\<^sup>\ * bot \ 1))\<^sup>\ * ((y * x\<^sup>\ * (x\<^sup>\ * bot \ 1))\<^sup>\ * bot \ 1)" + using reflexive_mult_closed star.circ_reflexive sup_ge2 by auto + have "(y * x\<^sup>\)\<^sup>\ * bot \ (y * x\<^sup>\ * (x\<^sup>\ * bot \ 1))\<^sup>\ * bot" + by (metis mult_1_right mult_left_isotone mult_left_sub_dist_sup_right omega_isotone) + also have 3: "... \ (x\<^sup>\ * bot \ 1) * (y * x\<^sup>\ * (x\<^sup>\ * bot \ 1))\<^sup>\ * ((y * x\<^sup>\ * (x\<^sup>\ * bot \ 1))\<^sup>\ * bot \ 1)" + by (metis mult_isotone mult_left_one star.circ_reflexive sup_commute sup_ge2) + finally have 4: "(x\<^sup>\ * y)\<^sup>\ * bot \ x\<^sup>\ * (x\<^sup>\ * bot \ 1) * (y * x\<^sup>\ * (x\<^sup>\ * bot \ 1))\<^sup>\ * ((y * x\<^sup>\ * (x\<^sup>\ * bot \ 1))\<^sup>\ * bot \ 1)" + by (smt mult_assoc mult_right_isotone omega_slide) + have "y * (x\<^sup>\ * y)\<^sup>\ * x\<^sup>\ * bot \ y * (x\<^sup>\ * (x\<^sup>\ * bot \ y))\<^sup>\ * x\<^sup>\ * x\<^sup>\ * bot * (y * x\<^sup>\ * (x\<^sup>\ * bot \ 1))\<^sup>\ * bot" + using mult_isotone mult_left_sub_dist_sup_left mult_left_zero order.refl star_isotone sup_commute mult_assoc star_mult_omega by auto + also have "... \ y * (x\<^sup>\ * (x\<^sup>\ * bot \ y))\<^sup>\ * (x\<^sup>\ * (x\<^sup>\ * bot \ 1) * y)\<^sup>\ * bot" + by (smt mult_assoc mult_left_isotone mult_left_sub_dist_sup_left omega_slide) + also have "... 
= y * (x\<^sup>\ * (x\<^sup>\ * bot \ 1) * y)\<^sup>\ * bot" + using mult_left_one mult_left_zero mult_right_dist_sup mult_assoc star_mult_omega by auto + finally have "x\<^sup>\ * y * (x\<^sup>\ * y)\<^sup>\ * x\<^sup>\ * bot \ x\<^sup>\ * (x\<^sup>\ * bot \ 1) * (y * x\<^sup>\ * (x\<^sup>\ * bot \ 1))\<^sup>\ * ((y * x\<^sup>\ * (x\<^sup>\ * bot \ 1))\<^sup>\ * bot \ 1)" + using 3 by (smt mult_assoc mult_right_isotone omega_slide order_trans) + hence "(x\<^sup>\ * y)\<^sup>\ * x\<^sup>\ * bot \ x\<^sup>\ * (x\<^sup>\ * bot \ 1) * (y * x\<^sup>\ * (x\<^sup>\ * bot \ 1))\<^sup>\ * ((y * x\<^sup>\ * (x\<^sup>\ * bot \ 1))\<^sup>\ * bot \ 1)" + by (smt (verit, ccfv_threshold) sup_assoc sup_commute le_iff_sup mult_assoc mult_isotone mult_left_one mult_1_right mult_right_sub_dist_sup_left order_trans star.circ_loop_fixpoint star.circ_reflexive star.mult_zero_circ) + hence "(x \ y)\<^sup>\ * bot \ x\<^sup>\ * (x\<^sup>\ * bot \ 1) * (y * x\<^sup>\ * (x\<^sup>\ * bot \ 1))\<^sup>\ * ((y * x\<^sup>\ * (x\<^sup>\ * bot \ 1))\<^sup>\ * bot \ 1)" + using 4 by (smt (z3) mult_right_dist_sup sup.orderE sup_assoc sup_right_divisibility omega_decompose) + thus "(x \ y)\<^sup>\ * ((x \ y)\<^sup>\ * bot \ 1) \ x\<^sup>\ * (x\<^sup>\ * bot \ 1) * (y * x\<^sup>\ * (x\<^sup>\ * bot \ 1))\<^sup>\ * ((y * x\<^sup>\ * (x\<^sup>\ * bot \ 1))\<^sup>\ * bot \ 1)" + using 1 2 star_left_induct mult_assoc by force +next + have 5: "x\<^sup>\ * bot \ (x \ y)\<^sup>\ * ((x \ y)\<^sup>\ * bot \ 1)" + by (metis bot_least le_supI1 le_supI2 mult_isotone star.circ_loop_fixpoint sup.cobounded1 omega_isotone) + have 6: "(y * x\<^sup>\)\<^sup>\ * bot \ (x \ y)\<^sup>\ * ((x \ y)\<^sup>\ * bot \ 1)" + by (metis sup_commute mult_left_isotone omega_sub_dist_1 mult_assoc mult_left_sub_dist_sup_left order_trans star_mult_omega) + have 7: "(y * x\<^sup>\)\<^sup>\ \ (x \ y)\<^sup>\" + by (metis mult_left_one mult_right_sub_dist_sup_left star.circ_sup_1 star.circ_plus_one) + hence "(y * x\<^sup>\)\<^sup>\ * 
x\<^sup>\ * bot \ (x \ y)\<^sup>\ * ((x \ y)\<^sup>\ * bot \ 1)" + by (smt sup_assoc le_iff_sup mult_assoc mult_isotone mult_right_dist_sup omega_sub_dist) + hence "(x\<^sup>\ * bot \ y * x\<^sup>\)\<^sup>\ * bot \ (x \ y)\<^sup>\ * ((x \ y)\<^sup>\ * bot \ 1)" + using 6 by (smt sup_commute sup.bounded_iff mult_assoc mult_right_dist_sup mult_bot_add_omega omega_unfold omega_bot) + hence "(y * x\<^sup>\ * (x\<^sup>\ * bot \ 1))\<^sup>\ * bot \ y * x\<^sup>\ * (x \ y)\<^sup>\ * ((x \ y)\<^sup>\ * bot \ 1)" + by (smt mult_assoc mult_left_one mult_left_zero mult_right_dist_sup mult_right_isotone omega_slide) + also have "... \ (x \ y)\<^sup>\ * ((x \ y)\<^sup>\ * bot \ 1)" + using 7 by (metis mult_left_isotone order_refl star.circ_mult_upper_bound star_left_induct_mult_iff) + finally have "(y * x\<^sup>\ * (x\<^sup>\ * bot \ 1))\<^sup>\ * ((y * x\<^sup>\ * (x\<^sup>\ * bot \ 1))\<^sup>\ * bot \ 1) \ (x \ y)\<^sup>\ * ((x \ y)\<^sup>\ * bot \ 1)" + using 5 by (smt (z3) le_supE star.circ_mult_upper_bound star.circ_sub_dist_1 star.mult_zero_circ star.mult_zero_sup_circ star_involutive star_isotone sup_commute) + hence "(x\<^sup>\ * bot \ 1) * (y * x\<^sup>\ * (x\<^sup>\ * bot \ 1))\<^sup>\ * ((y * x\<^sup>\ * (x\<^sup>\ * bot \ 1))\<^sup>\ * bot \ 1) \ (x \ y)\<^sup>\ * ((x \ y)\<^sup>\ * bot \ 1)" + using 5 by (metis sup_commute mult_assoc star.circ_isotone star.circ_mult_upper_bound star.mult_zero_sup_circ star.mult_zero_circ star_involutive) + thus "x\<^sup>\ * (x\<^sup>\ * bot \ 1) * (y * x\<^sup>\ * (x\<^sup>\ * bot \ 1))\<^sup>\ * ((y * x\<^sup>\ * (x\<^sup>\ * bot \ 1))\<^sup>\ * bot \ 1) \ (x \ y)\<^sup>\ * ((x \ y)\<^sup>\ * bot \ 1)" + by (smt sup_assoc sup_commute mult_assoc star.circ_mult_upper_bound star.circ_sub_dist star.mult_zero_sup_circ star.mult_zero_circ) +qed + +lemma star_omega_greatest: + "x\<^sup>\\<^sup>\ = 1\<^sup>\" + by (metis sup_commute le_iff_sup omega_one_greatest omega_sub_dist star.circ_plus_one) + +lemma omega_vector_greatest: + 
"x\<^sup>\ * 1\<^sup>\ = x\<^sup>\" + by (metis order.antisym mult_isotone omega_mult_omega_star omega_one_greatest omega_sub_vector) + +lemma mult_greatest_omega: + "(x * 1\<^sup>\)\<^sup>\ \ x * 1\<^sup>\" + by (metis mult_right_isotone omega_slide omega_sub_vector) + +lemma omega_mult_star_2: + "x\<^sup>\ * y\<^sup>\ = x\<^sup>\" + by (meson order.antisym le_supE star.circ_back_loop_prefixpoint omega_sub_vector) + +lemma omega_import: + assumes "p \ p * p" + and "p * x \ x * p" + shows "p * x\<^sup>\ = p * (p * x)\<^sup>\" +proof - + have "p * x\<^sup>\ \ p * (p * x) * x\<^sup>\" + by (metis assms(1) mult_assoc mult_left_isotone omega_unfold) + also have "... \ p * x * p * x\<^sup>\" + by (metis assms(2) mult_assoc mult_left_isotone mult_right_isotone) + finally have "p * x\<^sup>\ \ (p * x)\<^sup>\" + by (simp add: mult_assoc omega_induct_mult) + hence "p * x\<^sup>\ \ p * (p * x)\<^sup>\" + by (metis assms(1) mult_assoc mult_left_isotone mult_right_isotone order_trans) + thus "p * x\<^sup>\ = p * (p * x)\<^sup>\" + by (metis assms(2) sup_left_divisibility order.antisym mult_right_isotone omega_induct_mult omega_slide omega_sub_dist) +qed + +(* +lemma omega_circ_simulate_right_plus: "z * x \ y * (y\<^sup>\ * bot \ y\<^sup>\) * z \ w \ z * (x\<^sup>\ * bot \ x\<^sup>\) \ (y\<^sup>\ * bot \ y\<^sup>\) * (z \ w * (x\<^sup>\ * bot \ x\<^sup>\))" nitpick [expect=genuine,card=4] oops +lemma omega_circ_simulate_left_plus: "x * z \ z * (y\<^sup>\ * bot \ y\<^sup>\) \ w \ (x\<^sup>\ * bot \ x\<^sup>\) * z \ (z \ (x\<^sup>\ * bot \ x\<^sup>\) * w) * (y\<^sup>\ * bot \ y\<^sup>\)" nitpick [expect=genuine,card=5] oops +*) + +end + +text \Theorem 50.2\ + +sublocale left_omega_algebra < comb0: left_conway_semiring where circ = "(\x . 
x\<^sup>\ * (x\<^sup>\ * bot \ 1))" + apply unfold_locales + apply (smt sup_assoc sup_commute le_iff_sup mult_assoc mult_left_sub_dist_sup_left omega_unfold star.circ_loop_fixpoint star_mult_omega) + using omega_bot_left_slide mult_assoc apply fastforce + using omega_bot_add_1 mult_assoc by simp + +class left_zero_omega_algebra = left_zero_kleene_algebra + left_omega_algebra +begin + +lemma star_omega_absorb: + "y\<^sup>\ * (y\<^sup>\ * x)\<^sup>\ * y\<^sup>\ = (y\<^sup>\ * x)\<^sup>\ * y\<^sup>\" +proof - + have "y\<^sup>\ * (y\<^sup>\ * x)\<^sup>\ * y\<^sup>\ = y\<^sup>\ * y\<^sup>\ * x * (y\<^sup>\ * x)\<^sup>\ * y\<^sup>\ \ y\<^sup>\ * y\<^sup>\" + by (metis sup_commute mult_assoc mult_right_dist_sup star.circ_back_loop_fixpoint star.circ_plus_same) + thus ?thesis + by (metis mult_assoc star.circ_loop_fixpoint star.circ_transitive_equal star_mult_omega) +qed + +lemma omega_circ_simulate_right_plus: + assumes "z * x \ y * (y\<^sup>\ * bot \ y\<^sup>\) * z \ w" + shows "z * (x\<^sup>\ * bot \ x\<^sup>\) \ (y\<^sup>\ * bot \ y\<^sup>\) * (z \ w * (x\<^sup>\ * bot \ x\<^sup>\))" +proof - + have 1: "z * x \ y\<^sup>\ * bot \ y * y\<^sup>\ * z \ w" + by (metis assms mult_assoc mult_left_dist_sup mult_left_zero mult_right_dist_sup omega_unfold) + hence "(y\<^sup>\ * bot \ y\<^sup>\ * z \ y\<^sup>\ * w * x\<^sup>\ * bot \ y\<^sup>\ * w * x\<^sup>\) * x \ y\<^sup>\ * bot \ y\<^sup>\ * (y\<^sup>\ * bot \ y * y\<^sup>\ * z \ w) \ y\<^sup>\ * w * x\<^sup>\ * bot \ y\<^sup>\ * w * x\<^sup>\" + by (smt sup_assoc sup_ge1 sup_ge2 le_iff_sup mult_assoc mult_left_dist_sup mult_left_zero mult_right_dist_sup star.circ_back_loop_fixpoint) + also have "... = y\<^sup>\ * bot \ y\<^sup>\ * y * y\<^sup>\ * z \ y\<^sup>\ * w * x\<^sup>\ * bot \ y\<^sup>\ * w * x\<^sup>\" + by (smt sup_assoc sup_ge2 le_iff_sup mult_assoc mult_left_dist_sup star.circ_back_loop_fixpoint star_mult_omega) + also have "... 
\ y\<^sup>\ * bot \ y\<^sup>\ * z \ y\<^sup>\ * w * x\<^sup>\ * bot \ y\<^sup>\ * w * x\<^sup>\" + by (smt sup_commute sup_left_isotone mult_left_isotone star.circ_increasing star.circ_plus_same star.circ_transitive_equal) + finally have "z \ (y\<^sup>\ * bot \ y\<^sup>\ * z \ y\<^sup>\ * w * x\<^sup>\ * bot \ y\<^sup>\ * w * x\<^sup>\) * x \ y\<^sup>\ * bot \ y\<^sup>\ * z \ y\<^sup>\ * w * x\<^sup>\ * bot \ y\<^sup>\ * w * x\<^sup>\" + by (metis (no_types, lifting) le_supE le_supI star.circ_loop_fixpoint sup.cobounded1) + hence 2: "z * x\<^sup>\ \ y\<^sup>\ * bot \ y\<^sup>\ * z \ y\<^sup>\ * w * x\<^sup>\ * bot \ y\<^sup>\ * w * x\<^sup>\" + by (simp add: star_right_induct) + have "z * x\<^sup>\ * bot \ (y\<^sup>\ * bot \ y * y\<^sup>\ * z \ w) * x\<^sup>\ * bot" + using 1 by (smt sup_left_divisibility mult_assoc mult_right_sub_dist_sup_left omega_unfold) + hence "z * x\<^sup>\ * bot \ y\<^sup>\ \ y\<^sup>\ * (y\<^sup>\ * bot \ w * x\<^sup>\ * bot)" + by (smt sup_assoc sup_commute left_plus_omega mult_assoc mult_left_zero mult_right_dist_sup omega_induct star.left_plus_circ) + thus "z * (x\<^sup>\ * bot \ x\<^sup>\) \ (y\<^sup>\ * bot \ y\<^sup>\) * (z \ w * (x\<^sup>\ * bot \ x\<^sup>\))" + using 2 by (smt sup_assoc sup_commute le_iff_sup mult_assoc mult_left_dist_sup mult_left_zero mult_right_dist_sup omega_unfold omega_bot star_mult_omega zero_right_mult_decreasing) +qed + +lemma omega_circ_simulate_left_plus: + assumes "x * z \ z * (y\<^sup>\ * bot \ y\<^sup>\) \ w" + shows "(x\<^sup>\ * bot \ x\<^sup>\) * z \ (z \ (x\<^sup>\ * bot \ x\<^sup>\) * w) * (y\<^sup>\ * bot \ y\<^sup>\)" +proof - + have "x * (z * y\<^sup>\ * bot \ z * y\<^sup>\ \ x\<^sup>\ * bot \ x\<^sup>\ * w * y\<^sup>\ * bot \ x\<^sup>\ * w * y\<^sup>\) = x * z * y\<^sup>\ * bot \ x * z * y\<^sup>\ \ x\<^sup>\ * bot \ x * x\<^sup>\ * w * y\<^sup>\ * bot \ x * x\<^sup>\ * w * y\<^sup>\" + by (smt mult_assoc mult_left_dist_sup omega_unfold) + also have "... 
\ x * z * y\<^sup>\ * bot \ x * z * y\<^sup>\ \ x\<^sup>\ * bot \ x\<^sup>\ * w * y\<^sup>\ * bot \ x\<^sup>\ * w * y\<^sup>\" + by (metis sup_mono sup_right_isotone mult_left_isotone star.left_plus_below_circ) + also have "... \ (z * y\<^sup>\ * bot \ z * y\<^sup>\ \ w) * y\<^sup>\ * bot \ (z * y\<^sup>\ * bot \ z * y\<^sup>\ \ w) * y\<^sup>\ \ x\<^sup>\ * bot \ x\<^sup>\ * w * y\<^sup>\ * bot \ x\<^sup>\ * w * y\<^sup>\" + by (metis assms sup_left_isotone mult_assoc mult_left_dist_sup mult_left_isotone) + also have "... = z * y\<^sup>\ * bot \ z * y\<^sup>\ * y\<^sup>\ * bot \ w * y\<^sup>\ * bot \ z * y\<^sup>\ * bot \ z * y\<^sup>\ * y\<^sup>\ \ w * y\<^sup>\ \ x\<^sup>\ * bot \ x\<^sup>\ * w * y\<^sup>\ * bot \ x\<^sup>\ * w * y\<^sup>\" + by (smt sup_assoc mult_assoc mult_left_zero mult_right_dist_sup) + also have "... = z * y\<^sup>\ * bot \ z * y\<^sup>\ \ x\<^sup>\ * bot \ x\<^sup>\ * w * y\<^sup>\ * bot \ x\<^sup>\ * w * y\<^sup>\" + by (smt (verit, ccfv_threshold) sup_assoc sup_commute sup_idem mult_assoc mult_right_dist_sup star.circ_loop_fixpoint star.circ_transitive_equal star_mult_omega) + finally have "x\<^sup>\ * z \ z * y\<^sup>\ * bot \ z * y\<^sup>\ \ x\<^sup>\ * bot \ x\<^sup>\ * w * y\<^sup>\ * bot \ x\<^sup>\ * w * y\<^sup>\" + by (smt (z3) le_supE sup_least sup_ge1 star.circ_back_loop_fixpoint star_left_induct) + hence "(x\<^sup>\ * bot \ x\<^sup>\) * z \ z * y\<^sup>\ * bot \ z * y\<^sup>\ \ x\<^sup>\ * bot \ x\<^sup>\ * w * y\<^sup>\ * bot \ x\<^sup>\ * w * y\<^sup>\" + by (smt (z3) sup.left_commute sup_commute sup_least sup_ge1 mult_assoc mult_left_zero mult_right_dist_sup) + thus "(x\<^sup>\ * bot \ x\<^sup>\) * z \ (z \ (x\<^sup>\ * bot \ x\<^sup>\) * w) * (y\<^sup>\ * bot \ y\<^sup>\)" + by (smt sup_assoc mult_assoc mult_left_dist_sup mult_left_zero mult_right_dist_sup) +qed + +lemma omega_translate: + "x\<^sup>\ * (x\<^sup>\ * bot \ 1) = x\<^sup>\ * bot \ x\<^sup>\" + by (metis mult_assoc mult_left_dist_sup mult_1_right 
star_mult_omega) + +lemma omega_circ_simulate_right: + assumes "z * x \ y * z \ w" + shows "z * (x\<^sup>\ * bot \ x\<^sup>\) \ (y\<^sup>\ * bot \ y\<^sup>\) * (z \ w * (x\<^sup>\ * bot \ x\<^sup>\))" +proof - + have "... \ y * (y\<^sup>\ * bot \ y\<^sup>\) * z \ w" + using comb0.circ_mult_increasing mult_isotone sup_left_isotone omega_translate by auto + thus "z * (x\<^sup>\ * bot \ x\<^sup>\) \ (y\<^sup>\ * bot \ y\<^sup>\) * (z \ w * (x\<^sup>\ * bot \ x\<^sup>\))" + using assms order_trans omega_circ_simulate_right_plus by blast +qed + +end + +sublocale left_zero_omega_algebra < comb1: left_conway_semiring_1 where circ = "(\x . x\<^sup>\ * (x\<^sup>\ * bot \ 1))" + apply unfold_locales + by (smt order.eq_iff mult_assoc mult_left_dist_sup mult_left_zero mult_right_dist_sup mult_1_right omega_slide star_slide) + +sublocale left_zero_omega_algebra < comb0: itering where circ = "(\x . x\<^sup>\ * (x\<^sup>\ * bot \ 1))" + apply unfold_locales + using comb1.circ_sup_9 apply blast + using comb1.circ_mult_1 apply blast + apply (metis omega_circ_simulate_right_plus omega_translate) + using omega_circ_simulate_left_plus omega_translate by auto + +text \Theorem 2.2\ + +sublocale left_zero_omega_algebra < comb2: itering where circ = "(\x . x\<^sup>\ * bot \ x\<^sup>\)" + apply unfold_locales + using comb1.circ_sup_9 omega_translate apply force + apply (metis comb1.circ_mult_1 omega_translate) + using omega_circ_simulate_right_plus apply blast + by (simp add: omega_circ_simulate_left_plus) + +class omega_algebra = kleene_algebra + left_zero_omega_algebra + +class left_omega_conway_semiring = left_omega_algebra + left_conway_semiring +begin + +subclass left_kleene_conway_semiring .. 
+ +lemma circ_below_omega_star: + "x\<^sup>\ \ x\<^sup>\ \ x\<^sup>\" + by (metis circ_left_unfold mult_1_right omega_induct order_refl) + +lemma omega_mult_circ: + "x\<^sup>\ * x\<^sup>\ = x\<^sup>\" + by (metis circ_star omega_mult_star_2) + +lemma circ_mult_omega: + "x\<^sup>\ * x\<^sup>\ = x\<^sup>\" + by (metis order.antisym sup_right_divisibility circ_loop_fixpoint circ_plus_sub omega_simulation) + +lemma circ_omega_greatest: + "x\<^sup>\\<^sup>\ = 1\<^sup>\" + by (metis circ_star star_omega_greatest) + +lemma omega_circ: + "x\<^sup>\\<^sup>\ = 1 \ x\<^sup>\" + by (metis order.antisym circ_left_unfold mult_left_sub_dist_sup_left mult_1_right omega_sub_vector) + +end + +class bounded_left_omega_algebra = bounded_left_kleene_algebra + left_omega_algebra +begin + +lemma omega_one: + "1\<^sup>\ = top" + by (simp add: order.antisym omega_one_greatest) + +lemma star_omega_top: + "x\<^sup>\\<^sup>\ = top" + by (simp add: star_omega_greatest omega_one) + +lemma omega_vector: + "x\<^sup>\ * top = x\<^sup>\" + by (simp add: order.antisym omega_sub_vector top_right_mult_increasing) + +lemma mult_top_omega: + "(x * top)\<^sup>\ \ x * top" + using mult_greatest_omega omega_one by auto + +end + +sublocale bounded_left_omega_algebra < comb0: bounded_left_conway_semiring where circ = "(\x . x\<^sup>\ * (x\<^sup>\ * bot \ 1))" .. + +class bounded_left_zero_omega_algebra = bounded_left_zero_kleene_algebra + left_zero_omega_algebra +begin + +subclass bounded_left_omega_algebra .. + +end + +sublocale bounded_left_zero_omega_algebra < comb0: bounded_itering where circ = "(\x . x\<^sup>\ * (x\<^sup>\ * bot \ 1))" .. + +class bounded_omega_algebra = bounded_kleene_algebra + omega_algebra +begin + +subclass bounded_left_zero_omega_algebra .. + +end + +class bounded_left_omega_conway_semiring = bounded_left_omega_algebra + left_omega_conway_semiring +begin + +subclass left_kleene_conway_semiring .. + +subclass bounded_left_conway_semiring .. 
+ +lemma circ_omega: + "x\<^sup>\\<^sup>\ = top" + by (simp add: circ_omega_greatest omega_one) + +end + +class top_left_omega_algebra = bounded_left_omega_algebra + + assumes top_left_bot: "top * x = top" +begin + +lemma omega_translate_3: + "x\<^sup>\ * (x\<^sup>\ * bot \ 1) = x\<^sup>\ * (x\<^sup>\ \ 1)" + by (metis omega_one omega_vector_greatest top_left_bot mult_assoc) + +end + +text \Theorem 50.2\ + +sublocale top_left_omega_algebra < comb4: left_conway_semiring where circ = "(\x . x\<^sup>\ * (x\<^sup>\ \ 1))" + apply unfold_locales + using comb0.circ_left_unfold omega_translate_3 apply force + using omega_bot_left_slide omega_translate_3 mult_assoc apply force + using comb0.circ_sup_1 omega_translate_3 by auto + +class top_left_bot_omega_algebra = bounded_left_zero_omega_algebra + + assumes top_left_bot: "top * x = top" +begin + +lemma omega_translate_2: + "x\<^sup>\ * bot \ x\<^sup>\ = x\<^sup>\ \ x\<^sup>\" + by (metis mult_assoc omega_mult_star_2 star.circ_top top_left_bot) + +end + +text \Theorem 2.3\ + +sublocale top_left_bot_omega_algebra < comb3: itering where circ = "(\x . 
x\<^sup>\ \ x\<^sup>\)" + apply unfold_locales + using comb2.circ_slide_1 comb2.circ_sup_1 omega_translate_2 apply force + apply (metis comb2.circ_mult_1 omega_translate_2) + using omega_circ_simulate_right_plus omega_translate_2 apply force + using omega_circ_simulate_left_plus omega_translate_2 by auto + +class Omega = + fixes Omega :: "'a \ 'a" ("_\<^sup>\" [100] 100) + +end + diff --git a/thys/Correctness_Algebras/Pre_Post.thy b/thys/Correctness_Algebras/Pre_Post.thy new file mode 100644 --- /dev/null +++ b/thys/Correctness_Algebras/Pre_Post.thy @@ -0,0 +1,568 @@ +(* Title: Pre-Post Specifications + Author: Walter Guttmann + Maintainer: Walter Guttmann +*) + +section \Pre-Post Specifications\ + +theory Pre_Post + +imports Preconditions + +begin + +class pre_post = + fixes pre_post :: "'a \ 'a \ 'a" (infix "\" 55) + +class pre_post_spec_greatest = bounded_idempotent_left_semiring + precondition + pre_post + + assumes pre_post_galois: "-p \ x\-q \ x \ -p\-q" +begin + +text \Theorem 42.1\ + +lemma post_pre_left_antitone: + "x \ y \ y\-q \ x\-q" + by (smt order_refl order_trans pre_closed pre_post_galois) + +lemma pre_left_sub_dist: + "x\y\-q \ x\-q" + by (simp add: post_pre_left_antitone) + +text \Theorem 42.2\ + +lemma pre_post_left_antitone: + "-p \ -q \ -q\-r \ -p\-r" + using order_lesseq_imp pre_post_galois by blast + +lemma pre_post_left_sub_dist: + "-p\-q\-r \ -p\-r" + by (metis sup.cobounded1 tests_dual.sba_dual.sub_sup_closed pre_post_left_antitone) + +lemma pre_post_left_sup_dist: + "-p\-r \ -p*-q\-r" + by (metis tests_dual.sba_dual.sub_inf_def pre_post_left_sub_dist tests_dual.inf_absorb) + +text \Theorem 42.5\ + +lemma pre_pre_post: + "x \ (x\-p)\-p" + by (metis order_refl pre_closed pre_post_galois) + +text \Theorem 42.6\ + +lemma pre_post_pre: + "-p \ (-p\-q)\-q" + by (simp add: pre_post_galois) + +text \Theorem 42.8\ + +lemma pre_post_zero_top: + "bot\-q = top" + by (metis order.eq_iff pre_post_galois sup.cobounded2 sup_monoid.add_0_right 
top_greatest tests_dual.top_double_complement) + +text \Theorem 42.7\ + +lemma pre_post_pre_one: + "(1\-q)\-q = 1" + by (metis order.eq_iff pre_below_one tests_dual.sba_dual.top_double_complement pre_post_pre) + +text \Theorem 42.3\ + +lemma pre_post_right_isotone: + "-p \ -q \ -r\-p \ -r\-q" + using order_lesseq_imp pre_iso pre_post_galois by blast + +lemma pre_post_right_sub_dist: + "-r\-p \ -r\-p\-q" + by (metis sup.cobounded1 tests_dual.sba_dual.sub_sup_closed pre_post_right_isotone) + +lemma pre_post_right_sup_dist: + "-r\-p*-q \ -r\-p" + by (metis tests_dual.sub_sup_closed pre_post_right_isotone tests_dual.upper_bound_left) + +text \Theorem 42.7\ + +lemma pre_post_reflexive: + "1 \ -p\-p" + using pre_one_increasing pre_post_galois by auto + +text \Theorem 42.9\ + +lemma pre_post_compose: + "-q \ -r \ (-p\-q)*(-r\-s) \ -p\-s" + using order_lesseq_imp pre_compose pre_post_galois by blast + +text \Theorem 42.10\ + +lemma pre_post_compose_1: + "(-p\-q)*(-q\-r) \ -p\-r" + by (simp add: pre_post_compose) + +text \Theorem 42.11\ + +lemma pre_post_compose_2: + "(-p\-p)*(-p\-q) = -p\-q" + by (meson case_split_left order.eq_iff le_supI1 pre_post_compose_1 pre_post_reflexive) + +text \Theorem 42.12\ + +lemma pre_post_compose_3: + "(-p\-q)*(-q\-q) = -p\-q" + by (meson order.eq_iff order.trans mult_right_isotone mult_sub_right_one pre_post_compose_1 pre_post_reflexive) + +text \Theorem 42.13\ + +lemma pre_post_compose_4: + "(-p\-p)*(-p\-p) = -p\-p" + by (simp add: pre_post_compose_3) + +text \Theorem 42.14\ + +lemma pre_post_one_one: + "x\1 = 1 \ x \ 1\1" + by (metis order.eq_iff one_def pre_below_one pre_post_galois) + +text \Theorem 42.4\ + +lemma post_pre_left_dist_sup: + "x\y\-q = (x\-q)*(y\-q)" + apply (rule order.antisym) + apply (metis mult_isotone pre_closed sup_commute tests_dual.sup_idempotent pre_left_sub_dist) + by (smt (z3) order.refl pre_closed pre_post_galois sup.boundedI tests_dual.sba_dual.greatest_lower_bound tests_dual.sub_sup_closed) + +(* +lemma 
pre_post_right_dist_sup: "-p\-q\-r = (-p\-q) \ (-p\-r)" nitpick [expect=genuine,card=4] oops +*) + +end + +class pre_post_spec_greatest_2 = pre_post_spec_greatest + precondition_test_test +begin + +subclass precondition_test_box + apply unfold_locales + by (smt (verit) sup_commute mult_1_right tests_dual.double_negation order.eq_iff mult_left_one mult_right_dist_sup one_def tests_dual.inf_complement tests_dual.inf_complement_intro pre_below_one pre_import pre_post_galois pre_test_test tests_dual.top_def bot_least) + +lemma pre_post_seq_sub_associative: + "(-p\-q)*-r \ -p\-q*-r" + by (smt (z3) pre_compose pre_post_galois pre_post_pre sub_comm test_below_pre_test_mult tests_dual.sub_sup_closed) + +lemma pre_post_right_import_mult: + "(-p\-q)*-r = (-p\-q*-r)*-r" + by (metis order.antisym mult_assoc tests_dual.sup_idempotent mult_left_isotone pre_post_right_sup_dist pre_post_seq_sub_associative) + +lemma seq_pre_post_sub_associative: + "-r*(-p\-q) \ --r\-p\-q" + by (smt (z3) pre_compose pre_post_galois pre_post_pre pre_test tests_dual.sba_dual.reflexive tests_dual.sba_dual.sub_sup_closed) + +lemma pre_post_left_import_sup: + "-r*(-p\-q) = -r*(--r\-p\-q)" + by (metis sup_commute order.antisym mult_assoc tests_dual.sup_idempotent mult_right_isotone pre_post_left_sub_dist seq_pre_post_sub_associative) + +lemma pre_post_import_same: + "-p*(-p\-q) = -p*(1\-q)" + using pre_test pre_test_test_same pre_post_left_import_sup by auto + +lemma pre_post_import_complement: + "--p*(-p\-q) = --p*top" + by (metis tests_dual.sup_idempotent tests_dual.inf_cases tests_dual.inf_closed pre_post_left_import_sup pre_post_zero_top tests_dual.top_def tests_dual.top_double_complement) + +lemma pre_post_export: + "-p\-q = (1\-q) \ --p*top" +proof (rule order.antisym) + have 1: "-p*(-p\-q) \ (1\-q) \ --p*top" + by (metis le_supI1 pre_test pre_test_test_same seq_pre_post_sub_associative) + have "--p*(-p\-q) \ (1\-q) \ --p*top" + by (simp add: pre_post_import_complement) + thus "-p\-q \ (1\-q) \ 
--p*top" + using 1 by (smt case_split_left eq_refl tests_dual.inf_complement) +next + show "(1\-q) \ --p*top \ -p\-q" + by (metis le_sup_iff tests_dual.double_negation tests_dual.sub_bot_least pre_neg_mult pre_post_galois pre_post_pre_one) +qed + +lemma pre_post_left_dist_mult: + "-p*-q\-r = (-p\-r) \ (-q\-r)" +proof - + have "\p q . -p*(-p*-q\-r) = -p*(-q\-r)" + using sup_monoid.add_commute tests_dual.sba_dual.sub_inf_def pre_post_left_import_sup tests_dual.inf_complement_intro by auto + hence 1: "(-p\-q)*(-p*-q\-r) \ (-p\-r) \ (-q\-r)" + by (metis sup_commute le_sup_iff sup_ge2 mult_left_one mult_right_dist_sup tests_dual.inf_left_unit sub_comm) + have "-(-p\-q)*(-p*-q\-r) = -(-p\-q)*top" + by (smt (z3) sup.left_commute sup_commute tests_dual.sba_dual.sub_sup_closed tests_dual.sub_sup_closed pre_post_import_complement pre_post_left_import_sup tests_dual.inf_absorb) + hence "-(-p\-q)*(-p*-q\-r) \ (-p\-r) \ (-q\-r)" + by (smt (z3) order.trans le_supI1 pre_post_left_sub_dist tests_dual.sba_dual.sub_sup_closed tests_dual.sub_sup_closed seq_pre_post_sub_associative) + thus ?thesis + using 1 by (smt (z3) le_sup_iff order.antisym case_split_left order_refl tests_dual.inf_closed tests_dual.inf_complement pre_post_left_sup_dist sub_comm) +qed + +lemma pre_post_left_import_mult: + "-r*(-p\-q) = -r*(-r*-p\-q)" + by (metis sup_commute tests_dual.inf_complement_intro pre_post_left_import_sup sub_mult_closed) + +lemma pre_post_right_import_sup: + "(-p\-q)*-r = (-p\-q\--r)*-r" + by (smt (z3) sup_monoid.add_commute tests_dual.sba_dual.inf_cases_2 tests_dual.sba_dual.inf_complement_intro tests_dual.sub_complement tests_dual.sub_inf_def pre_post_right_import_mult) + +lemma pre_post_shunting: + "x \ -p*-q\-r \ -p*x \ -q\-r" +proof - + have "--p*x \ -p*-q\-r" + by (metis tests_dual.double_negation order_trans pre_neg_mult pre_post_galois pre_post_left_sup_dist) + hence 1: "-p*x \ -q\-r \ x \ -p*-q\-r" + by (smt case_split_left eq_refl order_trans tests_dual.inf_complement 
pre_post_left_sup_dist sub_comm) + have "-p*(-p*-q\-r) \ -q\-r" + by (metis mult_left_isotone mult_left_one tests_dual.sub_bot_least pre_post_left_import_mult) + thus ?thesis + using 1 mult_right_isotone order_lesseq_imp by blast +qed + +(* +lemma pre_post_right_dist_sup: "-p\-q\-r = (-p\-q) \ (-p\-r)" oops +*) + +end + +class left_zero_pre_post_spec_greatest_2 = pre_post_spec_greatest_2 + bounded_idempotent_left_zero_semiring +begin + +lemma pre_post_right_dist_sup: + "-p\-q\-r = (-p\-q) \ (-p\-r)" +proof - + have 1: "(-p\-q\-r)*-q \ (-p\-q) \ (-p\-r)" + by (metis le_supI1 pre_post_seq_sub_associative tests_dual.sba_dual.inf_absorb tests_dual.sba_dual.sub_sup_closed) + have "(-p\-q\-r)*--q = (-p\-r)*--q" + by (simp add: pre_post_right_import_sup sup_commute) + hence "(-p\-q\-r)*--q \ (-p\-q) \ (-p\-r)" + by (metis sup_ge2 mult_left_sub_dist_sup_right mult_1_right order_trans tests_dual.inf_left_unit) + thus ?thesis + using 1 by (metis le_sup_iff order.antisym case_split_right tests_dual.sub_bot_least tests_dual.inf_commutative tests_dual.inf_complement pre_post_right_sub_dist) +qed + +end + +class havoc = + fixes H :: "'a" + +class idempotent_left_semiring_H = bounded_idempotent_left_semiring + havoc + + assumes H_zero : "H * bot = bot" + assumes H_split: "x \ x * bot \ H" +begin + +lemma H_galois: + "x * bot \ y \ x \ y \ H" + apply (rule iffI) + using H_split order_lesseq_imp sup_mono apply blast + by (smt (verit, ccfv_threshold) H_zero mult_right_dist_sup sup.cobounded2 sup.orderE sup_assoc sup_bot_left sup_commute zero_right_mult_decreasing) + +lemma H_greatest_finite: + "x * bot = bot \ x \ H" + by (metis H_galois le_iff_sup sup_bot_left sup_monoid.add_0_right) + +lemma H_reflexive: + "1 \ H" + using H_greatest_finite mult_left_one by blast + +lemma H_transitive: + "H = H * H" + by (metis H_greatest_finite H_reflexive H_zero preorder_idempotent mult_assoc) + +lemma T_split_H: + "top * bot \ H = top" + by (simp add: H_split order.antisym) + +(* +lemma "H * (x 
\ y) = H * x \ H * y" nitpick [expect=genuine,card=6] oops +*) + +end + +class pre_post_spec_least = bounded_idempotent_left_semiring + precondition_test_test + precondition_promote + pre_post + + assumes test_mult_right_distr_sup: "-p * (x \ y) = -p * x \ -p * y" + assumes pre_post_galois: "-p \ x\-q \ -p\-q \ x" +begin + +lemma shunting_top: + "-p * x \ y \ x \ y \ --p * top" +proof + assume "-p * x \ y" + thus "x \ y \ --p * top" + by (smt (verit, ccfv_SIG) case_split_left eq_refl le_supI1 le_supI2 mult_right_isotone tests_dual.sba_dual.top_def top_greatest) +next + assume "x \ y \ --p * top" + hence "-p * x \ -p * y" + by (metis sup_bot_right mult_assoc tests_dual.sup_complement mult_left_zero mult_right_isotone test_mult_right_distr_sup) + thus "-p * x \ y" + by (metis mult_left_isotone mult_left_one tests_dual.sub_bot_least order_trans) +qed + +lemma post_pre_left_isotone: + "x \ y \ x\-q \ y\-q" + by (smt order_refl order_trans pre_closed pre_post_galois) + +lemma pre_left_sub_dist: + "x\-q \ x\y\-q" + by (simp add: post_pre_left_isotone) + +lemma pre_post_left_isotone: + "-p \ -q \ -p\-r \ -q\-r" + using order_lesseq_imp pre_post_galois by blast + +lemma pre_post_left_sub_dist: + "-p\-r \ -p\-q\-r" + by (metis sup_ge1 tests_dual.inf_closed pre_post_left_isotone) + +lemma pre_post_left_sup_dist: + "-p*-q\-r \ -p\-r" + by (metis tests_dual.upper_bound_left pre_post_left_isotone sub_mult_closed) + +lemma pre_pre_post: + "(x\-p)\-p \ x" + by (metis order_refl pre_closed pre_post_galois) + +lemma pre_post_pre: + "-p \ (-p\-q)\-q" + by (simp add: pre_post_galois) + +lemma pre_post_zero_top: + "bot\-q = bot" + using bot_least order.eq_iff pre_post_galois tests_dual.sba_dual.sub_bot_def by blast + +lemma pre_post_pre_one: + "(1\-q)\-q = 1" + by (metis order.eq_iff pre_below_one pre_post_pre tests_dual.sba_dual.top_double_complement) + +lemma pre_post_right_antitone: + "-p \ -q \ -r\-q \ -r\-p" + using order_lesseq_imp pre_iso pre_post_galois by blast + +lemma 
pre_post_right_sub_dist: + "-r\-p\-q \ -r\-p" + by (metis sup_ge1 tests_dual.inf_closed pre_post_right_antitone) + +lemma pre_post_right_sup_dist: + "-r\-p \ -r\-p*-q" + by (metis tests_dual.upper_bound_left pre_post_right_antitone sub_mult_closed) + +lemma pre_top: + "top\-q = 1" + using order.eq_iff pre_below_one pre_post_galois tests_dual.sba_dual.one_def top_greatest by blast + +lemma pre_mult_top_increasing: + "-p \ -p*top\-q" + using pre_import_equiv pre_top tests_dual.sub_bot_least by auto + +lemma pre_post_below_mult_top: + "-p\-q \ -p*top" + using pre_import_equiv pre_post_galois by auto + +lemma pre_post_import_complement: + "--p*(-p\-q) = bot" +proof - + have "--p*(-p\-q) \ --p*(-p*top)" + by (simp add: mult_right_isotone pre_post_below_mult_top) + thus ?thesis + by (metis mult_assoc mult_left_zero sub_comm tests_dual.top_def order.antisym bot_least) +qed + +lemma pre_post_import_same: + "-p*(-p\-q) = -p\-q" +proof - + have "-p\-q = -p*(-p\-q) \ --p*(-p\-q)" + by (metis mult_left_one mult_right_dist_sup tests_dual.inf_complement) + thus ?thesis + using pre_post_import_complement by auto +qed + +lemma pre_post_export: + "-p\-q = -p*(1\-q)" +proof (rule order.antisym) + show "-p\-q \ -p*(1\-q)" + by (metis tests_dual.sub_bot_least pre_import_equiv pre_post_galois pre_post_pre_one) +next + have 1: "-p \ ((-p\-q) \ --p*top)\-q" + by (simp add: pre_post_galois) + have "--p \ ((-p\-q) \ --p*top)\-q" + by (simp add: le_supI2 pre_post_galois pre_post_below_mult_top) + hence "-p \ --p \ ((-p\-q) \ --p*top)\-q" + using 1 le_supI by blast + hence "1 \ ((-p\-q) \ --p*top)\-q" + by simp + hence "1\-q \ (-p\-q) \ --p*top" + using pre_post_galois tests_dual.sba_dual.one_def by blast + thus "-p*(1\-q) \ -p\-q" + by (simp add: shunting_top) +qed + +lemma pre_post_seq_associative: + "-r*(-p\-q) = -r*-p\-q" + by (metis pre_post_export tests_dual.sub_sup_closed mult_assoc) + +lemma pre_post_left_import_mult: + "-r*(-p\-q) = -r*(-r*-p\-q)" + by (metis mult_assoc 
tests_dual.sup_idempotent pre_post_seq_associative) + +lemma seq_pre_post_sub_associative: + "-r*(-p\-q) \ --r\-p\-q" + by (metis le_supI1 pre_post_left_sub_dist sup_commute shunting_top) + +lemma pre_post_left_import_sup: + "-r*(-p\-q) = -r*(--r\-p\-q)" + by (metis tests_dual.sba_dual.sub_sup_closed pre_post_seq_associative tests_dual.sup_complement_intro) + +lemma pre_post_left_dist_sup: + "-p\-q\-r = (-p\-r) \ (-q\-r)" + by (metis mult_right_dist_sup tests_dual.inf_closed pre_post_export) + +lemma pre_post_reflexive: + "-p\-p \ 1" + using pre_one_increasing pre_post_galois by auto + +lemma pre_post_compose: + "-q \ -r \ -p\-s \ (-p\-q)*(-r\-s)" + by (meson pre_compose pre_post_galois pre_post_pre pre_post_right_antitone) + +lemma pre_post_compose_1: + "-p\-r \ (-p\-q)*(-q\-r)" + by (simp add: pre_post_compose) + +lemma pre_post_compose_2: + "(-p\-p)*(-p\-q) = -p\-q" + using order.eq_iff mult_left_isotone pre_post_compose_1 pre_post_reflexive by fastforce + +lemma pre_post_compose_3: + "(-p\-q)*(-q\-q) = -p\-q" + by (metis order.antisym mult_right_isotone mult_1_right pre_post_compose_1 pre_post_reflexive) + +lemma pre_post_compose_4: + "(-p\-p)*(-p\-p) = -p\-p" + by (simp add: pre_post_compose_3) + +lemma pre_post_one_one: + "x\1 = 1 \ 1\1 \ x" + using order.eq_iff pre_below_one pre_post_galois tests_dual.sub_bot_def by force + +lemma pre_one_right: + "-p\1 = -p" + by (metis order.antisym mult_1_right one_def tests_dual.inf_complement pre_left_sub_dist pre_mult_top_increasing pre_one pre_seq pre_test_promote pre_top) + +lemma pre_pre_one: + "x\-q = x*-q\1" + by (metis one_def pre_one_right pre_seq) + +subclass precondition_test_diamond + apply unfold_locales + using tests_dual.sba_dual.sub_inf_def pre_one_right pre_pre_one by auto + +(* +lemma pre_post_shunting: "x \ -p*-q\-r \ -p*x \ -q\-r" nitpick [expect=genuine,card=3] oops +lemma "(-p\-q)*-r = (-p\-q\-r)*-r" nitpick [expect=genuine,card=3] oops +lemma "(-p\-q)*-r = (-p\-q\--r)*-r" nitpick 
[expect=genuine,card=3] oops +lemma "(-p\-q)*-r = (-p\-q*-r)*-r" nitpick [expect=genuine,card=3] oops +lemma "(-p\-q)*-r = (-p\-q*--r)*-r" nitpick [expect=genuine,card=3] oops +lemma "-p\-q\-r = (-p\-q) \ (-p\-r)" nitpick [expect=genuine,card=3] oops +lemma "-p\-q\-r = (-p\-q) * (-p\-r)" nitpick [expect=genuine,card=3] oops +lemma pre_post_right_dist_mult: "-p\-q*-r = (-p\-q) * (-p\-r)" oops +lemma pre_post_right_dist_mult: "-p\-q*-r = (-p\-q) \ (-p\-r)" oops +lemma post_pre_left_dist_sup: "x\y\-q = (x\-q) \ (y\-q)" oops +*) + +end + +class havoc_dual = + fixes Hd :: "'a" + +class idempotent_left_semiring_Hd = bounded_idempotent_left_semiring + havoc_dual + + assumes Hd_total: "Hd * top = top" + assumes Hd_least: "x * top = top \ Hd \ x" +begin + +lemma Hd_least_total: + "x * top = top \ Hd \ x" + by (metis Hd_least Hd_total order.antisym mult_left_isotone top_greatest) + +lemma Hd_reflexive: + "Hd \ 1" + by (simp add: Hd_least) + +lemma Hd_transitive: + "Hd = Hd * Hd" + by (simp add: Hd_least Hd_total order.antisym coreflexive_transitive total_mult_closed) + +end + +class pre_post_spec_least_Hd = idempotent_left_semiring_Hd + pre_post_spec_least + + assumes pre_one_mult_top: "(x\1)*top = x*top" +begin + +lemma Hd_pre_one: + "Hd\1 = 1" + by (metis Hd_total pre_seq pre_top) + +lemma pre_post_below_Hd: + "1\1 \ Hd" + using Hd_pre_one pre_post_one_one by auto + +lemma Hd_pre_post: + "Hd = 1\1" + by (metis Hd_least Hd_pre_one Hd_total order.eq_iff pre_one_mult_top pre_post_one_one) + +lemma top_left_zero: + "top*x = top" + by (metis mult_assoc mult_left_one mult_left_zero pre_closed pre_one_mult_top pre_seq pre_top) + +lemma test_dual_test: + "(-p\--p*top)*-p = -p\--p*top" + by (simp add: top_left_zero mult_right_dist_sup mult_assoc) + +lemma pre_zero_mult_top: + "(x\bot)*top = x*bot" + by (metis mult_assoc mult_left_zero one_def pre_one_mult_top pre_seq pre_bot) + +lemma pre_one_mult_Hd: + "(x\1)*Hd \ x" + by (metis Hd_pre_post one_def pre_closed pre_post_export 
pre_pre_post) + +lemma Hd_mult_pre_one: + "Hd*(x\1) \ x" +proof - + have 1: "-(x\1)*Hd*(x\1) \ x" + by (metis Hd_pre_post le_iff_sup mult_left_isotone pre_closed pre_one_right pre_post_export pre_pre_post sup_commute sup_monoid.add_0_right tests_dual.sba_dual.one_def tests_dual.top_def) + have "(x\1)*Hd*(x\1) \ x" + by (metis mult_isotone mult_1_right one_def pre_below_one pre_one_mult_Hd) + thus ?thesis + using 1 by (metis case_split_left pre_closed reflexive_one_closed tests_dual.sba_dual.one_def tests_dual.sba_dual.top_def mult_assoc) +qed + +lemma pre_post_one_def_1: + assumes "1 \ x\-q" + shows "Hd*(-q\--q*top) \ x" +proof - + have "Hd*(-q\--q*top) \ x*-q*(-q\--q*top)" + by (metis assms Hd_pre_post order.antisym pre_below_one pre_post_one_one pre_pre_one mult_left_isotone) + thus ?thesis + by (metis mult_assoc tests_dual.sup_complement mult_left_sub_dist_sup_left mult_left_zero mult_1_right tests_dual.inf_complement test_mult_right_distr_sup order_trans) +qed + +lemma pre_post_one_def: + "1\-q = Hd*(-q\--q*top)" +proof (rule order.antisym) + have "1 \ (1\1)*(-q\--q)\1" + by (metis pre_post_pre one_def mult_1_right tests_dual.inf_complement) + also have "... 
\ (1\1)*(-q\--q*top)\-q" + by (metis sup_right_isotone mult_right_isotone mult_1_right one_def post_pre_left_isotone pre_seq pre_test_promote test_dual_test top_right_mult_increasing) + finally show "1\-q \ Hd*(-q\--q*top)" + using Hd_pre_post pre_post_galois tests_dual.sub_bot_def by blast +next + show "Hd*(-q\--q*top) \ 1\-q" + by (simp add: pre_post_pre_one pre_post_one_def_1) +qed + +lemma pre_post_def: + "-p\-q = -p*Hd*(-q\--q*top)" + by (simp add: pre_post_export mult_assoc pre_post_one_def) + +end + +end + diff --git a/thys/Correctness_Algebras/Pre_Post_Modal.thy b/thys/Correctness_Algebras/Pre_Post_Modal.thy new file mode 100644 --- /dev/null +++ b/thys/Correctness_Algebras/Pre_Post_Modal.thy @@ -0,0 +1,137 @@ +(* Title: Pre-Post Specifications and Modal Operators + Author: Walter Guttmann + Maintainer: Walter Guttmann +*) + +section \Pre-Post Specifications and Modal Operators\ + +theory Pre_Post_Modal + +imports Pre_Post Hoare_Modal + +begin + +class pre_post_spec_whiledo = pre_post_spec_greatest + whiledo +begin + +lemma nat_test_pre_post: + "nat_test t s \ -q \ s \ (\n . x \ t n*-p*-q\(pSum t n*-q)) \ -p\x \ -q\--p*-q" + by (smt (verit, ccfv_threshold) nat_test_def nat_test_pre pSum_test_nat pre_post_galois tests_dual.sub_sup_closed) + +lemma nat_test_pre_post_2: + "nat_test t s \ -r \ s \ (\n . x \ t n*-p\(pSum t n)) \ -p\x \ -r\1" + by (smt (verit, ccfv_threshold) nat_test_def nat_test_pre_2 one_def pSum_test_nat pre_post_galois tests_dual.sub_sup_closed) + +end + +class pre_post_spec_hoare = pre_post_spec_whiledo + hoare_calculus_sound +begin + +lemma pre_post_while: + "x \ -p*-q\-q \ -p\x \ aL*-q\-q" + by (smt aL_test pre_post_galois sub_mult_closed while_soundness) + +text \Theorem 43.1\ + +lemma while_soundness_3: + "test_seq t \ -q \ Sum t \ x \ t 0*-p*-q\aL*-q \ (\n>0 . 
x \ t n*-p*-q\pSum t n*-q) \ -p\x \ -q\--p*-q" + by (smt (verit, del_insts) aL_test pSum_test tests_dual.inf_closed pre_post_galois sub_mult_closed test_seq_def while_soundness_1) + +text \Theorem 43.2\ + +lemma while_soundness_4: + "test_seq t \ -r \ Sum t \ (\n . x \ t n*-p\pSum t n) \ -p\x \ -r\1" + by (smt one_def pSum_test pre_post_galois sub_mult_closed test_seq_def while_soundness_2) + +end + +class pre_post_spec_hoare_pc_2 = pre_post_spec_hoare + hoare_calculus_pc_2 +begin + +text \Theorem 43.3\ + +lemma pre_post_while_pc: + "x \ -p*-q\-q \ -p\x \ -q\--p*-q" + by (metis pre_post_galois sub_mult_closed while_soundness_pc) + +end + +class pre_post_spec_hoare_pc = pre_post_spec_hoare + hoare_calculus_pc +begin + +subclass pre_post_spec_hoare_pc_2 .. + +lemma pre_post_one_one_top: + "1\1 = top" + using order.eq_iff pre_one_one pre_post_one_one by auto + +end + +class pre_post_spec_H = pre_post_spec_greatest + box_precondition + havoc + + assumes H_zero_2: "H * bot = bot" + assumes H_split_2: "x \ x * -q * top \ H * --q" +begin + +subclass idempotent_left_semiring_H + apply unfold_locales + apply (rule H_zero_2) + by (smt H_split_2 tests_dual.complement_bot mult_assoc mult_left_zero mult_1_right one_def) + +lemma pre_post_def_iff: + "-p * x * --q \ Z \ x \ Z \ --p * top \ H * -q" +proof (rule iffI) + assume "-p * x * --q \ Z" + hence "x * --q * top \ Z \ --p * top" + by (smt (verit, ccfv_threshold) Z_left_zero_above_one case_split_left_sup mult_assoc mult_left_isotone mult_right_dist_sup mult_right_isotone top_greatest top_mult_top) + thus "x \ Z \ --p * top \ H * -q" + by (metis sup_left_isotone order_trans H_split_2 tests_dual.double_negation) +next + assume "x \ Z \ --p * top \ H * -q" + hence "-p * x * --q \ -p * (Z * --q \ --p * top * --q \ H * -q * --q)" + by (metis mult_left_isotone mult_right_dist_sup mult_right_isotone mult_assoc) + thus "-p * x * --q \ Z" + by (metis H_zero_2 Z_mult_decreasing sup_commute sup_bot_left mult_assoc mult_right_dist_sup 
mult_right_isotone order_trans test_mult_left_dist_shunt test_mult_left_sub_dist_shunt tests_dual.top_def" +qed + +lemma pre_post_def: + "-p\-q = Z \ --p*top \ H*-q" + by (meson order.antisym order_refl pre_Z pre_post_galois pre_post_def_iff) + +end + +class pre_post_L = pre_post_spec_greatest + box_while + left_conway_semiring_L + left_kleene_conway_semiring + + assumes circ_below_L_add_star: "x\<^sup>\ \ L \ x\<^sup>\" +begin + +text \a loop does not abort if its body does not abort\ +text \this avoids abortion from all states; alternatively, from states in -r if -r is an invariant\ + +lemma body_abort_loop: + assumes "Z = L" + and "x \ -p\1" + shows "-p\x \ 1\1" +proof - + have "-p * x * bot \ L" + by (metis assms pre_Z pre_post_galois tests_dual.sba_dual.one_def tests_dual.top_double_complement) + hence "(-p * x)\<^sup>\ * bot \ L" + by (metis L_split le_iff_sup star_left_induct sup_bot_left) + hence "(-p * x)\<^sup>\ * bot \ L" + by (smt L_left_zero L_split sup_commute circ_below_L_add_star le_iff_sup mult_right_dist_sup) + thus ?thesis + by (metis assms(1) a_restrict mult_isotone pre_pc_Z pre_post_compose_2 pre_post_one_one tests_dual.sba_dual.one_def while_def tests_dual.sup_right_zero) +qed + +end + +class pre_post_spec_Hd = pre_post_spec_least + diamond_precondition + idempotent_left_semiring_Hd + + assumes d_mult_top: "d(x) * top = x * top" +begin + +subclass pre_post_spec_least_Hd + apply unfold_locales + by (simp add: d_mult_top diamond_x_1 pre_def) + +end + +end + diff --git a/thys/Correctness_Algebras/Preconditions.thy b/thys/Correctness_Algebras/Preconditions.thy new file mode 100644 --- /dev/null +++ b/thys/Correctness_Algebras/Preconditions.thy @@ -0,0 +1,276 @@ +(* Title: Preconditions + Author: Walter Guttmann + Maintainer: Walter Guttmann +*) + +section \Preconditions\ + +theory Preconditions + +imports Tests + +begin + +class pre = + fixes pre :: "'a \ 'a \ 'a" (infixr "\" 55) + +class precondition = tests + pre + + assumes pre_closed: "x\-q = 
--(x\-q)" + assumes pre_seq: "x*y\-q = x\y\-q" + assumes pre_lower_bound_right: "x\-p*-q \ x\-q" + assumes pre_one_increasing: "-q \ 1\-q" +begin + +text \Theorem 39.2\ + +lemma pre_sub_distr: + "x\-p*-q \ (x\-p)*(x\-q)" + by (smt (z3) pre_closed pre_lower_bound_right tests_dual.sub_commutative tests_dual.sub_sup_closed tests_dual.least_upper_bound) + +text \Theorem 39.5\ + +lemma pre_below_one: + "x\-p \ 1" + by (metis pre_closed tests_dual.sub_bot_least) + +lemma pre_lower_bound_left: + "x\-p*-q \ x\-p" + using pre_lower_bound_right tests_dual.sub_commutative by fastforce + +text \Theorem 39.1\ + +lemma pre_iso: + "-p \ -q \ x\-p \ x\-q" + by (metis leq_def pre_lower_bound_right) + +text \Theorem 39.4 and Theorem 40.9\ + +lemma pre_below_pre_one: + "x\-p \ x\1" + using tests_dual.sba_dual.one_def pre_iso tests_dual.sub_bot_least by blast + +text \Theorem 39.3\ + +lemma pre_seq_below_pre_one: + "x*y\1 \ x\1" + by (metis one_def pre_below_pre_one pre_closed pre_seq) + +text \Theorem 39.6\ + +lemma pre_compose: + "-p \ x\-q \ -q \ y\-r \ -p \ x*y\-r" + by (metis pre_closed pre_iso tests_dual.transitive pre_seq) + +(* +lemma pre_test_test: "-p*(-p\-q) = -p*-q" nitpick [expect=genuine,card=2] oops +lemma pre_test_promote: "-p\-q = -p\-p*-q" nitpick [expect=genuine,card=2] oops +lemma pre_test: "-p\-q = --p\-q" nitpick [expect=genuine,card=2] oops +lemma pre_test: "-p\-q = -p*-q" nitpick [expect=genuine,card=2] oops +lemma pre_distr_mult: "x\-p*-q = (x\-p)*(x\-q)" nitpick [expect=genuine,card=4] oops +lemma pre_distr_plus: "x\-p\-q = (x\-p)*(x\-q)" nitpick [expect=genuine,card=2] oops +*) + +end + +class precondition_test_test = precondition + + assumes pre_test_test: "-p*(-p\-q) = -p*-q" +begin + +lemma pre_one: + "1\-p = -p" + by (metis pre_closed pre_test_test tests_dual.sba_dual.one_def tests_dual.sup_left_unit) + +lemma pre_import: + "-p*(x\-q) = -p*(-p*x\-q)" + by (metis pre_closed pre_seq pre_test_test) + +lemma pre_import_composition: + "-p*(-p*x*y\-q) = 
-p*(x\y\-q)" + by (metis pre_closed pre_seq pre_import) + +lemma pre_import_equiv: + "-p \ x\-q \ -p \ -p*x\-q" + by (metis leq_def pre_closed pre_import) + +lemma pre_import_equiv_mult: + "-p*-q \ x\-s \ -p*-q \ -q*x\-s" + by (smt leq_def pre_closed sub_assoc sub_mult_closed pre_import) + +(* +lemma pre_test_promote: "-p\-q = -p\-p*-q" nitpick [expect=genuine,card=2] oops +lemma pre_test: "-p\-q = --p\-q" nitpick [expect=genuine,card=2] oops +lemma pre_test: "-p\-q = -p*-q" nitpick [expect=genuine,card=2] oops +lemma pre_distr_mult: "x\-p*-q = (x\-p)*(x\-q)" nitpick [expect=genuine,card=4] oops +lemma pre_distr_plus: "x\-p\-q = (x\-p)*(x\-q)" nitpick [expect=genuine,card=2] oops +*) + +end + +class precondition_promote = precondition + + assumes pre_test_promote: "-p\-q = -p\-p*-q" +begin + +lemma pre_mult_test_promote: + "x*-p\-q = x*-p\-p*-q" + by (metis pre_seq pre_test_promote sub_mult_closed) + +(* +lemma pre_test_test: "-p*(-p\-q) = -p*-q" nitpick [expect=genuine,card=2] oops +lemma pre_test: "-p\-q = --p\-q" nitpick [expect=genuine,card=2] oops +lemma pre_test: "-p\-q = -p*-q" nitpick [expect=genuine,card=2] oops +lemma pre_distr_mult: "x\-p*-q = (x\-p)*(x\-q)" nitpick [expect=genuine,card=4] oops +lemma pre_distr_plus: "x\-p\-q = (x\-p)*(x\-q)" nitpick [expect=genuine,card=2] oops +*) + +end + +class precondition_test_box = precondition + + assumes pre_test: "-p\-q = --p\-q" +begin + +lemma pre_test_neg: + "--p*(-p\-q) = --p" + by (simp add: pre_test) + +lemma pre_bot: + "bot\-q = 1" + by (metis pre_test tests_dual.sba_dual.one_def tests_dual.sba_dual.sup_left_zero tests_dual.top_double_complement) + +lemma pre_export: + "-p*x\-q = --p\(x\-q)" + by (metis pre_closed pre_seq pre_test) + +lemma pre_neg_mult: + "--p \ -p*x\-q" + by (metis leq_def pre_closed pre_seq pre_test_neg) + +lemma pre_test_test_same: + "-p\-p = 1" + using pre_test tests_dual.sba_dual.less_eq_sup_top tests_dual.sba_dual.reflexive by auto + +lemma test_below_pre_test_mult: + "-q \ 
-p\-p*-q" + by (metis pre_test tests_dual.sba_dual.reflexive tests_dual.sba_dual.shunting tests_dual.sub_sup_closed) + +lemma test_below_pre_test: + "-q \ -p\-q" + by (simp add: pre_test tests_dual.sba_dual.upper_bound_right) + +lemma test_below_pre_test_2: + "--p \ -p\-q" + by (simp add: pre_test tests_dual.sba_dual.upper_bound_left) + +lemma pre_test_bot: + "-p\bot = --p" + by (metis pre_test tests_dual.sba_dual.sup_right_unit tests_dual.top_double_complement) + +lemma pre_test_one: + "-p\1 = 1" + by (metis pre_seq pre_bot tests_dual.sup_right_zero) + +subclass precondition_test_test + apply unfold_locales + by (simp add: pre_test tests_dual.sup_complement_intro) + +subclass precondition_promote + apply unfold_locales + by (metis pre_test tests_dual.sba_dual.sub_commutative tests_dual.sub_sup_closed tests_dual.inf_complement_intro) + +(* +lemma pre_test: "-p\-q = -p*-q" nitpick [expect=genuine,card=2] oops +lemma pre_distr_mult: "x\-p*-q = (x\-p)*(x\-q)" oops +lemma pre_distr_plus: "x\-p\-q = (x\-p)*(x\-q)" nitpick [expect=genuine,card=2] oops +*) + +end + +class precondition_test_diamond = precondition + + assumes pre_test: "-p\-q = -p*-q" +begin + +lemma pre_test_neg: + "--p*(-p\-q) = bot" + by (simp add: pre_test tests_dual.sub_associative tests_dual.sub_commutative) + +lemma pre_bot: + "bot\-q = bot" + by (metis pre_test tests_dual.sup_left_zero tests_dual.top_double_complement) + +lemma pre_export: + "-p*x\-q = -p*(x\-q)" + by (metis pre_closed pre_seq pre_test) + +lemma pre_neg_mult: + "-p*x\-q \ -p" + by (metis pre_closed pre_export tests_dual.upper_bound_left) + +lemma pre_test_test_same: + "-p\-p = -p" + by (simp add: pre_test) + +lemma test_above_pre_test_plus: + "--p\-p\-q \ -q" + using pre_test tests_dual.sba_dual.inf_complement_intro tests_dual.sub_commutative tests_dual.sub_inf_def tests_dual.upper_bound_left by auto + +lemma test_above_pre_test: + "-p\-q \ -q" + by (simp add: pre_test tests_dual.upper_bound_right) + +lemma test_above_pre_test_2: + 
"-p\-q \ -p" + by (simp add: pre_test tests_dual.upper_bound_left) + +lemma pre_test_bot: + "-p\bot = bot" + by (metis pre_test tests_dual.sup_right_zero tests_dual.top_double_complement) + +lemma pre_test_one: + "-p\1 = -p" + by (metis pre_test tests_dual.complement_top tests_dual.sup_right_unit) + +subclass precondition_test_test + apply unfold_locales + by (simp add: pre_test tests_dual.sub_associative) + +subclass precondition_promote + apply unfold_locales + by (metis pre_seq pre_test tests_dual.sup_idempotent) + +(* +lemma pre_test: "-p\-q = --p\-q" nitpick [expect=genuine,card=2] oops +lemma pre_distr_mult: "x\-p*-q = (x\-p)*(x\-q)" nitpick [expect=genuine,card=6] oops +lemma pre_distr_plus: "x\-p\-q = (x\-p)*(x\-q)" nitpick [expect=genuine,card=2] oops +*) + +end + +class precondition_distr_mult = precondition + + assumes pre_distr_mult: "x\-p*-q = (x\-p)*(x\-q)" +begin + +(* +lemma pre_test_test: "-p*(-p\-q) = -p*-q" nitpick [expect=genuine,card=2] oops +lemma pre_test_promote: "-p\-q = -p\-p*-q" nitpick [expect=genuine,card=2] oops +lemma pre_test: "-p\-q = --p\-q" nitpick [expect=genuine,card=2] oops +lemma pre_test: "-p\-q = -p*-q" nitpick [expect=genuine,card=2] oops +lemma pre_distr_plus: "x\-p\-q = (x\-p)*(x\-q)" nitpick [expect=genuine,card=2] oops +*) + +end + +class precondition_distr_plus = precondition + + assumes pre_distr_plus: "x\-p\-q = (x\-p)\(x\-q)" +begin + +(* +lemma pre_test_test: "-p*(-p\-q) = -p*-q" nitpick [expect=genuine,card=2] oops +lemma pre_test_promote: "-p\-q = -p\-p*-q" nitpick [expect=genuine,card=2] oops +lemma pre_test: "-p\-q = --p\-q" nitpick [expect=genuine,card=2] oops +lemma pre_test: "-p\-q = -p*-q" nitpick [expect=genuine,card=2] oops +lemma pre_distr_mult: "x\-p*-q = (x\-p)*(x\-q)" nitpick [expect=genuine,card=4] oops +*) + +end + +end + diff --git a/thys/Correctness_Algebras/ROOT b/thys/Correctness_Algebras/ROOT new file mode 100644 --- /dev/null +++ b/thys/Correctness_Algebras/ROOT @@ -0,0 +1,53 @@ +chapter AFP + 
+session Correctness_Algebras (AFP) = HOL + + + options [timeout = 600] + + sessions + Stone_Kleene_Relation_Algebras + Subset_Boolean_Algebras + MonoBoolTranAlgebra + + theories + Base + Omega_Algebras + Capped_Omega_Algebras + General_Refinement_Algebras + Lattice_Ordered_Semirings + Boolean_Semirings + Binary_Iterings + Binary_Iterings_Strict + Binary_Iterings_Nonstrict + Tests + Test_Iterings + N_Semirings + N_Semirings_Boolean + N_Semirings_Modal + Approximation + Recursion_Strict + N_Algebras + Recursion + N_Omega_Algebras + N_Omega_Binary_Iterings + N_Relation_Algebras + Domain + Domain_Iterings + Domain_Recursion + Extended_Designs + Relative_Domain + Relative_Modal + Complete_Tests + Complete_Domain + Preconditions + Hoare + Hoare_Modal + Pre_Post + Pre_Post_Modal + Monotonic_Boolean_Transformers + Monotonic_Boolean_Transformers_Instances + + document_files + "root.tex" + "root.bib" + diff --git a/thys/Correctness_Algebras/Recursion.thy b/thys/Correctness_Algebras/Recursion.thy new file mode 100644 --- /dev/null +++ b/thys/Correctness_Algebras/Recursion.thy @@ -0,0 +1,634 @@ +(* Title: Recursion + Author: Walter Guttmann + Maintainer: Walter Guttmann +*) + +section \Recursion\ + +theory Recursion + +imports Approximation N_Algebras + +begin + +class n_algebra_apx = n_algebra + apx + + assumes apx_def: "x \ y \ x \ y \ L \ C y \ x \ n(x) * top" +begin + +lemma apx_transitive_2: + assumes "x \ y" + and "y \ z" + shows "x \ z" +proof - + have "C z \ C (y \ n(y) * top)" + using assms(2) apx_def le_inf_iff by blast + also have "... = C y \ n(y) * top" + by (simp add: C_n_mult_closed inf_sup_distrib1) + also have "... \ x \ n(x) * top \ n(y) * top" + using assms(1) apx_def sup_left_isotone by blast + also have "... = x \ n(x) * top \ n(C y) * top" + by (simp add: n_C) + also have "... 
\ x \ n(x) * top" + by (metis assms(1) sup_assoc sup_idem sup_right_isotone apx_def mult_left_isotone n_add_n_top n_isotone) + finally show ?thesis + by (smt assms sup_assoc sup_commute apx_def le_iff_sup) +qed + +lemma apx_meet_L: + assumes "y \ x" + shows "x \ L \ y \ L" +proof - + have "x \ L = C x \ L" + by (simp add: inf.left_commute inf.sup_monoid.add_assoc n_L_top_meet_L) + also have "... \ (y \ n(y) * top) \ L" + using assms apx_def inf.sup_left_isotone by blast + also have "... = (y \ L) \ (n(y) * top \ L)" + by (simp add: inf_sup_distrib2) + also have "... \ (y \ L) \ n(y \ L) * top" + using n_n_meet_L sup_right_isotone by force + finally show ?thesis + by (metis le_iff_sup inf_le2 n_less_eq_char) +qed + +text \AACP Theorem 4.1\ + +subclass apx_biorder + apply unfold_locales + apply (simp add: apx_def inf.coboundedI2) + apply (metis sup_same_context order.antisym apx_def apx_meet_L relative_equality) + using apx_transitive_2 by blast + +lemma sup_apx_left_isotone_2: + assumes "x \ y" + shows "x \ z \ y \ z" +proof - + have 1: "x \ z \ y \ z \ L" + by (smt assms sup_assoc sup_commute sup_left_isotone apx_def) + have "C (y \ z) \ x \ n(x) * top \ C z" + using assms apx_def inf_sup_distrib1 sup_left_isotone by auto + also have "... \ x \ z \ n(x) * top" + using inf.coboundedI1 inf.sup_monoid.add_commute sup.cobounded1 sup.cobounded2 sup_assoc sup_least sup_right_isotone by auto + also have "... \ x \ z \ n(x \ z) * top" + using mult_isotone n_left_upper_bound semiring.add_left_mono by force + finally show ?thesis + using 1 apx_def by blast +qed + +lemma mult_apx_left_isotone_2: + assumes "x \ y" + shows "x * z \ y * z" +proof - + have "x * z \ y * z \ L * z" + by (metis assms apx_def mult_left_isotone mult_right_dist_sup) + hence 1: "x * z \ y * z \ L" + using n_L_below_L order_lesseq_imp semiring.add_left_mono by blast + have "C (y * z) = C y * z" + by (simp add: n_L_T_meet_mult) + also have "... 
\ x * z \ n(x) * top * z" + by (metis assms apx_def mult_left_isotone mult_right_dist_sup) + also have "... \ x * z \ n(x * z) * top" + by (simp add: n_top_split) + finally show ?thesis + using 1 by (simp add: apx_def) +qed + +lemma mult_apx_right_isotone_2: + assumes "x \ y" + shows "z * x \ z * y" +proof - + have "z * x \ z * y \ z * L" + by (metis assms apx_def mult_left_dist_sup mult_right_isotone) + also have "... \ z * y \ z * bot \ L" + using n_L_split_L semiring.add_left_mono sup_assoc by presburger + finally have 1: "z * x \ z * y \ L" + using mult_right_isotone sup.absorb_iff1 by auto + have "C (z * y) \ z * C y" + by (simp add: n_L_T_meet_mult n_L_T_meet_mult_propagate) + also have "... \ z * (x \ n(x) * top)" + using assms apx_def mult_right_isotone by blast + also have "... = z * x \ z * n(x) * top" + by (simp add: mult_left_dist_sup mult_assoc) + also have "... \ z * x \ n(z * x) * top" + by (simp add: n_split_top) + finally show ?thesis + using 1 apx_def by blast +qed + +text \AACP Theorem 4.1 and Theorem 4.2\ + +subclass apx_semiring + apply unfold_locales + apply (simp add: apx_def n_L_below_nL_top sup.absorb2) + using sup_apx_left_isotone_2 apply blast + using mult_apx_left_isotone_2 apply blast + by (simp add: mult_apx_right_isotone_2) + +text \AACP Theorem 4.2\ + +lemma meet_L_apx_isotone: + "x \ y \ x \ L \ y \ L" + by (smt (verit) apx_meet_L apx_def inf.cobounded2 inf.left_commute n_L_top_meet_L n_less_eq_char sup.absorb2) + +text \AACP Theorem 4.2\ + +lemma n_L_apx_isotone: + assumes "x \ y" + shows "n(x) * L \ n(y) * L" +proof - + have "C (n(y) * L) \ n(C y) * L" + by (simp add: n_C) + also have "... 
\ n(x) * L \ n(n(x) * L) * top" + by (metis assms apx_def n_add_n_top n_galois n_isotone n_n_L) + finally show ?thesis + using apx_def le_inf_iff n_L_decreasing_meet_L sup.absorb2 by auto +qed + +definition kappa_apx_meet :: "('a \ 'a) \ bool" + where "kappa_apx_meet f \ apx.has_least_fixpoint f \ has_apx_meet (\ f) (\ f) \ \ f = \ f \ \ f" + +definition kappa_mu_nu :: "('a \ 'a) \ bool" + where "kappa_mu_nu f \ apx.has_least_fixpoint f \ \ f = \ f \ (\ f \ L)" + +definition nu_below_mu_nu :: "('a \ 'a) \ bool" + where "nu_below_mu_nu f \ C (\ f) \ \ f \ (\ f \ L) \ n(\ f) * top" + +definition nu_below_mu_nu_2 :: "('a \ 'a) \ bool" + where "nu_below_mu_nu_2 f \ C (\ f) \ \ f \ (\ f \ L) \ n(\ f \ (\ f \ L)) * top" + +definition mu_nu_apx_nu :: "('a \ 'a) \ bool" + where "mu_nu_apx_nu f \ \ f \ (\ f \ L) \ \ f" + +definition mu_nu_apx_meet :: "('a \ 'a) \ bool" + where "mu_nu_apx_meet f \ has_apx_meet (\ f) (\ f) \ \ f \ \ f = \ f \ (\ f \ L)" + +definition apx_meet_below_nu :: "('a \ 'a) \ bool" + where "apx_meet_below_nu f \ has_apx_meet (\ f) (\ f) \ \ f \ \ f \ \ f" + +lemma mu_below_l: + "\ f \ \ f \ (\ f \ L)" + by simp + +lemma l_below_nu: + "has_least_fixpoint f \ has_greatest_fixpoint f \ \ f \ (\ f \ L) \ \ f" + by (simp add: mu_below_nu) + +lemma n_l_nu: + "has_least_fixpoint f \ has_greatest_fixpoint f \ (\ f \ (\ f \ L)) \ L = \ f \ L" + by (meson l_below_nu inf.cobounded1 inf.sup_same_context order_trans sup_ge2) + +lemma l_apx_mu: + "\ f \ (\ f \ L) \ \ f" +proof - + have 1: "\ f \ (\ f \ L) \ \ f \ L" + using sup_right_isotone by auto + have "C (\ f) \ \ f \ (\ f \ L) \ n(\ f \ (\ f \ L)) * top" + by (simp add: le_supI1) + thus ?thesis + using 1 apx_def by blast +qed + +text \AACP Theorem 4.8 implies Theorem 4.9\ + +lemma nu_below_mu_nu_nu_below_mu_nu_2: + assumes "nu_below_mu_nu f" + shows "nu_below_mu_nu_2 f" +proof - + have "C (\ f) = C (C (\ f))" + by auto + also have "... 
\ C (\ f \ (\ f \ L) \ n(\ f) * top)" + using assms nu_below_mu_nu_def by auto + also have "... = C (\ f \ (\ f \ L)) \ C (n(\ f) * top)" + using inf_sup_distrib1 by auto + also have "... = C (\ f \ (\ f \ L)) \ n(\ f) * top" + by (simp add: C_n_mult_closed) + also have "... \ \ f \ (\ f \ L) \ n(\ f) * top" + using inf_le2 sup_left_isotone by blast + also have "... = \ f \ (\ f \ L) \ n(\ f \ L) * top" + using n_n_meet_L by auto + also have "... \ \ f \ (\ f \ L) \ n(\ f \ (\ f \ L)) * top" + using mult_isotone n_right_upper_bound semiring.add_left_mono by auto + finally show ?thesis + by (simp add: nu_below_mu_nu_2_def) +qed + +text \AACP Theorem 4.9 implies Theorem 4.8\ + +lemma nu_below_mu_nu_2_nu_below_mu_nu: + assumes "has_least_fixpoint f" + and "has_greatest_fixpoint f" + and "nu_below_mu_nu_2 f" + shows "nu_below_mu_nu f" +proof - + have "C (\ f) \ \ f \ (\ f \ L) \ n(\ f \ (\ f \ L)) * top" + using assms(3) nu_below_mu_nu_2_def by blast + also have "... \ \ f \ (\ f \ L) \ n(\ f) * top" + by (metis assms(1,2) order.eq_iff n_n_meet_L n_l_nu) + finally show ?thesis + using nu_below_mu_nu_def by blast +qed + +lemma nu_below_mu_nu_equivalent: + "has_least_fixpoint f \ has_greatest_fixpoint f \ (nu_below_mu_nu f \ nu_below_mu_nu_2 f)" + using nu_below_mu_nu_2_nu_below_mu_nu nu_below_mu_nu_nu_below_mu_nu_2 by blast + +text \AACP Theorem 4.9 implies Theorem 4.10\ + +lemma nu_below_mu_nu_2_mu_nu_apx_nu: + assumes "has_least_fixpoint f" + and "has_greatest_fixpoint f" + and "nu_below_mu_nu_2 f" + shows "mu_nu_apx_nu f" +proof - + have "\ f \ (\ f \ L) \ \ f \ L" + using assms(1,2) l_below_nu le_supI1 by blast + thus ?thesis + using assms(3) apx_def mu_nu_apx_nu_def nu_below_mu_nu_2_def by blast +qed + +text \AACP Theorem 4.10 implies Theorem 4.11\ + +lemma mu_nu_apx_nu_mu_nu_apx_meet: + assumes "mu_nu_apx_nu f" + shows "mu_nu_apx_meet f" +proof - + let ?l = "\ f \ (\ f \ L)" + have "is_apx_meet (\ f) (\ f) ?l" + proof (unfold is_apx_meet_def, intro conjI) + show 
"?l \ \ f" + by (simp add: l_apx_mu) + show "?l \ \ f" + using assms mu_nu_apx_nu_def by blast + show "\w. w \ \ f \ w \ \ f \ w \ ?l" + by (metis apx_meet_L le_inf_iff sup.absorb1 sup_apx_left_isotone) + qed + thus ?thesis + by (simp add: apx_meet_char mu_nu_apx_meet_def) +qed + +text \AACP Theorem 4.11 implies Theorem 4.12\ + +lemma mu_nu_apx_meet_apx_meet_below_nu: + "has_least_fixpoint f \ has_greatest_fixpoint f \ mu_nu_apx_meet f \ apx_meet_below_nu f" + using apx_meet_below_nu_def l_below_nu mu_nu_apx_meet_def by auto + +text \AACP Theorem 4.12 implies Theorem 4.9\ + +lemma apx_meet_below_nu_nu_below_mu_nu_2: + assumes "apx_meet_below_nu f" + shows "nu_below_mu_nu_2 f" +proof - + let ?l = "\ f \ (\ f \ L)" + have "\m . m \ \ f \ m \ \ f \ m \ \ f \ C (\ f) \ ?l \ n(?l) * top" + proof + fix m + show "m \ \ f \ m \ \ f \ m \ \ f \ C (\ f) \ ?l \ n(?l) * top" + proof + assume 1: "m \ \ f \ m \ \ f \ m \ \ f" + hence "m \ ?l" + by (smt (z3) apx_def sup.left_commute sup_inf_distrib1 sup_left_divisibility) + hence "m \ n(m) * top \ ?l \ n(?l) * top" + by (metis sup_mono mult_left_isotone n_isotone) + thus "C (\ f) \ ?l \ n(?l) * top" + using 1 apx_def order.trans by blast + qed + qed + thus ?thesis + by (smt (verit, ccfv_threshold) assms apx_meet_below_nu_def apx_meet_same apx_meet_unique is_apx_meet_def nu_below_mu_nu_2_def) +qed + +text \AACP Theorem 4.5 implies Theorem 4.6\ + +lemma has_apx_least_fixpoint_kappa_apx_meet: + assumes "has_least_fixpoint f" + and "has_greatest_fixpoint f" + and "apx.has_least_fixpoint f" + shows "kappa_apx_meet f" +proof - + have 1: "\w . w \ \ f \ w \ \ f \ C (\ f) \ w \ n(w) * top" + by (metis assms(2,3) apx_def inf.sup_right_isotone order_trans kappa_below_nu) + have "\w . w \ \ f \ w \ \ f \ w \ \ f \ L" + by (metis assms(1,3) sup_left_isotone apx_def mu_below_kappa order_trans) + hence "\w . 
w \ \ f \ w \ \ f \ w \ \ f" + using 1 apx_def by blast + hence "is_apx_meet (\ f) (\ f) (\ f)" + by (simp add: assms is_apx_meet_def kappa_apx_below_mu kappa_apx_below_nu) + thus ?thesis + by (simp add: assms(3) kappa_apx_meet_def apx_meet_char) +qed + +text \AACP Theorem 4.6 implies Theorem 4.12\ + +lemma kappa_apx_meet_apx_meet_below_nu: + "has_greatest_fixpoint f \ kappa_apx_meet f \ apx_meet_below_nu f" + using apx_meet_below_nu_def kappa_apx_meet_def kappa_below_nu by force + +text \AACP Theorem 4.12 implies Theorem 4.7\ + +lemma apx_meet_below_nu_kappa_mu_nu: + assumes "has_least_fixpoint f" + and "has_greatest_fixpoint f" + and "isotone f" + and "apx.isotone f" + and "apx_meet_below_nu f" + shows "kappa_mu_nu f" +proof - + let ?l = "\ f \ (\ f \ L)" + let ?m = "\ f \ \ f" + have 1: "?m = ?l" + by (metis assms(1,2,5) apx_meet_below_nu_nu_below_mu_nu_2 mu_nu_apx_meet_def mu_nu_apx_nu_mu_nu_apx_meet nu_below_mu_nu_2_mu_nu_apx_nu) + have 2: "?l \ f(?l) \ L" + proof - + have "?l \ \ f \ L" + using sup_right_isotone by auto + also have "... = f(\ f) \ L" + by (simp add: assms(1) mu_unfold) + also have "... \ f(?l) \ L" + using assms(3) isotone_def sup_ge1 sup_left_isotone by blast + finally show "?l \ f(?l) \ L" + . + qed + have "C (f(?l)) \ ?l \ n(?l) * top" + proof - + have "C (f(?l)) \ C (f(\ f))" + using assms(1-3) l_below_nu inf.sup_right_isotone isotone_def by blast + also have "... = C (\ f)" + by (metis assms(2) nu_unfold) + also have "... \ ?l \ n(?l) * top" + by (metis assms(5) apx_meet_below_nu_nu_below_mu_nu_2 nu_below_mu_nu_2_def) + finally show "C (f(?l)) \ ?l \ n(?l) * top" + . 
+ qed + hence 3: "?l \ f(?l)" + using 2 apx_def by blast + have 4: "f(?l) \ \ f" + proof - + have "?l \ \ f" + by (simp add: l_apx_mu) + thus "f(?l) \ \ f" + by (metis assms(1,4) mu_unfold ord.isotone_def) + qed + have "f(?l) \ \ f" + proof - + have "?l \ \ f" + using 1 + by (metis apx_meet_below_nu_def assms(5) apx_meet is_apx_meet_def) + thus "f(?l) \ \ f" + by (metis assms(2,4) nu_unfold ord.isotone_def) + qed + hence "f(?l) \ ?l" + using 1 4 apx_meet_below_nu_def assms(5) apx_meet is_apx_meet_def by fastforce + hence 5: "f(?l) = ?l" + using 3 apx.order.antisym by blast + have "\y . f(y) = y \ ?l \ y" + proof + fix y + show "f(y) = y \ ?l \ y" + proof + assume 6: "f(y) = y" + hence 7: "?l \ y \ L" + using assms(1) inf.cobounded2 is_least_fixpoint_def least_fixpoint semiring.add_mono by blast + have "y \ \ f" + using 6 assms(2) greatest_fixpoint is_greatest_fixpoint_def by auto + hence "C y \ ?l \ n(?l) * top" + using assms(5) apx_meet_below_nu_nu_below_mu_nu_2 inf.sup_right_isotone nu_below_mu_nu_2_def order_trans by blast + thus "?l \ y" + using 7 apx_def by blast + qed + qed + thus ?thesis + using 5 apx.least_fixpoint_same apx.has_least_fixpoint_def apx.is_least_fixpoint_def kappa_mu_nu_def by auto +qed + +text \AACP Theorem 4.7 implies Theorem 4.5\ + +lemma kappa_mu_nu_has_apx_least_fixpoint: + "kappa_mu_nu f \ apx.has_least_fixpoint f" + by (simp add: kappa_mu_nu_def) + +text \AACP Theorem 4.8 implies Theorem 4.7\ + +lemma nu_below_mu_nu_kappa_mu_nu: + "has_least_fixpoint f \ has_greatest_fixpoint f \ isotone f \ apx.isotone f \ nu_below_mu_nu f \ kappa_mu_nu f" + using apx_meet_below_nu_kappa_mu_nu mu_nu_apx_meet_apx_meet_below_nu mu_nu_apx_nu_mu_nu_apx_meet nu_below_mu_nu_2_mu_nu_apx_nu nu_below_mu_nu_nu_below_mu_nu_2 by blast + +text \AACP Theorem 4.7 implies Theorem 4.8\ + +lemma kappa_mu_nu_nu_below_mu_nu: + "has_least_fixpoint f \ has_greatest_fixpoint f \ kappa_mu_nu f \ nu_below_mu_nu f" + by (simp add: apx_meet_below_nu_nu_below_mu_nu_2 
has_apx_least_fixpoint_kappa_apx_meet kappa_apx_meet_apx_meet_below_nu kappa_mu_nu_has_apx_least_fixpoint nu_below_mu_nu_2_nu_below_mu_nu) + +definition kappa_mu_nu_L :: "('a \ 'a) \ bool" + where "kappa_mu_nu_L f \ apx.has_least_fixpoint f \ \ f = \ f \ n(\ f) * L" + +definition nu_below_mu_nu_L :: "('a \ 'a) \ bool" + where "nu_below_mu_nu_L f \ C (\ f) \ \ f \ n(\ f) * top" + +definition mu_nu_apx_nu_L :: "('a \ 'a) \ bool" + where "mu_nu_apx_nu_L f \ \ f \ n(\ f) * L \ \ f" + +definition mu_nu_apx_meet_L :: "('a \ 'a) \ bool" + where "mu_nu_apx_meet_L f \ has_apx_meet (\ f) (\ f) \ \ f \ \ f = \ f \ n(\ f) * L" + +lemma n_below_l: + "x \ n(y) * L \ x \ (y \ L)" + using n_L_decreasing_meet_L semiring.add_left_mono by auto + +lemma n_equal_l: + assumes "nu_below_mu_nu_L f" + shows "\ f \ n(\ f) * L = \ f \ (\ f \ L)" +proof - + have "\ f \ L \ (\ f \ n(\ f) * top) \ L" + by (meson assms order.trans inf.boundedI inf.cobounded2 meet_L_below_C nu_below_mu_nu_L_def) + also have "... \ \ f \ (n(\ f) * top \ L)" + by (simp add: inf.coboundedI2 inf.sup_monoid.add_commute inf_sup_distrib1) + also have "... 
\ \ f \ n(\ f) * L" + by (simp add: n_T_meet_L) + finally have "\ f \ (\ f \ L) \ \ f \ n(\ f) * L" + by simp + thus "\ f \ n(\ f) * L = \ f \ (\ f \ L)" + by (meson order.antisym n_below_l) +qed + +text \AACP Theorem 4.14 implies Theorem 4.8\ + +lemma nu_below_mu_nu_L_nu_below_mu_nu: + "nu_below_mu_nu_L f \ nu_below_mu_nu f" + by (metis sup_assoc sup_right_top mult_left_dist_sup n_equal_l nu_below_mu_nu_L_def nu_below_mu_nu_def) + +text \AACP Theorem 4.14 implies Theorem 4.13\ + +lemma nu_below_mu_nu_L_kappa_mu_nu_L: + "has_least_fixpoint f \ has_greatest_fixpoint f \ isotone f \ apx.isotone f \ nu_below_mu_nu_L f \ kappa_mu_nu_L f" + using kappa_mu_nu_L_def kappa_mu_nu_def n_equal_l nu_below_mu_nu_L_nu_below_mu_nu nu_below_mu_nu_kappa_mu_nu by force + +text \AACP Theorem 4.14 implies Theorem 4.15\ + +lemma nu_below_mu_nu_L_mu_nu_apx_nu_L: + "has_least_fixpoint f \ has_greatest_fixpoint f \ nu_below_mu_nu_L f \ mu_nu_apx_nu_L f" + using mu_nu_apx_nu_L_def mu_nu_apx_nu_def n_equal_l nu_below_mu_nu_2_mu_nu_apx_nu nu_below_mu_nu_L_nu_below_mu_nu nu_below_mu_nu_nu_below_mu_nu_2 by auto + +text \AACP Theorem 4.14 implies Theorem 4.16\ + +lemma nu_below_mu_nu_L_mu_nu_apx_meet_L: + "has_least_fixpoint f \ has_greatest_fixpoint f \ nu_below_mu_nu_L f \ mu_nu_apx_meet_L f" + using mu_nu_apx_meet_L_def mu_nu_apx_meet_def mu_nu_apx_nu_mu_nu_apx_meet n_equal_l nu_below_mu_nu_2_mu_nu_apx_nu nu_below_mu_nu_L_nu_below_mu_nu nu_below_mu_nu_nu_below_mu_nu_2 by auto + +text \AACP Theorem 4.15 implies Theorem 4.14\ + +lemma mu_nu_apx_nu_L_nu_below_mu_nu_L: + assumes "has_least_fixpoint f" + and "has_greatest_fixpoint f" + and "mu_nu_apx_nu_L f" + shows "nu_below_mu_nu_L f" +proof - + let ?n = "\ f \ n(\ f) * L" + let ?l = "\ f \ (\ f \ L)" + have "C (\ f) \ ?n \ n(?n) * top" + using assms(3) apx_def mu_nu_apx_nu_L_def by blast + also have "... \ ?n \ n(?l) * top" + using mult_left_isotone n_L_decreasing_meet_L n_isotone semiring.add_left_mono by auto + also have "... 
\ ?n \ n(\ f) * top" + using assms(1,2) l_below_nu mult_left_isotone n_isotone sup_right_isotone by auto + finally show ?thesis + by (metis sup_assoc sup_right_top mult_left_dist_sup nu_below_mu_nu_L_def) +qed + +text \AACP Theorem 4.13 implies Theorem 4.15\ + +lemma kappa_mu_nu_L_mu_nu_apx_nu_L: + "has_greatest_fixpoint f \ kappa_mu_nu_L f \ mu_nu_apx_nu_L f" + using kappa_mu_nu_L_def kappa_apx_below_nu mu_nu_apx_nu_L_def by fastforce + +text \AACP Theorem 4.16 implies Theorem 4.15\ + +lemma mu_nu_apx_meet_L_mu_nu_apx_nu_L: + "mu_nu_apx_meet_L f \ mu_nu_apx_nu_L f" + using apx_meet_char is_apx_meet_def mu_nu_apx_meet_L_def mu_nu_apx_nu_L_def by fastforce + +text \AACP Theorem 4.13 implies Theorem 4.14\ + +lemma kappa_mu_nu_L_nu_below_mu_nu_L: + "has_least_fixpoint f \ has_greatest_fixpoint f \ kappa_mu_nu_L f \ nu_below_mu_nu_L f" + by (simp add: kappa_mu_nu_L_mu_nu_apx_nu_L mu_nu_apx_nu_L_nu_below_mu_nu_L) + +(* +lemma nu_below_mu_nu_nu_below_mu_nu_L: "nu_below_mu_nu f \ nu_below_mu_nu_L f" nitpick [expect=genuine,card=3] oops +*) + +lemma unfold_fold_1: + "isotone f \ has_least_prefixpoint f \ apx.has_least_fixpoint f \ f(x) \ x \ \ f \ x \ L" + by (metis sup_left_isotone apx_def has_least_fixpoint_def is_least_prefixpoint_def least_prefixpoint_char least_prefixpoint_fixpoint order_trans pmu_mu kappa_apx_below_mu) + +lemma unfold_fold_2: + assumes "isotone f" + and "apx.isotone f" + and "has_least_prefixpoint f" + and "has_greatest_fixpoint f" + and "apx.has_least_fixpoint f" + and "f(x) \ x" + and "\ f \ L \ x \ L" + shows "\ f \ x" +proof - + have "\ f \ L = \ f \ L" + by (smt (z3) apx_meet_L assms(4,5) order.eq_iff inf.cobounded1 kappa_apx_below_nu kappa_below_nu le_inf_iff) + hence "\ f = (\ f \ L) \ \ f" + by (metis assms(1-5) apx_meet_below_nu_kappa_mu_nu has_apx_least_fixpoint_kappa_apx_meet sup_commute least_fixpoint_char least_prefixpoint_fixpoint kappa_apx_meet_apx_meet_below_nu kappa_mu_nu_def) + thus ?thesis + by (metis assms(1,3,6,7) sup_least 
is_least_prefixpoint_def least_prefixpoint le_inf_iff pmu_mu) +qed + +end + +class n_algebra_apx_2 = n_algebra + apx + + assumes apx_def: "x \ y \ x \ y \ L \ y \ x \ n(x) * top" +begin + +lemma apx_transitive_2: + assumes "x \ y" + and "y \ z" + shows "x \ z" +proof - + have "z \ y \ n(y) * top" + using assms(2) apx_def by auto + also have "... \ x \ n(x) * top \ n(y) * top" + using assms(1) apx_def sup_left_isotone by blast + also have "... \ x \ n(x) * top" + by (metis assms(1) sup_assoc sup_idem sup_right_isotone apx_def mult_left_isotone n_add_n_top n_isotone) + finally show ?thesis + by (smt assms sup_assoc sup_commute apx_def le_iff_sup) +qed + +lemma apx_meet_L: + assumes "y \ x" + shows "x \ L \ y \ L" +proof - + have "x \ L \ (y \ L) \ (n(y) * top \ L)" + by (metis assms apx_def inf.sup_left_isotone inf_sup_distrib2) + also have "... \ (y \ L) \ n(y \ L) * top" + using n_n_meet_L sup_right_isotone by force + finally show ?thesis + by (metis le_iff_sup inf_le2 n_less_eq_char) +qed + +text \AACP Theorem 4.1\ + +subclass apx_biorder + apply unfold_locales + apply (simp add: apx_def) + using apx_def order.eq_iff n_less_eq_char apply blast + using apx_transitive_2 by blast + +lemma sup_apx_left_isotone_2: + assumes "x \ y" + shows "x \ z \ y \ z" +proof - + have 1: "x \ z \ y \ z \ L" + by (smt assms sup_assoc sup_commute sup_left_isotone apx_def) + have "y \ z \ x \ n(x) * top \ z" + using assms apx_def sup_left_isotone by blast + also have "... 
\ x \ z \ n(x \ z) * top" + by (metis sup_assoc sup_commute sup_right_isotone mult_left_isotone n_right_upper_bound) + finally show ?thesis + using 1 apx_def by auto +qed + +lemma mult_apx_left_isotone_2: + assumes "x \ y" + shows "x * z \ y * z" +proof - + have "x * z \ y * z \ L * z" + by (metis assms apx_def mult_left_isotone mult_right_dist_sup) + hence 1: "x * z \ y * z \ L" + using n_L_below_L order_lesseq_imp semiring.add_left_mono by blast + have "y * z \ x * z \ n(x) * top * z" + by (metis assms apx_def mult_left_isotone mult_right_dist_sup) + also have "... \ x * z \ n(x * z) * top" + by (simp add: n_top_split) + finally show ?thesis + using 1 by (simp add: apx_def) +qed + +lemma mult_apx_right_isotone_2: + assumes "x \ y" + shows "z * x \ z * y" +proof - + have "z * x \ z * y \ z * L" + by (metis assms apx_def mult_left_dist_sup mult_right_isotone) + also have "... \ z * y \ z * bot \ L" + using n_L_split_L semiring.add_left_mono sup_assoc by auto + finally have 1: "z * x \ z * y \ L" + using mult_right_isotone sup.absorb_iff1 by force + have "z * y \ z * (x \ n(x) * top)" + using assms apx_def mult_right_isotone by blast + also have "... = z * x \ z * n(x) * top" + by (simp add: mult_left_dist_sup mult_assoc) + also have "... 
\ z * x \ n(z * x) * top" + by (simp add: n_split_top) + finally show ?thesis + using 1 by (simp add: apx_def) +qed + +end + +end + diff --git a/thys/Correctness_Algebras/Recursion_Strict.thy b/thys/Correctness_Algebras/Recursion_Strict.thy new file mode 100644 --- /dev/null +++ b/thys/Correctness_Algebras/Recursion_Strict.thy @@ -0,0 +1,445 @@ +(* Title: Strict Recursion + Author: Walter Guttmann + Maintainer: Walter Guttmann +*) + +section \Strict Recursion\ + +theory Recursion_Strict + +imports N_Semirings Approximation + +begin + +class semiring_apx = n_semiring + apx + + assumes apx_def: "x \ y \ x \ y \ n(x) * L \ y \ x \ n(x) * top" +begin + +lemma apx_n_order_reverse: + "y \ x \ n(x) \ n(y)" + by (metis apx_def le_iff_sup n_sup_left_absorb_mult n_dist_sup n_export) + +lemma apx_n_order: + "x \ y \ y \ x \ n(x) = n(y)" + by (simp add: apx_n_order_reverse order.antisym) + +lemma apx_transitive: + assumes "x \ y" + and "y \ z" + shows "x \ z" +proof - + have "n(y) * L \ n(x) * L" + by (simp add: apx_n_order_reverse assms(1) mult_left_isotone) + hence 1: "x \ z \ n(x) * L" + by (smt assms sup_assoc sup_right_divisibility apx_def le_iff_sup) + have "z \ x \ n(x) * top \ n(x \ n(x) * top) * top" + by (smt (verit) assms sup_left_isotone order_refl sup_assoc sup_mono apx_def mult_left_isotone n_isotone order_trans) + also have "... 
= x \ n(x) * top" + by (simp add: n_dist_sup n_export n_sup_left_absorb_mult) + finally show ?thesis + using 1 by (simp add: apx_def) +qed + +text \Theorem 16.1\ + +subclass apx_biorder + apply unfold_locales + apply (simp add: apx_def) + apply (smt (verit) order.antisym le_sup_iff apx_def eq_refl le_iff_sup n_galois apx_n_order) + using apx_transitive by blast + +lemma sup_apx_left_isotone: + assumes "x \ y" + shows "x \ z \ y \ z" +proof - + have "x \ y \ n(x) * L \ y \ x \ n(x) * top" + using assms apx_def by auto + hence "z \ x \ z \ y \ n(z \ x) * L \ z \ y \ z \ x \ n(z \ x) * top" + by (metis sup_assoc sup_right_isotone mult_right_sub_dist_sup_right n_dist_sup order_trans) + thus ?thesis + by (simp add: apx_def sup_commute) +qed + +lemma mult_apx_left_isotone: + assumes "x \ y" + shows "x * z \ y * z" +proof - + have "x \ y \ n(x) * L" + using assms apx_def by auto + hence "x * z \ y * z \ n(x) * L" + by (smt (verit, ccfv_threshold) L_left_zero mult_left_isotone semiring.distrib_right mult_assoc) + hence 1: "x * z \ y * z \ n(x * z) * L" + by (meson mult_left_isotone n_mult_left_upper_bound order_lesseq_imp sup_mono) + have "y * z \ x * z \ n(x) * top * z" + by (metis assms apx_def mult_left_isotone mult_right_dist_sup) + hence "y * z \ x * z \ n(x * z) * top" + using mult_isotone n_mult_left_upper_bound order.trans sup_right_isotone top_greatest mult_assoc by presburger + thus ?thesis + using 1 by (simp add: apx_def) +qed + +lemma mult_apx_right_isotone: + assumes "x \ y" + shows "z * x \ z * y" +proof - + have "x \ y \ n(x) * L" + using assms apx_def by auto + hence 1: "z * x \ z * y \ n(z * x) * L" + by (smt sup_assoc sup_ge1 sup_bot_right mult_assoc mult_left_dist_sup mult_right_isotone n_L_split) + have "y \ x \ n(x) * top" + using assms apx_def by auto + hence "z * y \ z * x \ z * n(x) * top" + by (smt mult_assoc mult_left_dist_sup mult_right_isotone) + also have "... 
\ z * x \ n(z * x) * top" + by (smt (verit) sup_assoc le_supI le_sup_iff sup_ge1 sup_bot_right mult_left_dist_sup n_L_split n_top_split order_trans) + finally show ?thesis + using 1 by (simp add: apx_def) +qed + +text \Theorem 16.1 and Theorem 16.2\ + +subclass apx_semiring + apply unfold_locales + apply (metis sup_right_top sup_ge2 apx_def mult_left_one n_L top_greatest) + apply (simp add: sup_apx_left_isotone) + apply (simp add: mult_apx_left_isotone) + by (simp add: mult_apx_right_isotone) + +text \Theorem 16.2\ + +lemma ni_apx_isotone: + "x \ y \ ni(x) \ ni(y)" + using apx_n_order_reverse apx_def le_supI1 n_ni ni_def ni_n_order by force + +text \Theorem 17\ + +definition kappa_apx_meet :: "('a \ 'a) \ bool" + where "kappa_apx_meet f \ apx.has_least_fixpoint f \ has_apx_meet (\ f) (\ f) \ \ f = \ f \ \ f" + +definition kappa_mu_nu :: "('a \ 'a) \ bool" + where "kappa_mu_nu f \ apx.has_least_fixpoint f \ \ f = \ f \ n(\ f) * L" + +definition nu_below_mu_nu :: "('a \ 'a) \ bool" + where "nu_below_mu_nu f \ \ f \ \ f \ n(\ f) * top" + +definition mu_nu_apx_nu :: "('a \ 'a) \ bool" + where "mu_nu_apx_nu f \ \ f \ n(\ f) * L \ \ f" + +definition mu_nu_apx_meet :: "('a \ 'a) \ bool" + where "mu_nu_apx_meet f \ has_apx_meet (\ f) (\ f) \ \ f \ \ f = \ f \ n(\ f) * L" + +definition apx_meet_below_nu :: "('a \ 'a) \ bool" + where "apx_meet_below_nu f \ has_apx_meet (\ f) (\ f) \ \ f \ \ f \ \ f" + +lemma mu_below_l: + "\ f \ \ f \ n(\ f) * L" + by simp + +lemma l_below_nu: + "has_least_fixpoint f \ has_greatest_fixpoint f \ \ f \ n(\ f) * L \ \ f" + by (simp add: mu_below_nu n_L_decreasing) + +lemma n_l_nu: + "has_least_fixpoint f \ has_greatest_fixpoint f \ n(\ f \ n(\ f) * L) = n(\ f)" + by (metis le_iff_sup mu_below_nu n_dist_sup n_n_L) + +lemma l_apx_mu: + "has_least_fixpoint f \ has_greatest_fixpoint f \ \ f \ n(\ f) * L \ \ f" + by (simp add: apx_def le_supI1 n_l_nu) + +text \Theorem 17.4 implies Theorem 17.5\ + +lemma nu_below_mu_nu_mu_nu_apx_nu: + 
"has_least_fixpoint f \ has_greatest_fixpoint f \ nu_below_mu_nu f \ mu_nu_apx_nu f" + by (smt (z3) l_below_nu apx_def le_sup_iff sup.absorb2 sup_commute sup_monoid.add_assoc mu_nu_apx_nu_def n_l_nu nu_below_mu_nu_def) + +text \Theorem 17.5 implies Theorem 17.6\ + +lemma mu_nu_apx_nu_mu_nu_apx_meet: + assumes "has_least_fixpoint f" + and "has_greatest_fixpoint f" + and "mu_nu_apx_nu f" + shows "mu_nu_apx_meet f" +proof - + let ?l = "\ f \ n(\ f) * L" + have "is_apx_meet (\ f) (\ f) ?l" + apply (unfold is_apx_meet_def, intro conjI) + apply (simp add: assms(1,2) l_apx_mu) + using assms(3) mu_nu_apx_nu_def apply blast + by (meson assms(1,2) l_below_nu apx_def order_trans sup_ge1 sup_left_isotone) + thus ?thesis + by (simp add: apx_meet_char mu_nu_apx_meet_def) +qed + +text \Theorem 17.6 implies Theorem 17.7\ + +lemma mu_nu_apx_meet_apx_meet_below_nu: + "has_least_fixpoint f \ has_greatest_fixpoint f \ mu_nu_apx_meet f \ apx_meet_below_nu f" + using apx_meet_below_nu_def l_below_nu mu_nu_apx_meet_def by auto + +text \Theorem 17.7 implies Theorem 17.4\ + +lemma apx_meet_below_nu_nu_below_mu_nu: + assumes "apx_meet_below_nu f" + shows "nu_below_mu_nu f" +proof - + have "\m . m \ \ f \ m \ \ f \ m \ \ f \ \ f \ \ f \ n(m) * top" + by (smt (verit) sup_assoc sup_left_isotone sup_right_top apx_def mult_left_dist_sup order_trans) + thus ?thesis + by (smt (verit) assms sup_right_isotone apx_greatest_lower_bound apx_meet_below_nu_def apx_reflexive mult_left_isotone n_isotone nu_below_mu_nu_def order_trans) +qed + +text \Theorem 17.1 implies Theorem 17.2\ + +lemma has_apx_least_fixpoint_kappa_apx_meet: + assumes "has_least_fixpoint f" + and "has_greatest_fixpoint f" + and "apx.has_least_fixpoint f" + shows "kappa_apx_meet f" +proof - + have "\w . 
w \ \ f \ w \ \ f \ w \ \ f" + by (meson assms apx_def order.trans kappa_below_nu mu_below_kappa semiring.add_right_mono) + hence "is_apx_meet (\ f) (\ f) (\ f)" + by (simp add: assms is_apx_meet_def kappa_apx_below_mu kappa_apx_below_nu) + thus ?thesis + by (simp add: assms(3) kappa_apx_meet_def apx_meet_char) +qed + +text \Theorem 17.2 implies Theorem 17.7\ + +lemma kappa_apx_meet_apx_meet_below_nu: + "has_greatest_fixpoint f \ kappa_apx_meet f \ apx_meet_below_nu f" + using apx_meet_below_nu_def kappa_apx_meet_def kappa_below_nu by force + +text \Theorem 17.7 implies Theorem 17.3\ + +lemma apx_meet_below_nu_kappa_mu_nu: + assumes "has_least_fixpoint f" + and "has_greatest_fixpoint f" + and "isotone f" + and "apx.isotone f" + and "apx_meet_below_nu f" + shows "kappa_mu_nu f" +proof - + let ?l = "\ f \ n(\ f) * L" + let ?m = "\ f \ \ f" + have 1: "?l \ \ f" + using apx_meet_below_nu_nu_below_mu_nu assms(1,2,5) mu_nu_apx_nu_def nu_below_mu_nu_mu_nu_apx_nu by blast + hence 2: "?m = ?l" + using assms(1,2) mu_nu_apx_meet_def mu_nu_apx_nu_def mu_nu_apx_nu_mu_nu_apx_meet by blast + have "\ f \ f(?l)" + by (metis assms(1,3) isotone_def mu_unfold sup_ge1) + hence 3: "?l \ f(?l) \ n(?l) * L" + using assms(1,2) semiring.add_right_mono n_l_nu by auto + have "f(?l) \ f(\ f)" + using assms(1-3) l_below_nu isotone_def by blast + also have "... 
\ ?l \ n(?l) * top" + using 1 by (metis assms(2) apx_def nu_unfold) + finally have 4: "?l \ f(?l)" + using 3 apx_def by blast + have 5: "f(?l) \ \ f" + by (metis assms(1,2,4) apx.isotone_def is_least_fixpoint_def least_fixpoint l_apx_mu) + have "f(?l) \ \ f" + using 1 by (metis assms(2,4) apx.isotone_def greatest_fixpoint is_greatest_fixpoint_def) + hence "f(?l) \ ?l" + using 2 5 apx_meet_below_nu_def assms(5) apx_greatest_lower_bound by fastforce + hence "f(?l) = ?l" + using 4 by (simp add: apx.order.antisym) + thus ?thesis + using 1 by (smt (verit, del_insts) assms(1,2) sup_left_isotone apx_antisymmetric apx_def apx.least_fixpoint_char greatest_fixpoint apx.is_least_fixpoint_def is_greatest_fixpoint_def is_least_fixpoint_def least_fixpoint n_l_nu order_trans kappa_mu_nu_def) +qed + +text \Theorem 17.3 implies Theorem 17.1\ + +lemma kappa_mu_nu_has_apx_least_fixpoint: + "kappa_mu_nu f \ apx.has_least_fixpoint f" + using kappa_mu_nu_def by auto + +text \Theorem 17.4 implies Theorem 17.3\ + +lemma nu_below_mu_nu_kappa_mu_nu: + "has_least_fixpoint f \ has_greatest_fixpoint f \ isotone f \ apx.isotone f \ nu_below_mu_nu f \ kappa_mu_nu f" + using apx_meet_below_nu_kappa_mu_nu mu_nu_apx_meet_apx_meet_below_nu mu_nu_apx_nu_mu_nu_apx_meet nu_below_mu_nu_mu_nu_apx_nu by blast + +text \Theorem 17.3 implies Theorem 17.4\ + +lemma kappa_mu_nu_nu_below_mu_nu: + "has_least_fixpoint f \ has_greatest_fixpoint f \ kappa_mu_nu f \ nu_below_mu_nu f" + by (simp add: apx_meet_below_nu_nu_below_mu_nu has_apx_least_fixpoint_kappa_apx_meet kappa_apx_meet_apx_meet_below_nu kappa_mu_nu_def) + +definition kappa_mu_nu_ni :: "('a \ 'a) \ bool" + where "kappa_mu_nu_ni f \ apx.has_least_fixpoint f \ \ f = \ f \ ni(\ f)" + +lemma kappa_mu_nu_ni_kappa_mu_nu: + "kappa_mu_nu_ni f \ kappa_mu_nu f" + by (simp add: kappa_mu_nu_def kappa_mu_nu_ni_def ni_def) + +lemma nu_below_mu_nu_kappa_mu_nu_ni: + "has_least_fixpoint f \ has_greatest_fixpoint f \ isotone f \ apx.isotone f \ nu_below_mu_nu f \ 
kappa_mu_nu_ni f" + by (simp add: kappa_mu_nu_ni_kappa_mu_nu nu_below_mu_nu_kappa_mu_nu) + +lemma kappa_mu_nu_ni_nu_below_mu_nu: + "has_least_fixpoint f \ has_greatest_fixpoint f \ kappa_mu_nu_ni f \ nu_below_mu_nu f" + using kappa_mu_nu_ni_kappa_mu_nu kappa_mu_nu_nu_below_mu_nu by blast + +end + +class itering_apx = n_itering + semiring_apx +begin + +text \Theorem 16.3\ + +lemma circ_apx_isotone: + assumes "x \ y" + shows "x\<^sup>\ \ y\<^sup>\" +proof - + have 1: "x \ y \ n(x) * L \ y \ x \ n(x) * top" + using assms apx_def by auto + hence "y\<^sup>\ \ x\<^sup>\ \ x\<^sup>\ * n(x) * top" + by (metis circ_isotone circ_left_top circ_unfold_sum mult_assoc) + also have "... \ x\<^sup>\ \ n(x\<^sup>\ * x) * top" + by (smt le_sup_iff n_isotone n_top_split order_refl order_trans right_plus_below_circ zero_right_mult_decreasing) + also have "... \ x\<^sup>\ \ n(x\<^sup>\) * top" + by (simp add: circ_plus_same n_circ_left_unfold) + finally have 2: "y\<^sup>\ \ x\<^sup>\ \ n(x\<^sup>\) * top" + . + have "x\<^sup>\ \ y\<^sup>\ \ y\<^sup>\ * n(x) * L" + using 1 by (metis L_left_zero circ_isotone circ_unfold_sum mult_assoc) + also have "... = y\<^sup>\ \ n(y\<^sup>\ * x) * L" + by (metis sup_assoc sup_bot_right mult_assoc mult_zero_sup_circ_2 n_L_split n_mult_right_bot) + also have "... \ y\<^sup>\ \ n(x\<^sup>\ * x) * L \ n(x\<^sup>\) * n(top * x) * L" + using 2 by (metis sup_assoc sup_right_isotone mult_assoc mult_left_isotone mult_right_dist_sup n_dist_sup n_export n_isotone) + finally have "x\<^sup>\ \ y\<^sup>\ \ n(x\<^sup>\) * L" + by (metis sup_assoc circ_plus_same n_sup_left_absorb_mult n_circ_left_unfold n_dist_sup n_export ni_def ni_dist_sup) + thus ?thesis + using 2 by (simp add: apx_def) +qed + +end + +class omega_algebra_apx = n_omega_algebra_2 + semiring_apx + +sublocale omega_algebra_apx < star: itering_apx where circ = star .. + +sublocale omega_algebra_apx < nL_omega: itering_apx where circ = Omega .. 
+ +context omega_algebra_apx +begin + +text \Theorem 16.4\ + +lemma omega_apx_isotone: + assumes "x \ y" + shows "x\<^sup>\ \ y\<^sup>\" +proof - + have 1: "x \ y \ n(x) * L \ y \ x \ n(x) * top" + using assms apx_def by auto + hence "y\<^sup>\ \ x\<^sup>\ * n(x) * top * (x\<^sup>\ * n(x) * top)\<^sup>\ \ x\<^sup>\ \ x\<^sup>\ * n(x) * top * (x\<^sup>\ * n(x) * top)\<^sup>\ * x\<^sup>\" + by (smt sup_assoc mult_assoc mult_left_one mult_right_dist_sup omega_decompose omega_isotone omega_unfold star_left_unfold_equal) + also have "... \ x\<^sup>\ * n(x) * top \ x\<^sup>\ \ x\<^sup>\ * n(x) * top * (x\<^sup>\ * n(x) * top)\<^sup>\ * x\<^sup>\" + using mult_top_omega omega_unfold sup_left_isotone by auto + also have "... = x\<^sup>\ * n(x) * top \ x\<^sup>\" + by (smt (z3) mult_left_dist_sup sup_assoc sup_commute sup_left_top mult_assoc) + also have "... \ n(x\<^sup>\ * x) * top \ x\<^sup>\ * bot \ x\<^sup>\" + using n_top_split semiring.add_left_mono sup_commute by fastforce + also have "... \ n(x\<^sup>\ * x) * top \ x\<^sup>\" + using semiring.add_right_mono star_bot_below_omega sup_commute by fastforce + finally have 2: "y\<^sup>\ \ x\<^sup>\ \ n(x\<^sup>\) * top" + by (metis sup_commute sup_right_isotone mult_left_isotone n_star_below_n_omega n_star_left_unfold order_trans star.circ_plus_same) + have "x\<^sup>\ \ (y \ n(x) * L)\<^sup>\" + using 1 by (simp add: omega_isotone) + also have "... = y\<^sup>\ * n(x) * L * (y\<^sup>\ * n(x) * L)\<^sup>\ \ y\<^sup>\ \ y\<^sup>\ * n(x) * L * (y\<^sup>\ * n(x) * L)\<^sup>\ * y\<^sup>\" + by (smt sup_assoc mult_assoc mult_left_one mult_right_dist_sup omega_decompose omega_isotone omega_unfold star_left_unfold_equal) + also have "... = y\<^sup>\ * n(x) * L \ y\<^sup>\" + using L_left_zero sup_assoc sup_monoid.add_commute mult_assoc by force + also have "... \ y\<^sup>\ \ y\<^sup>\ * bot \ n(y\<^sup>\ * x) * L" + by (simp add: n_L_split sup_assoc sup_commute) + also have "... 
\ y\<^sup>\ \ n(x\<^sup>\ * x) * L \ n(x\<^sup>\) * n(top * x) * L" + using 1 by (metis sup_right_isotone sup_bot_right apx_def mult_assoc mult_left_dist_sup mult_left_isotone mult_right_dist_sup n_dist_sup n_export n_isotone star.circ_apx_isotone star_mult_omega sup_assoc) + finally have "x\<^sup>\ \ y\<^sup>\ \ n(x\<^sup>\) * L" + by (smt (verit, best) le_supE sup.orderE sup_commute sup_assoc sup_isotone mult_right_dist_sup n_sup_left_absorb_mult n_star_left_unfold ni_def ni_star_below_ni_omega order_refl order_trans star.circ_plus_same) + thus ?thesis + using 2 by (simp add: apx_def) +qed + +end + +class omega_algebra_apx_extra = omega_algebra_apx + + assumes n_split_omega: "x\<^sup>\ \ x\<^sup>\ * bot \ n(x\<^sup>\) * top" +begin + +lemma omega_n_star: + "x\<^sup>\ \ n(x\<^sup>\) * top \ x\<^sup>\ * n(x\<^sup>\) * top" +proof - + have 1: "n(x\<^sup>\) * top \ n(x\<^sup>\) * top" + by (simp add: mult_left_isotone n_star_below_n_omega) + have "... \ x\<^sup>\ * n(x\<^sup>\) * top" + by (simp add: star_n_omega_top) + thus ?thesis + using 1 by (metis le_sup_iff n_split_omega order_trans star_n_omega_top) +qed + +lemma n_omega_zero: + "n(x\<^sup>\) = bot \ n(x\<^sup>\) = bot \ x\<^sup>\ \ x\<^sup>\ * bot" + by (metis sup_bot_right order.eq_iff mult_left_zero n_mult_bot n_split_omega star_bot_below_omega) + +lemma n_split_nu_mu: + "y\<^sup>\ \ y\<^sup>\ * z \ y\<^sup>\ * z \ n(y\<^sup>\ \ y\<^sup>\ * z) * top" +proof - + have "y\<^sup>\ \ y\<^sup>\ * bot \ n(y\<^sup>\ \ y\<^sup>\ * z) * top" + by (smt sup_ge1 sup_right_isotone mult_left_isotone n_isotone n_split_omega order_trans) + also have "... \ y\<^sup>\ * z \ n(y\<^sup>\ \ y\<^sup>\ * z) * top" + using nL_star.star_zero_below_circ_mult sup_left_isotone by auto + finally show ?thesis + by simp +qed + +lemma loop_exists: + "\ (\x . y * x \ z) \ \ (\x . y * x \ z) \ n(\ (\x . y * x \ z)) * top" + by (metis n_split_nu_mu omega_loop_nu star_loop_mu) + +lemma loop_apx_least_fixpoint: + "apx.is_least_fixpoint (\x . 
y * x \ z) (\ (\x . y * x \ z) \ n(\ (\x . y * x \ z)) * L)" + using apx.least_fixpoint_char affine_apx_isotone affine_has_greatest_fixpoint affine_has_least_fixpoint affine_isotone kappa_mu_nu_def nu_below_mu_nu_def nu_below_mu_nu_kappa_mu_nu loop_exists by auto + +lemma loop_has_apx_least_fixpoint: + "apx.has_least_fixpoint (\x . y * x \ z)" + using affine_apx_isotone affine_has_greatest_fixpoint affine_has_least_fixpoint affine_isotone kappa_mu_nu_def nu_below_mu_nu_def nu_below_mu_nu_kappa_mu_nu loop_exists by auto + +lemma loop_semantics: + "\ (\x . y * x \ z) = \ (\x . y * x \ z) \ n(\ (\x . y * x \ z)) * L" + using apx.least_fixpoint_char loop_apx_least_fixpoint by auto + +lemma loop_apx_least_fixpoint_ni: + "apx.is_least_fixpoint (\x . y * x \ z) (\ (\x . y * x \ z) \ ni(\ (\x . y * x \ z)))" + using ni_def loop_apx_least_fixpoint by auto + +lemma loop_semantics_ni: + "\ (\x . y * x \ z) = \ (\x . y * x \ z) \ ni(\ (\x . y * x \ z))" + using ni_def loop_semantics by auto + +text \Theorem 18\ + +lemma loop_semantics_kappa_mu_nu: + "\ (\x . y * x \ z) = n(y\<^sup>\) * L \ y\<^sup>\ * z" +proof - + have "\ (\x . 
y * x \<squnion> z) = y\<^sup>\<star> * z \<squnion> n(y\<^sup>\<omega> \<squnion> y\<^sup>\<star> * z) * L"
+    by (metis loop_semantics omega_loop_nu star_loop_mu)
+  thus ?thesis
+    by (smt sup_assoc sup_commute le_iff_sup mult_right_dist_sup n_L_decreasing n_dist_sup)
+qed
+
+end
+
+class omega_algebra_apx_extra_2 = omega_algebra_apx +
+  assumes omega_n_star: "x\<^sup>\<omega> \<le> x\<^sup>\<star> * n(x\<^sup>\<omega>) * top"
+begin
+
+subclass omega_algebra_apx_extra
+  apply unfold_locales
+  using omega_n_star star_n_omega_top by auto
+
+end
+
+end
+
diff --git a/thys/Correctness_Algebras/Relative_Domain.thy b/thys/Correctness_Algebras/Relative_Domain.thy
new file mode 100644
--- /dev/null
+++ b/thys/Correctness_Algebras/Relative_Domain.thy
@@ -0,0 +1,655 @@
+(* Title:      Relative Domain
+   Author:     Walter Guttmann
+   Maintainer: Walter Guttmann
+*)
+
+section \<open>Relative Domain\<close>
+
+theory Relative_Domain
+
+imports Tests
+
+begin
+
+class Z =
+  fixes Z :: "'a"
+
+class relative_domain_semiring = idempotent_left_semiring + dom + Z +
+  assumes d_restrict  : "x \<le> d(x) * x \<squnion> Z"
+  assumes d_mult_d    : "d(x * y) = d(x * d(y))"
+  assumes d_below_one : "d(x) \<le> 1"
+  assumes d_Z         : "d(Z) = bot"
+  assumes d_dist_sup  : "d(x \<squnion> y) = d(x) \<squnion> d(y)"
+  assumes d_export    : "d(d(x) * y) = d(x) * d(y)"
+begin
+
+lemma d_plus_one:
+  "d(x) \<squnion> 1 = 1"
+  by (simp add: d_below_one sup_absorb2)
+
+text \<open>Theorem 44.2\<close>
+
+lemma d_zero:
+  "d(bot) = bot"
+  by (metis d_Z d_export mult_left_zero)
+
+text \<open>Theorem 44.3\<close>
+
+lemma d_involutive:
+  "d(d(x)) = d(x)"
+  by (metis d_mult_d mult_left_one)
+
+lemma d_fixpoint:
+  "(\<exists>y . x = d(y)) \<longleftrightarrow> x = d(x)"
+  using d_involutive by auto
+
+lemma d_type:
+  "\<forall>P . (\<forall>x . x = d(x) \<longrightarrow> P(x)) \<longleftrightarrow> (\<forall>x .
P(d(x)))"
+  by (metis d_involutive)
+
+text \<open>Theorem 44.4\<close>
+
+lemma d_mult_sub:
+  "d(x * y) \<le> d(x)"
+  by (smt (verit, ccfv_threshold) d_plus_one d_dist_sup d_mult_d le_iff_sup mult.right_neutral mult_left_sub_dist_sup_right sup_commute)
+
+lemma d_sub_one:
+  "x \<le> 1 \<longrightarrow> x \<le> d(x) \<squnion> Z"
+  by (metis sup_left_isotone d_restrict mult_right_isotone mult_1_right order_trans)
+
+lemma d_one:
+  "d(1) \<squnion> Z = 1 \<squnion> Z"
+  by (meson d_sub_one d_below_one order.trans preorder_one_closed sup.cobounded1 sup_same_context)
+
+text \<open>Theorem 44.8\<close>
+
+lemma d_strict:
+  "d(x) = bot \<longleftrightarrow> x \<le> Z"
+  by (metis sup_commute sup_bot_right d_Z d_dist_sup d_restrict le_iff_sup mult_left_zero)
+
+text \<open>Theorem 44.1\<close>
+
+lemma d_isotone:
+  "x \<le> y \<longrightarrow> d(x) \<le> d(y)"
+  using d_dist_sup sup_right_divisibility by force
+
+lemma d_plus_left_upper_bound:
+  "d(x) \<le> d(x \<squnion> y)"
+  by (simp add: d_isotone)
+
+lemma d_idempotent:
+  "d(x) * d(x) = d(x)"
+  by (smt (verit, ccfv_threshold) d_involutive d_mult_sub d_Z d_dist_sup d_export d_restrict le_iff_sup sup_bot_left sup_commute)
+
+text \<open>Theorem 44.12\<close>
+
+lemma d_least_left_preserver:
+  "x \<le> d(y) * x \<squnion> Z \<longleftrightarrow> d(x) \<le> d(y)"
+  apply (rule iffI)
+  apply (smt (z3) comm_monoid.comm_neutral d_involutive d_mult_sub d_plus_left_upper_bound d_Z d_dist_sup order_trans sup_absorb2 sup_bot.comm_monoid_axioms)
+  by (smt (verit, del_insts) d_restrict mult_right_dist_sup sup.cobounded1 sup.orderE sup_assoc sup_commute)
+
+text \<open>Theorem 44.9\<close>
+
+lemma d_weak_locality:
+  "x * y \<le> Z \<longleftrightarrow> x * d(y) \<le> Z"
+  by (metis d_mult_d d_strict)
+
+lemma d_sup_closed:
+  "d(d(x) \<squnion> d(y)) = d(x) \<squnion> d(y)"
+  by (simp add: d_involutive d_dist_sup)
+
+lemma d_mult_closed:
+  "d(d(x) * d(y)) = d(x) * d(y)"
+  using d_export d_mult_d by auto
+
+lemma d_mult_left_lower_bound:
+  "d(x) * d(y) \<le> d(x)"
+  by (metis d_export d_involutive d_mult_sub)
+
+lemma d_mult_left_absorb_sup:
+  "d(x) * (d(x) \<squnion> d(y)) = d(x)"
+  by (smt d_sup_closed d_export d_idempotent d_involutive d_mult_sub order.eq_iff mult_left_sub_dist_sup_left)
+
+lemma d_sup_left_absorb_mult:
+  "d(x) \<squnion> d(x) * d(y) = d(x)"
+  using d_mult_left_lower_bound sup.absorb_iff1 by auto
+
+lemma d_commutative:
+  "d(x) * d(y) = d(y) * d(x)"
+  by (metis sup_commute order.antisym d_sup_left_absorb_mult d_below_one d_export d_mult_left_absorb_sup mult_assoc mult_left_isotone mult_left_one)
+
+lemma d_mult_greatest_lower_bound:
+  "d(x) \<le> d(y) * d(z) \<longleftrightarrow> d(x) \<le> d(y) \<and> d(x) \<le> d(z)"
+  by (metis d_commutative d_idempotent d_mult_left_lower_bound mult_isotone order_trans)
+
+lemma d_sup_left_dist_mult:
+  "d(x) \<squnion> d(y) * d(z) = (d(x) \<squnion> d(y)) * (d(x) \<squnion> d(z))"
+  by (metis sup_assoc d_commutative d_dist_sup d_idempotent d_mult_left_absorb_sup mult_right_dist_sup)
+
+lemma d_order:
+  "d(x) \<le> d(y) \<longleftrightarrow> d(x) = d(x) * d(y)"
+  by (metis d_mult_greatest_lower_bound d_mult_left_absorb_sup le_iff_sup order_refl)
+
+text \<open>Theorem 44.6\<close>
+
+lemma Z_mult_decreasing:
+  "Z * x \<le> Z"
+  by (metis d_mult_sub bot.extremum d_strict order.eq_iff)
+
+text \<open>Theorem 44.5\<close>
+
+lemma d_below_d_one:
+  "d(x) \<le> d(1)"
+  by (metis d_mult_sub mult_left_one)
+
+text \<open>Theorem 44.7\<close>
+
+lemma d_relative_Z:
+  "d(x) * x \<squnion> Z = x \<squnion> Z"
+  by (metis sup_ge1 sup_same_context d_below_one d_restrict mult_isotone mult_left_one)
+
+lemma Z_left_zero_above_one:
+  "1 \<le> x \<longrightarrow> Z * x = Z"
+  by (metis Z_mult_decreasing order.eq_iff mult_right_isotone mult_1_right)
+
+text \<open>Theorem 44.11\<close>
+
+lemma kat_4:
+  "d(x) * y = d(x) * y * d(z) \<longrightarrow> d(x) * y \<le> y * d(z)"
+  by (metis d_below_one mult_left_isotone mult_left_one)
+
+lemma kat_4_equiv:
+  "d(x) * y = d(x) * y * d(z) \<longleftrightarrow> d(x) * y \<le> y * d(z)"
+  apply (rule iffI)
+  apply (simp add: kat_4)
+  apply (rule order.antisym)
+  apply (metis d_idempotent mult_assoc mult_right_isotone)
+  by (metis d_below_one mult_right_isotone mult_1_right)
+
+lemma kat_4_equiv_opp:
+  "y * d(x) = d(z) * y * d(x) \<longleftrightarrow> y * d(x) \<le> d(z) * y"
+  apply (rule iffI)
+  using d_below_one mult_right_isotone apply fastforce
+  apply (rule order.antisym)
+  apply (metis d_idempotent mult_assoc mult_left_isotone)
+  by
(metis d_below_one mult_left_isotone mult_left_one) + +text \Theorem 44.10\ + +lemma d_restrict_iff_1: + "d(x) * y \ z \ d(x) * y \ d(x) * z" + by (smt (verit, del_insts) d_below_one d_idempotent mult_assoc mult_left_isotone mult_left_one mult_right_isotone order_trans) + +(* independence of axioms, checked in relative_domain_semiring without the respective axiom: +lemma d_restrict : "x \ d(x) * x \ Z" nitpick [expect=genuine,card=2] oops +lemma d_mult_d : "d(x * y) = d(x * d(y))" nitpick [expect=genuine,card=3] oops +lemma d_below_one: "d(x) \ 1" nitpick [expect=genuine,card=3] oops +lemma d_Z : "d(Z) = bot" nitpick [expect=genuine,card=2] oops +lemma d_dist_sup : "d(x \ y) = d(x) \ d(y)" nitpick [expect=genuine,card=3] oops +lemma d_export : "d(d(x) * y) = d(x) * d(y)" nitpick [expect=genuine,card=5] oops +*) + +end + +typedef (overloaded) 'a dImage = "{ x::'a::relative_domain_semiring . (\y::'a . x = d(y)) }" + by auto + +lemma simp_dImage[simp]: + "\y . Rep_dImage x = d(y)" + using Rep_dImage by simp + +setup_lifting type_definition_dImage + +text \Theorem 44\ + +instantiation dImage :: (relative_domain_semiring) bounded_distrib_lattice +begin + +lift_definition sup_dImage :: "'a dImage \ 'a dImage \ 'a dImage" is sup + by (metis d_dist_sup) + +lift_definition inf_dImage :: "'a dImage \ 'a dImage \ 'a dImage" is times + by (metis d_export) + +lift_definition bot_dImage :: "'a dImage" is bot + by (metis d_zero) + +lift_definition top_dImage :: "'a dImage" is "d(1)" + by auto + +lift_definition less_eq_dImage :: "'a dImage \ 'a dImage \ bool" is less_eq . + +lift_definition less_dImage :: "'a dImage \ 'a dImage \ bool" is less . 
+ +instance + apply intro_classes + apply (simp add: less_dImage.rep_eq less_eq_dImage.rep_eq less_le_not_le) + apply (simp add: less_eq_dImage.rep_eq) + using less_eq_dImage.rep_eq apply simp + apply (simp add: Rep_dImage_inject less_eq_dImage.rep_eq) + apply (metis (mono_tags) d_involutive d_mult_sub inf_dImage.rep_eq less_eq_dImage.rep_eq simp_dImage) + apply (metis (mono_tags) d_mult_greatest_lower_bound inf_dImage.rep_eq less_eq_dImage.rep_eq order_refl simp_dImage) + apply (metis (mono_tags) d_mult_greatest_lower_bound inf_dImage.rep_eq less_eq_dImage.rep_eq simp_dImage) + apply (simp add: less_eq_dImage.rep_eq sup_dImage.rep_eq) + apply (simp add: less_eq_dImage.rep_eq sup_dImage.rep_eq) + apply (simp add: less_eq_dImage.rep_eq sup_dImage.rep_eq) + apply (simp add: bot_dImage.rep_eq less_eq_dImage.rep_eq) + apply (smt (z3) d_below_d_one less_eq_dImage.rep_eq simp_dImage top_dImage.rep_eq) + by (smt (z3) inf_dImage.rep_eq sup_dImage.rep_eq simp_dImage Rep_dImage_inject d_sup_left_dist_mult) + +end + +class bounded_relative_domain_semiring = relative_domain_semiring + bounded_idempotent_left_semiring +begin + +lemma Z_top: + "Z * top = Z" + by (simp add: Z_left_zero_above_one) + +lemma d_restrict_top: + "x \ d(x) * top \ Z" + by (metis sup_left_isotone d_restrict mult_right_isotone order_trans top_greatest) + +(* +lemma d_one_one: "d(1) = 1" nitpick [expect=genuine,card=2] oops +*) + +end + +class relative_domain_semiring_split = relative_domain_semiring + + assumes split_Z: "x * (y \ Z) \ x * y \ Z" +begin + +lemma d_restrict_iff: + "(x \ y \ Z) \ (x \ d(x) * y \ Z)" +proof - + have "x \ y \ Z \ x \ d(x) * (y \ Z) \ Z" + by (smt sup_left_isotone d_restrict le_iff_sup mult_left_sub_dist_sup_left order_trans) + hence "x \ y \ Z \ x \ d(x) * y \ Z" + by (meson le_supI order_lesseq_imp split_Z sup.cobounded2) + thus ?thesis + by (meson d_restrict_iff_1 le_supI mult_left_sub_dist_sup_left order_lesseq_imp sup.cobounded2) +qed + +end + +class 
relative_antidomain_semiring = idempotent_left_semiring + dom + Z + uminus + + assumes a_restrict : "-x * x \ Z" + assumes a_mult_d : "-(x * y) = -(x * --y)" + assumes a_complement: "-x * --x = bot" + assumes a_Z : "-Z = 1" + assumes a_export : "-(--x * y) = -x \ -y" + assumes a_dist_sup : "-(x \ y) = -x * -y" + assumes d_def : "d(x) = --x" +begin + +notation + uminus ("a") + +text \Theorem 45.7\ + +lemma a_complement_one: + "--x \ -x = 1" + by (metis a_Z a_complement a_export a_mult_d mult_left_one) + +text \Theorem 45.5 and Theorem 45.6\ + +lemma a_d_closed: + "d(a(x)) = a(x)" + by (metis a_mult_d d_def mult_left_one) + +lemma a_below_one: + "a(x) \ 1" + using a_complement_one sup_right_divisibility by auto + +lemma a_export_a: + "a(a(x) * y) = d(x) \ a(y)" + by (metis a_d_closed a_export d_def) + +lemma a_sup_absorb: + "(x \ a(y)) * a(a(y)) = x * a(a(y))" + by (simp add: a_complement mult_right_dist_sup) + +text \Theorem 45.10\ + +lemma a_greatest_left_absorber: + "a(x) * y \ Z \ a(x) \ a(y)" + apply (rule iffI) + apply (smt a_Z a_sup_absorb a_dist_sup a_export_a a_mult_d sup_commute d_def le_iff_sup mult_left_one) + by (meson a_restrict mult_isotone order.refl order_trans) + +lemma a_plus_left_lower_bound: + "a(x \ y) \ a(x)" + by (metis a_greatest_left_absorber a_restrict sup_commute mult_left_sub_dist_sup_right order_trans) + +text \Theorem 45.2\ + +subclass relative_domain_semiring + apply unfold_locales + apply (smt (verit) a_Z a_complement_one a_restrict sup_commute sup_ge1 case_split_left d_def order_trans) + using a_mult_d d_def apply force + apply (simp add: a_below_one d_def) + apply (metis a_Z a_complement d_def mult_left_one) + apply (simp add: a_export_a a_dist_sup d_def) + using a_dist_sup a_export d_def by auto + +text \Theorem 45.1\ + +subclass tests + apply unfold_locales + apply (simp add: mult_assoc) + apply (metis a_dist_sup sup_commute) + apply (smt a_complement a_d_closed a_export_a sup_bot_right d_sup_left_dist_mult) + apply (metis 
a_d_closed a_dist_sup d_def) + apply (rule the_equality[THEN sym]) + apply (simp add: a_complement) + apply (simp add: a_complement) + using a_d_closed a_Z d_Z d_def apply force + using a_export a_mult_d apply fastforce + apply (metis a_d_closed d_order) + by (simp add: less_le_not_le) + +lemma a_plus_mult_d: + "-(x * y) \ -(x * --y) = -(x * --y)" + using a_mult_d by auto + +lemma a_mult_d_2: + "a(x * y) = a(x * d(y))" + using a_mult_d d_def by auto + +lemma a_3: + "a(x) * a(y) * d(x \ y) = bot" + by (metis a_complement a_dist_sup d_def) + +lemma a_fixpoint: + "\x . (a(x) = x \ (\y . y = bot))" + by (metis a_complement_one mult_1_left mult_left_zero order.refl sup.order_iff tests_dual.one_def) + +text \Theorem 45.9\ + +lemma a_strict: + "a(x) = 1 \ x \ Z" + by (metis a_Z d_def d_strict order.refl tests_dual.sba_dual.double_negation) + +lemma d_complement_zero: + "d(x) * a(x) = bot" + by (simp add: d_def tests_dual.sub_commutative) + +lemma a_complement_zero: + "a(x) * d(x) = bot" + by (simp add: d_def) + +lemma a_shunting_zero: + "a(x) * d(y) = bot \ a(x) \ a(y)" + by (simp add: d_def tests_dual.sba_dual.less_eq_inf_bot) + +lemma a_antitone: + "x \ y \ a(y) \ a(x)" + using a_plus_left_lower_bound sup_commute sup_right_divisibility by fastforce + +lemma a_mult_deMorgan: + "a(a(x) * a(y)) = d(x \ y)" + by (simp add: a_dist_sup d_def) + +lemma a_mult_deMorgan_1: + "a(a(x) * a(y)) = d(x) \ d(y)" + by (simp add: a_mult_deMorgan d_dist_sup) + +lemma a_mult_deMorgan_2: + "a(d(x) * d(y)) = a(x) \ a(y)" + using a_export d_def by auto + +lemma a_plus_deMorgan: + "a(a(x) \ a(y)) = d(x) * d(y)" + by (simp add: a_dist_sup d_def) + +lemma a_plus_deMorgan_1: + "a(d(x) \ d(y)) = a(x) * a(y)" + by (simp add: a_dist_sup d_def) + +text \Theorem 45.8\ + +lemma a_mult_left_upper_bound: + "a(x) \ a(x * y)" + using a_shunting_zero d_def d_mult_sub tests_dual.less_eq_sup_top by auto + +text \Theorem 45.6\ + +lemma d_a_closed: + "a(d(x)) = a(x)" + by (simp add: d_def) + +lemma a_export_d: 
+ "a(d(x) * y) = a(x) \ a(y)" + by (simp add: a_export d_def) + +lemma a_7: + "d(x) * a(d(y) \ d(z)) = d(x) * a(y) * a(z)" + by (simp add: a_plus_deMorgan_1 mult_assoc) + +lemma d_a_shunting: + "d(x) * a(y) \ d(z) \ d(x) \ d(z) \ d(y)" + by (simp add: d_def tests_dual.sba_dual.shunting_right) + +lemma d_d_shunting: + "d(x) * d(y) \ d(z) \ d(x) \ d(z) \ a(y)" + by (simp add: d_def tests_dual.sba_dual.shunting_right) + +lemma d_cancellation_1: + "d(x) \ d(y) \ (d(x) * a(y))" + by (smt (z3) a_d_closed d_a_shunting d_export eq_refl sup_commute) + +lemma d_cancellation_2: + "(d(z) \ d(y)) * a(y) \ d(z)" + by (metis d_a_shunting d_dist_sup eq_refl) + +lemma a_sup_closed: + "d(a(x) \ a(y)) = a(x) \ a(y)" + using a_mult_deMorgan tests_dual.sub_inf_def by auto + +lemma a_mult_closed: + "d(a(x) * a(y)) = a(x) * a(y)" + using d_def tests_dual.sub_sup_closed by auto + +lemma d_a_shunting_zero: + "d(x) * a(y) = bot \ d(x) \ d(y)" + using a_shunting_zero d_def by force + +lemma d_d_shunting_zero: + "d(x) * d(y) = bot \ d(x) \ a(y)" + using d_a_shunting_zero d_def by auto + +lemma d_compl_intro: + "d(x) \ d(y) = d(x) \ a(x) * d(y)" + by (simp add: d_def tests_dual.sba_dual.sup_complement_intro) + +lemma a_compl_intro: + "a(x) \ a(y) = a(x) \ d(x) * a(y)" + by (simp add: d_def tests_dual.sba_dual.sup_complement_intro) + +lemma kat_2: + "y * a(z) \ a(x) * y \ d(x) * y * a(z) = bot" + by (metis d_complement_zero order.eq_iff mult_assoc mult_left_zero mult_right_isotone bot_least) + +text \Theorem 45.4\ + +lemma kat_2_equiv: + "y * a(z) \ a(x) * y \ d(x) * y * a(z) = bot" + apply (rule iffI) + apply (simp add: kat_2) + by (smt (verit, best) a_Z a_below_one a_complement_one case_split_left d_def mult_assoc mult_right_isotone mult_1_right bot_least) + +lemma kat_3_equiv_opp: + "a(z) * y * d(x) = bot \ y * d(x) = d(z) * y * d(x)" + using kat_2_equiv d_def kat_4_equiv_opp by auto + +text \Theorem 45.4\ + +lemma kat_3_equiv_opp_2: + "d(z) * y * a(x) = bot \ y * a(x) = a(z) * y * a(x)" + 
by (metis a_d_closed kat_3_equiv_opp d_def) + +lemma kat_equiv_6: + "d(x) * y * a(z) = d(x) * y * bot \ d(x) * y * a(z) \ y * bot" + by (metis d_restrict_iff_1 order.eq_iff mult_left_sub_dist_sup_right tests_dual.sba_dual.sup_right_unit mult_assoc) + +lemma d_one_one: + "d(1) = 1" + by (simp add: d_def) + +lemma case_split_left_sup: + "-p * x \ y \ --p * x \ z \ x \ y \ z" + by (smt (z3) a_complement_one case_split_left order_lesseq_imp sup.cobounded2 sup_ge1) + +lemma test_mult_left_sub_dist_shunt: + "-p * (--p * x \ Z) \ Z" + by (simp add: a_greatest_left_absorber a_Z a_dist_sup a_export) + +lemma test_mult_left_dist_shunt: + "-p * (--p * x \ Z) = -p * Z" + by (smt (verit, ccfv_SIG) order.antisym mult_left_sub_dist_sup_right sup.orderE tests_dual.sba_dual.sup_idempotent mult_assoc test_mult_left_sub_dist_shunt tests_dual.sup_absorb) + +(* independence of axioms, checked in relative_antidomain_semiring without the respective axiom: +lemma a_restrict : "-x * x \ Z" nitpick [expect=genuine,card=3] oops +lemma a_mult_d : "-(x * y) = -(x * --y)" nitpick [expect=genuine,card=3] oops +lemma a_complement: "-x * --x = bot" nitpick [expect=genuine,card=2] oops +lemma a_Z : "-Z = 1" nitpick [expect=genuine,card=2] oops +lemma a_export : "-(--x * y) = -x \ -y" nitpick [expect=genuine,card=5] oops +lemma a_dist_sup : "-(x \ y) = -x * -y" nitpick [expect=genuine,card=3] oops +lemma d_def : "d(x) = --x" nitpick [expect=genuine,card=2] oops +*) + +end + +typedef (overloaded) 'a aImage = "{ x::'a::relative_antidomain_semiring . (\y::'a . x = a(y)) }" + by auto + +lemma simp_aImage[simp]: + "\y . 
Rep_aImage x = a(y)" + using Rep_aImage by simp + +setup_lifting type_definition_aImage + +text \Theorem 45.3\ + +instantiation aImage :: (relative_antidomain_semiring) boolean_algebra +begin + +lift_definition sup_aImage :: "'a aImage \ 'a aImage \ 'a aImage" is sup + using tests_dual.sba_dual.sba_dual.inf_closed by auto + +lift_definition inf_aImage :: "'a aImage \ 'a aImage \ 'a aImage" is times + using tests_dual.sba_dual.inf_closed by auto + +lift_definition minus_aImage :: "'a aImage \ 'a aImage \ 'a aImage" is "\x y . x * a(y)" + using tests_dual.sba_dual.inf_closed by blast + +lift_definition uminus_aImage :: "'a aImage \ 'a aImage" is a + by auto + +lift_definition bot_aImage :: "'a aImage" is bot + by (metis tests_dual.sba_dual.sba_dual.complement_bot) + +lift_definition top_aImage :: "'a aImage" is 1 + using a_Z by auto + +lift_definition less_eq_aImage :: "'a aImage \ 'a aImage \ bool" is less_eq . + +lift_definition less_aImage :: "'a aImage \ 'a aImage \ bool" is less . + +instance + apply intro_classes + apply (simp add: less_aImage.rep_eq less_eq_aImage.rep_eq less_le_not_le) + apply (simp add: less_eq_aImage.rep_eq) + using less_eq_aImage.rep_eq apply simp + apply (simp add: Rep_aImage_inject less_eq_aImage.rep_eq) + apply (metis (mono_tags) a_below_one inf_aImage.rep_eq less_eq_aImage.rep_eq mult.right_neutral mult_right_isotone simp_aImage) + apply (metis (mono_tags, lifting) less_eq_aImage.rep_eq a_d_closed a_export bot.extremum_unique inf_aImage.rep_eq kat_equiv_6 mult.assoc mult.left_neutral mult_left_isotone mult_left_zero simp_aImage sup.cobounded1 tests_dual.sba_dual.sba_dual.complement_top) + apply (smt (z3) less_eq_aImage.rep_eq inf_aImage.rep_eq mult_isotone simp_aImage tests_dual.sba_dual.inf_idempotent) + apply (simp add: less_eq_aImage.rep_eq sup_aImage.rep_eq) + apply (simp add: less_eq_aImage.rep_eq sup_aImage.rep_eq) + using less_eq_aImage.rep_eq sup_aImage.rep_eq apply force + apply (simp add: less_eq_aImage.rep_eq 
bot_aImage.rep_eq) + apply (smt (z3) less_eq_aImage.rep_eq a_below_one simp_aImage top_aImage.rep_eq) + apply (metis (mono_tags, lifting) tests_dual.sba_dual.sba_dual.inf_left_dist_sup Rep_aImage_inject inf_aImage.rep_eq sup_aImage.rep_eq simp_aImage) + apply (smt (z3) inf_aImage.rep_eq uminus_aImage.rep_eq Rep_aImage_inject a_complement bot_aImage.rep_eq simp_aImage) + apply (smt (z3) top_aImage.rep_eq Rep_aImage_inject a_complement_one simp_aImage sup_aImage.rep_eq sup_commute uminus_aImage.rep_eq) + by (metis (mono_tags) inf_aImage.rep_eq Rep_aImage_inject minus_aImage.rep_eq uminus_aImage.rep_eq) + +end + +class bounded_relative_antidomain_semiring = relative_antidomain_semiring + bounded_idempotent_left_semiring +begin + +subclass bounded_relative_domain_semiring .. + +lemma a_top: + "a(top) = bot" + by (metis a_plus_left_lower_bound bot_unique sup_right_top tests_dual.sba_dual.complement_top) + +lemma d_top: + "d(top) = 1" + using a_top d_def by auto + +lemma shunting_top_1: + "-p * x \ y \ x \ --p * top \ y" + by (metis sup_commute case_split_left_sup mult_right_isotone top_greatest) + +lemma shunting_Z: + "-p * x \ Z \ x \ --p * top \ Z" + apply (rule iffI) + apply (simp add: shunting_top_1) + by (smt a_top a_Z a_antitone a_dist_sup a_export a_greatest_left_absorber sup_commute sup_bot_right mult_left_one) + +(* +lemma a_left_dist_sup: "-p * (y \ z) = -p * y \ -p * z" nitpick [expect=genuine,card=7] oops +lemma shunting_top: "-p * x \ y \ x \ --p * top \ y" nitpick [expect=genuine,card=7] oops +*) + +end + +class relative_left_zero_antidomain_semiring = relative_antidomain_semiring + idempotent_left_zero_semiring +begin + +lemma kat_3: + "d(x) * y * a(z) = bot \ d(x) * y = d(x) * y * d(z)" + by (metis d_def mult_1_right mult_left_dist_sup sup_monoid.add_0_left tests_dual.inf_complement) + +lemma a_a_below: + "a(a(x)) * y \ y" + using d_def d_restrict_iff_1 by auto + +lemma kat_equiv_5: + "d(x) * y \ y * d(z) \ d(x) * y * a(z) = d(x) * y * bot" +proof + 
assume "d(x) * y \ y * d(z)" + thus "d(x) * y * a(z) = d(x) * y * bot" + by (metis d_complement_zero kat_4_equiv mult_assoc) +next + assume "d(x) * y * a(z) = d(x) * y * bot" + hence "a(a(x)) * y * a(z) \ y * a(a(z))" + by (simp add: a_a_below d_def mult_isotone) + thus "d(x) * y \ y * d(z)" + by (metis a_a_below a_complement_one case_split_right d_def mult_isotone order_refl) +qed + +lemma case_split_right_sup: + "x * -p \ y \ x * --p \ z \ x \ y \ z" + by (smt (verit, ccfv_SIG) a_complement_one order.trans mult_1_right mult_left_dist_sup sup_commute sup_right_isotone) + +end + +class bounded_relative_left_zero_antidomain_semiring = relative_left_zero_antidomain_semiring + bounded_idempotent_left_zero_semiring +begin + +lemma shunting_top: + "-p * x \ y \ x \ --p * top \ y" + apply (rule iffI) + apply (metis sup_commute case_split_left_sup mult_right_isotone top_greatest) + by (metis a_complement sup_bot_left sup_right_divisibility mult_assoc mult_left_dist_sup mult_left_one mult_left_zero mult_right_dist_sup mult_right_isotone order_trans tests_dual.inf_left_unit) + +end + +end + diff --git a/thys/Correctness_Algebras/Relative_Modal.thy b/thys/Correctness_Algebras/Relative_Modal.thy new file mode 100644 --- /dev/null +++ b/thys/Correctness_Algebras/Relative_Modal.thy @@ -0,0 +1,581 @@ +(* Title: Relative Modal Operators + Author: Walter Guttmann + Maintainer: Walter Guttmann +*) + +section \Relative Modal Operators\ + +theory Relative_Modal + +imports Relative_Domain + +begin + +class relative_diamond_semiring = relative_domain_semiring + diamond + + assumes diamond_def: "|x>y = d(x * y)" +begin + +lemma diamond_x_1: + "|x>1 = d(x)" + by (simp add: diamond_def) + +lemma diamond_x_d: + "|x>d(y) = d(x * y)" + using d_mult_d diamond_def by auto + +lemma diamond_x_und: + "|x>d(y) = |x>y" + using diamond_x_d diamond_def by auto + +lemma diamond_d_closed: + "|x>y = d( |x>y)" + by (simp add: d_involutive diamond_def) + +text \Theorem 46.11\ + +lemma diamond_bot_y: + 
"|bot>y = bot" + by (simp add: d_zero diamond_def) + +lemma diamond_1_y: + "|1>y = d(y)" + by (simp add: diamond_def) + +text \Theorem 46.12\ + +lemma diamond_1_d: + "|1>d(y) = d(y)" + by (simp add: diamond_1_y diamond_x_und) + +text \Theorem 46.10\ + +lemma diamond_d_y: + "|d(x)>y = d(x) * d(y)" + by (simp add: d_export diamond_def) + +text \Theorem 46.11\ + +lemma diamond_d_bot: + "|d(x)>bot = bot" + by (metis diamond_bot_y diamond_d_y d_commutative d_zero) + +text \Theorem 46.12\ + +lemma diamond_d_1: + "|d(x)>1 = d(x)" + by (simp add: diamond_x_1 d_involutive) + +lemma diamond_d_d: + "|d(x)>d(y) = d(x) * d(y)" + by (simp add: diamond_d_y diamond_x_und) + +text \Theorem 46.12\ + +lemma diamond_d_d_same: + "|d(x)>d(x) = d(x)" + by (simp add: diamond_d_d d_idempotent) + +text \Theorem 46.2\ + +lemma diamond_left_dist_sup: + "|x \ y>z = |x>z \ |y>z" + by (simp add: d_dist_sup diamond_def mult_right_dist_sup) + +text \Theorem 46.3\ + +lemma diamond_right_sub_dist_sup: + "|x>y \ |x>z \ |x>(y \ z)" + by (metis d_dist_sup diamond_def le_iff_sup mult_left_sub_dist_sup) + +text \Theorem 46.4\ + +lemma diamond_associative: + "|x * y>z = |x>(y * z)" + by (simp add: diamond_def mult_assoc) + +text \Theorem 46.4\ + +lemma diamond_left_mult: + "|x * y>z = |x>|y>z" + using diamond_x_und diamond_def mult_assoc by auto + +lemma diamond_right_mult: + "|x>(y * z) = |x>|y>z" + using diamond_associative diamond_left_mult by auto + +text \Theorem 46.6\ + +lemma diamond_d_export: + "|d(x) * y>z = d(x) * |y>z" + using diamond_d_y diamond_def mult_assoc by auto + +lemma diamond_diamond_export: + "||x>y>z = |x>y * |z>1" + using diamond_d_y diamond_def by force + +text \Theorem 46.1\ + +lemma diamond_left_isotone: + "x \ y \ |x>z \ |y>z" + by (metis diamond_left_dist_sup le_iff_sup) + +text \Theorem 46.1\ + +lemma diamond_right_isotone: + "y \ z \ |x>y \ |x>z" + by (metis diamond_right_sub_dist_sup le_iff_sup le_sup_iff) + +lemma diamond_isotone: + "w \ y \ x \ z \ |w>x \ |y>z" + by 
(meson diamond_left_isotone diamond_right_isotone order_trans) + +lemma diamond_left_upper_bound: + "|x>y \ |x \ z>y" + by (simp add: diamond_left_isotone) + +lemma diamond_right_upper_bound: + "|x>y \ |x>(y \ z)" + by (simp add: diamond_right_isotone) + +lemma diamond_lower_bound_right: + "|x>(d(y) * d(z)) \ |x>d(y)" + by (simp add: diamond_right_isotone d_mult_left_lower_bound) + +lemma diamond_lower_bound_left: + "|x>(d(y) * d(z)) \ |x>d(z)" + using diamond_lower_bound_right d_commutative by force + +text \Theorem 46.5\ + +lemma diamond_right_sub_dist_mult: + "|x>(d(y) * d(z)) \ |x>d(y) * |x>d(z)" + using diamond_lower_bound_left diamond_lower_bound_right d_mult_greatest_lower_bound diamond_def by force + +text \Theorem 46.13\ + +lemma diamond_demodalisation_1: + "d(x) * |y>z \ Z \ d(x) * y * d(z) \ Z" + by (metis d_weak_locality diamond_def mult_assoc) + +text \Theorem 46.14\ + +lemma diamond_demodalisation_3: + "|x>y \ d(z) \ x * d(y) \ d(z) * x \ Z" + apply (rule iffI) + apply (smt (verit) sup_commute sup_right_isotone d_below_one d_restrict diamond_def diamond_x_und mult_left_isotone mult_right_isotone mult_1_right order_trans) + by (smt sup_commute sup_bot_left d_Z d_commutative d_dist_sup d_involutive d_mult_sub d_plus_left_upper_bound diamond_d_y diamond_def diamond_x_und le_iff_sup order_trans) + +text \Theorem 46.6\ + +lemma diamond_d_export_2: + "|d(x) * y>z = d(x) * |d(x) * y>z" + by (metis diamond_d_export diamond_left_mult d_idempotent) + +text \Theorem 46.7\ + +lemma diamond_d_promote: + "|x * d(y)>z = |x * d(y)>(d(y) * z)" + by (metis d_idempotent diamond_def mult_assoc) + +text \Theorem 46.8\ + +lemma diamond_d_import_iff: + "d(x) \ |y>z \ d(x) \ |d(x) * y>z" + by (metis diamond_d_export diamond_d_y d_order diamond_def order.eq_iff) + +text \Theorem 46.9\ + +lemma diamond_d_import_iff_2: + "d(x) * d(y) \ |z>w \ d(x) * d(y) \ |d(y) * z>w" + apply (rule iffI) + apply (metis diamond_associative d_export d_mult_greatest_lower_bound diamond_def 
order.refl) + by (metis diamond_d_y d_mult_greatest_lower_bound diamond_def mult_assoc) + +end + +class relative_box_semiring = relative_diamond_semiring + relative_antidomain_semiring + box + + assumes box_def: "|x]y = a(x * a(y))" +begin + +text \Theorem 47.1\ + +lemma box_diamond: + "|x]y = a( |x>a(y))" + by (simp add: box_def d_a_closed diamond_def) + +text \Theorem 47.2\ + +lemma diamond_box: + "|x>y = a( |x]a(y))" + using box_def d_def d_mult_d diamond_def by auto + +lemma box_x_bot: + "|x]bot = a(x)" + by (metis box_def mult_1_right one_def) + +lemma box_x_1: + "|x]1 = a(x * bot)" + by (simp add: box_def) + +lemma box_x_d: + "|x]d(y) = a(x * a(y))" + by (simp add: box_def d_a_closed) + +lemma box_x_und: + "|x]d(y) = |x]y" + by (simp add: box_diamond d_a_closed) + +lemma box_x_a: + "|x]a(y) = a(x * y)" + using a_mult_d box_def by auto + +text \Theorem 47.15\ + +lemma box_bot_y: + "|bot]y = 1" + using box_def by auto + +lemma box_1_y: + "|1]y = d(y)" + by (simp add: box_def d_def) + +text \Theorem 47.16\ + +lemma box_1_d: + "|1]d(y) = d(y)" + by (simp add: box_1_y box_x_und) + +lemma box_1_a: + "|1]a(y) = a(y)" + by (simp add: box_x_a) + +lemma box_d_y: + "|d(x)]y = a(x) \ d(y)" + using a_export_a box_def d_def by auto + +lemma box_a_y: + "|a(x)]y = d(x) \ d(y)" + by (simp add: a_mult_deMorgan_1 box_def) + +text \Theorem 47.14\ + +lemma box_d_bot: + "|d(x)]bot = a(x)" + by (simp add: box_x_bot d_a_closed) + +lemma box_a_bot: + "|a(x)]bot = d(x)" + by (simp add: box_x_bot d_def) + +text \Theorem 47.15\ + +lemma box_d_1: + "|d(x)]1 = 1" + by (simp add: box_d_y d_one_one) + +lemma box_a_1: + "|a(x)]1 = 1" + by (simp add: box_x_1) + +text \Theorem 47.13\ + +lemma box_d_d: + "|d(x)]d(y) = a(x) \ d(y)" + by (simp add: box_d_y box_x_und) + +lemma box_a_d: + "|a(x)]d(y) = d(x) \ d(y)" + by (simp add: box_a_y box_x_und) + +lemma box_d_a: + "|d(x)]a(y) = a(x) \ a(y)" + by (simp add: box_x_a a_export_d) + +lemma box_a_a: + "|a(x)]a(y) = d(x) \ a(y)" + by (simp add: 
box_a_y a_d_closed) + +text \Theorem 47.15\ + +lemma box_d_d_same: + "|d(x)]d(x) = 1" + using box_x_d d_complement_zero by auto + +lemma box_a_a_same: + "|a(x)]a(x) = 1" + by (simp add: box_def) + +text \Theorem 47.16\ + +lemma box_d_below_box: + "d(x) \ |d(y)]d(x)" + by (simp add: box_d_d) + +lemma box_d_closed: + "|x]y = d( |x]y)" + by (simp add: a_d_closed box_def) + +lemma box_deMorgan_1: + "a( |x]y) = |x>a(y)" + by (simp add: diamond_box box_def) + +lemma box_deMorgan_2: + "a( |x>y) = |x]a(y)" + using box_x_a d_a_closed diamond_def by auto + +text \Theorem 47.5\ + +lemma box_left_dist_sup: + "|x \ y]z = |x]z * |y]z" + by (simp add: a_dist_sup box_def mult_right_dist_sup) + +lemma box_right_dist_sup: + "|x](y \ z) = a(x * a(y) * a(z))" + by (simp add: a_dist_sup box_def mult_assoc) + +lemma box_associative: + "|x * y]z = a(x * y * a(z))" + by (simp add: box_def) + +text \Theorem 47.6\ + +lemma box_left_mult: + "|x * y]z = |x]|y]z" + using box_x_a box_def mult_assoc by force + +lemma box_right_mult: + "|x](y * z) = a(x * a(y * z))" + by (simp add: box_def) + +text \Theorem 47.7\ + +lemma box_right_submult_d_d: + "|x](d(y) * d(z)) \ |x]d(y) * |x]d(z)" + by (smt a_antitone a_dist_sup a_export_d box_diamond d_a_closed diamond_def mult_left_sub_dist_sup) + +lemma box_right_submult_a_d: + "|x](a(y) * d(z)) \ |x]a(y) * |x]d(z)" + by (metis box_right_submult_d_d a_d_closed) + +lemma box_right_submult_d_a: + "|x](d(y) * a(z)) \ |x]d(y) * |x]a(z)" + using box_right_submult_a_d box_x_a d_def tests_dual.sub_commutative by auto + +lemma box_right_submult_a_a: + "|x](a(y) * a(z)) \ |x]a(y) * |x]a(z)" + by (metis box_right_submult_d_d a_d_closed) + +text \Theorem 47.8\ + +lemma box_d_export: + "|d(x) * y]z = a(x) \ |y]z" + by (simp add: a_export_d box_def mult_assoc) + +lemma box_a_export: + "|a(x) * y]z = d(x) \ |y]z" + using box_a_y box_d_closed box_left_mult by auto + +text \Theorem 47.4\ + +lemma box_left_antitone: + "y \ x \ |x]z \ |y]z" + by (metis a_antitone box_def 
mult_left_isotone) + +text \Theorem 47.3\ + +lemma box_right_isotone: + "y \ z \ |x]y \ |x]z" + by (metis a_antitone box_def mult_right_isotone) + +lemma box_antitone_isotone: + "y \ w \ x \ z \ |w]x \ |y]z" + by (meson box_left_antitone box_right_isotone order_trans) + +lemma diamond_1_a: + "|1>a(y) = a(y)" + by (simp add: d_def diamond_1_y) + +lemma diamond_a_y: + "|a(x)>y = a(x) * d(y)" + by (metis a_d_closed diamond_d_y) + +lemma diamond_a_bot: + "|a(x)>bot = bot" + by (simp add: diamond_a_y d_zero) + +lemma diamond_a_1: + "|a(x)>1 = a(x)" + by (simp add: d_def diamond_x_1) + +lemma diamond_a_d: + "|a(x)>d(y) = a(x) * d(y)" + by (simp add: diamond_a_y diamond_x_und) + +lemma diamond_d_a: + "|d(x)>a(y) = d(x) * a(y)" + by (simp add: a_d_closed diamond_d_y) + +lemma diamond_a_a: + "|a(x)>a(y) = a(x) * a(y)" + by (simp add: a_mult_closed diamond_def) + +lemma diamond_a_a_same: + "|a(x)>a(x) = a(x)" + by (simp add: diamond_a_a) + +lemma diamond_a_export: + "|a(x) * y>z = a(x) * |y>z" + using diamond_a_y diamond_associative diamond_def by auto + +lemma a_box_a_a: + "a(p) * |a(p)]a(q) = a(p) * a(q)" + using box_a_a box_a_bot box_x_bot tests_dual.sup_complement_intro by auto + +lemma box_left_lower_bound: + "|x \ y]z \ |x]z" + by (simp add: box_left_antitone) + +lemma box_right_upper_bound: + "|x]y \ |x](y \ z)" + by (simp add: box_right_isotone) + +lemma box_lower_bound_right: + "|x](d(y) * d(z)) \ |x]d(y)" + by (simp add: box_right_isotone d_mult_left_lower_bound) + +lemma box_lower_bound_left: + "|x](d(y) * d(z)) \ |x]d(z)" + by (simp add: box_right_isotone d_restrict_iff_1) + +text \Theorem 47.9\ + +lemma box_d_import: + "d(x) * |y]z = d(x) * |d(x) * y]z" + using a_box_a_a box_left_mult box_def d_def by force + +text \Theorem 47.10\ + +lemma box_d_promote: + "|x * d(y)]z = |x * d(y)](d(y) * z)" + using a_box_a_a box_x_a box_def d_def mult_assoc by auto + +text \Theorem 47.11\ + +lemma box_d_import_iff: + "d(x) \ |y]z \ d(x) \ |d(x) * y]z" + using box_d_export 
box_def d_def tests_dual.shunting by auto + +text \Theorem 47.12\ + +lemma box_d_import_iff_2: + "d(x) * d(y) \ |z]w \ d(x) * d(y) \ |d(y) * z]w" + apply (rule iffI) + using box_d_export le_supI2 apply simp + by (metis box_d_import d_commutative d_restrict_iff_1) + +text \Theorem 47.20\ + +lemma box_demodalisation_2: + "-p \ |y](-q) \ -p * y * --q \ Z" + by (simp add: a_greatest_left_absorber box_def mult_assoc) + +lemma box_right_sub_dist_sup: + "|x]d(y) \ |x]d(z) \ |x](d(y) \ d(z))" + by (simp add: box_right_isotone) + +lemma box_diff_var: + "|x](d(y) \ a(z)) * |x]d(z) \ |x]d(z)" + by (simp add: box_right_dist_sup box_x_d tests_dual.upper_bound_right) + +text \Theorem 47.19\ + +lemma diamond_demodalisation_2: + "|x>y \ d(z) \ a(z) * x * d(y) \ Z" + using a_antitone a_greatest_left_absorber a_mult_d d_def diamond_def mult_assoc by fastforce + +text \Theorem 47.17\ + +lemma box_below_Z: + "( |x]y) * x * a(y) \ Z" + by (simp add: a_restrict box_def mult_assoc) + +text \Theorem 47.18\ + +lemma box_partial_correctness: + "|x]1 = 1 \ x * bot \ Z" + by (simp add: box_x_1 a_strict) + +lemma diamond_split: + "|x>y = d(z) * |x>y \ a(z) * |x>y" + by (metis d_def diamond_def sup_monoid.add_commute tests_dual.sba_dual.sup_cases tests_dual.sub_commutative) + +lemma box_import_shunting: + "-p * -q \ |x](-r) \ -q \ |-p * x](-r)" + by (smt box_demodalisation_2 mult_assoc sub_comm sub_mult_closed) + +(* +lemma box_dist_mult: "|x](d(y) * d(z)) = |x](d(y)) * |x](d(z))" nitpick [expect=genuine,card=6] oops +lemma box_demodalisation_3: "d(x) \ |y]d(z) \ d(x) * y \ y * d(z) \ Z" nitpick [expect=genuine,card=6] oops +lemma fbox_diff: "|x](d(y) \ a(z)) \ |x]y \ a( |x]z)" nitpick [expect=genuine,card=6] oops +lemma diamond_diff: "|x>y * a( |x>z) \ |x>(d(y) * a(z))" nitpick [expect=genuine,card=6] oops +lemma diamond_diff_var: "|x>d(y) \ |x>(d(y) * a(z)) \ |x>d(z)" nitpick [expect=genuine,card=6] oops +*) + +end + +class relative_left_zero_diamond_semiring = relative_diamond_semiring + 
relative_domain_semiring + idempotent_left_zero_semiring +begin + +lemma diamond_right_dist_sup: + "|x>(y \ z) = |x>y \ |x>z" + by (simp add: d_dist_sup diamond_def mult_left_dist_sup) + +end + +class relative_left_zero_box_semiring = relative_box_semiring + relative_left_zero_antidomain_semiring +begin + +subclass relative_left_zero_diamond_semiring .. + +lemma box_right_mult_d_d: + "|x](d(y) * d(z)) = |x]d(y) * |x]d(z)" + using a_dist_sup box_d_a box_def d_def mult_left_dist_sup by auto + +lemma box_right_mult_a_d: + "|x](a(y) * d(z)) = |x]a(y) * |x]d(z)" + by (metis box_right_mult_d_d a_d_closed) + +lemma box_right_mult_d_a: + "|x](d(y) * a(z)) = |x]d(y) * |x]a(z)" + using box_right_mult_a_d box_def box_x_a d_def by auto + +lemma box_right_mult_a_a: + "|x](a(y) * a(z)) = |x]a(y) * |x]a(z)" + using a_dist_sup box_def mult_left_dist_sup tests_dual.sub_sup_demorgan by force + +lemma box_demodalisation_3: + assumes "d(x) \ |y]d(z)" + shows "d(x) * y \ y * d(z) \ Z" +proof - + have "d(x) * y * a(z) \ Z" + using assms a_greatest_left_absorber box_x_d d_def mult_assoc by auto + thus ?thesis + by (simp add: a_a_below case_split_right_sup d_def sup_commute mult_assoc) +qed + +lemma fbox_diff: + "|x](d(y) \ a(z)) \ |x]y \ a( |x]z)" + by (smt (z3) a_compl_intro a_dist_sup a_mult_d a_plus_left_lower_bound sup_commute box_def d_def mult_left_dist_sup tests_dual.sba_dual.shunting) + +lemma diamond_diff_var: + "|x>d(y) \ |x>(d(y) * a(z)) \ |x>d(z)" + by (metis d_cancellation_1 diamond_right_dist_sup diamond_right_isotone sup_commute) + +lemma diamond_diff: + "|x>y * a( |x>z) \ |x>(d(y) * a(z))" + by (metis d_a_shunting d_involutive diamond_def diamond_diff_var diamond_x_und) + +end + +end + diff --git a/thys/Correctness_Algebras/Test_Iterings.thy b/thys/Correctness_Algebras/Test_Iterings.thy new file mode 100644 --- /dev/null +++ b/thys/Correctness_Algebras/Test_Iterings.thy @@ -0,0 +1,397 @@ +(* Title: Test Iterings + Author: Walter Guttmann + Maintainer: Walter Guttmann +*) 
+ +section \Test Iterings\ + +theory Test_Iterings + +imports Stone_Kleene_Relation_Algebras.Iterings Tests + +begin + +class test_itering = itering + tests + while + + assumes while_def: "p \ y = (p * y)\<^sup>\ * -p" +begin + +lemma wnf_lemma_5: + "(-p \ -q) * (-q * x \ --q * y) = -q * x \ --q * -p * y" + by (smt (z3) mult_left_dist_sup sup_commute tests_dual.sba_dual.sub_sup_closed tests_dual.sba_dual.sup_complement_intro tests_dual.sba_dual.sup_idempotent tests_dual.sup_idempotent mult_assoc tests_dual.wnf_lemma_3) + +lemma test_case_split_left_equal: + "-z * x = -z * y \ --z * x = --z * y \ x = y" + by (metis case_split_left_equal tests_dual.inf_complement) + +lemma preserves_equation: + "-y * x \ x * -y \ -y * x = -y * x * -y" + apply (rule iffI) + apply (simp add: test_preserves_equation tests_dual.sub_bot_least) + by (simp add: test_preserves_equation tests_dual.sub_bot_least) + +text \Theorem 5\ + +lemma preserve_test: + "-y * x \ x * -y \ -y * x\<^sup>\ = -y * x\<^sup>\ * -y" + using circ_simulate preserves_equation by blast + +text \Theorem 5\ + +lemma import_test: + "-y * x \ x * -y \ -y * x\<^sup>\ = -y * (-y * x)\<^sup>\" + by (simp add: circ_import tests_dual.sub_bot_least) + +definition ite :: "'a \ 'a \ 'a \ 'a" ("_ \ _ \ _" [58,58,58] 57) + where "x \ p \ y \ p * x \ -p * y" + +definition it :: "'a \ 'a \ 'a" ("_ \ _" [58,58] 57) + where "p \ x \ p * x \ -p" + +(* +definition while :: "'a \ 'a \ 'a" (infixr "\" 59) + where "p \ y \ (p * y)\<^sup>\ * -p" +*) + +definition assigns :: "'a \ 'a \ 'a \ bool" + where "assigns x p q \ x = x * (p * q \ -p * -q)" + +definition preserves :: "'a \ 'a \ bool" + where "preserves x p \ p * x \ x * p \ -p * x \ x * -p" + +lemma ite_neg: + "x \ -p \ y = y \ --p \ x" + by (simp add: ite_def sup_commute) + +lemma ite_import_true: + "x \ -p \ y = -p * x \ -p \ y" + by (metis ite_def tests_dual.sup_idempotent mult_assoc) + +lemma ite_import_false: + "x \ -p \ y = x \ -p \ --p * y" + by (metis ite_import_true ite_neg) 
+ +lemma ite_import_true_false: + "x \ -p \ y = -p * x \ -p \ --p * y" + using ite_import_false ite_import_true by auto + +lemma ite_context_true: + "-p * (x \ -p \ y) = -p * x" + by (metis sup_monoid.add_0_left tests_dual.sup_right_zero tests_dual.top_double_complement wnf_lemma_5 sup_bot_right ite_def mult_assoc mult_left_zero) + +lemma ite_context_false: + "--p * (x \ -p \ y) = --p * y" + by (metis ite_neg ite_context_true) + +lemma ite_context_import: + "-p * (x \ -q \ y) = -p * (x \ -p * -q \ y)" + by (smt ite_def mult_assoc tests_dual.sup_complement_intro tests_dual.sub_sup_demorgan tests_dual.sup_idempotent mult_left_dist_sup) + +lemma ite_conjunction: + "(x \ -q \ y) \ -p \ y = x \ -p * -q \ y" + by (smt sup_assoc sup_commute ite_def mult_assoc tests_dual.sub_sup_demorgan mult_left_dist_sup mult_right_dist_sup tests_dual.inf_complement_intro) + +lemma ite_disjunction: + "x \ -p \ (x \ -q \ y) = x \ -p \ -q \ y" + by (smt (z3) tests_dual.sba_dual.sub_sup_closed sup_assoc ite_def mult_assoc tests_dual.sup_complement_intro tests_dual.sub_sup_demorgan mult_left_dist_sup mult_right_dist_sup tests_dual.inf_demorgan) + +lemma wnf_lemma_6: + "(-p \ -q) * (x \ --p * -q \ y) = (-p \ -q) * (y \ -p \ x)" + by (smt (z3) ite_conjunction ite_context_false ite_context_true semiring.distrib_right tests_dual.sba_dual.inf_cases_2 tests_dual.sba_dual.sub_inf_def tests_dual.sba_dual.sup_complement_intro tests_dual.sub_complement) + +lemma it_ite: + "-p \ x = x \ -p \ 1" + by (simp add: it_def ite_def) + +lemma it_neg: + "--p \ x = 1 \ -p \ x" + using it_ite ite_neg by auto + +lemma it_import_true: + "-p \ x = -p \ -p * x" + using it_ite ite_import_true by auto + +lemma it_context_true: + "-p * (-p \ x) = -p * x" + by (simp add: it_ite ite_context_true) + +lemma it_context_false: + "--p * (-p \ x) = --p" + using it_ite ite_context_false by force + +lemma while_unfold_it: + "-p \ x = -p \ x * (-p \ x)" + by (metis circ_loop_fixpoint it_def mult_assoc while_def) + +lemma 
while_context_false:
+  "--p * (-p \<star> x) = --p"
+  by (metis it_context_false while_unfold_it)
+
+lemma while_context_true:
+  "-p * (-p \<star> x) = -p * x * (-p \<star> x)"
+  by (metis it_context_true mult_assoc while_unfold_it)
+
+lemma while_zero:
+  "bot \<star> x = 1"
+  by (metis circ_zero mult_left_one mult_left_zero one_def while_def)
+
+lemma wnf_lemma_7:
+  "1 * (bot \<star> 1) = 1"
+  by (simp add: while_zero)
+
+lemma while_import_condition:
+  "-p \<star> x = -p \<star> -p * x"
+  by (metis mult_assoc tests_dual.sup_idempotent while_def)
+
+lemma while_import_condition_2:
+  "-p * -q \<star> x = -p * -q \<star> -p * x"
+  by (metis mult_assoc tests_dual.sup_idempotent sub_comm while_def)
+
+lemma wnf_lemma_8:
+  "-r * (-p \<squnion> --p * -q) \<star> (x \<lhd> --p * -q \<rhd> y) = -r * (-p \<squnion> -q) \<star> (y \<lhd> -p \<rhd> x)"
+  by (metis mult_assoc while_def wnf_lemma_6 tests_dual.sba_dual.sup_complement_intro)
+
+text \<open>Theorem 6 - see Theorem 31 on page 329 of Back and von Wright, Acta Informatica 36:295-334, 1999\<close>
+
+lemma split_merge_loops:
+  assumes "--p * y \<le> y * --p"
+    shows "(-p \<squnion> -q) \<star> (x \<lhd> -p \<rhd> y) = (-p \<star> x) * (-q \<star> y)"
+proof -
+  have "-p \<squnion> -q \<star> (x \<lhd> -p \<rhd> y) = (-p * x \<squnion> --p * -q * y)\<^sup>\<circ> * --p * --q"
+    by (smt ite_def mult_assoc sup_commute tests_dual.inf_demorgan while_def wnf_lemma_5)
+  thus ?thesis
+    by (smt assms circ_sup_1 circ_slide import_test mult_assoc preserves_equation sub_comm while_context_false while_def)
+qed
+
+lemma assigns_same:
+  "assigns x (-p) (-p)"
+  by (simp add: assigns_def)
+
+lemma preserves_equation_test:
+  "preserves x (-p) \<longleftrightarrow> -p * x = -p * x * -p \<and> --p * x = --p * x * --p"
+  using preserves_def preserves_equation by auto
+
+lemma preserves_test:
+  "preserves (-q) (-p)"
+  using tests_dual.sub_commutative preserves_def by auto
+
+lemma preserves_zero:
+  "preserves bot (-p)"
+  using tests_dual.sba_dual.sub_bot_def preserves_test by blast
+
+lemma preserves_one:
+  "preserves 1 (-p)"
+  using preserves_def by force
+
+lemma preserves_sup:
+  "preserves x (-p) \<Longrightarrow> preserves y (-p) \<Longrightarrow> preserves (x \<squnion> y) (-p)"
+  by (simp
add: mult_left_dist_sup mult_right_dist_sup preserves_equation_test)
+
+lemma preserves_mult:
+  "preserves x (-p) \<Longrightarrow> preserves y (-p) \<Longrightarrow> preserves (x * y) (-p)"
+  by (smt (verit, best) mult_assoc preserves_equation_test)
+
+lemma preserves_ite:
+  "preserves x (-p) \<Longrightarrow> preserves y (-p) \<Longrightarrow> preserves (x \<lhd> -q \<rhd> y) (-p)"
+  by (simp add: ite_def preserves_mult preserves_sup preserves_test)
+
+lemma preserves_it:
+  "preserves x (-p) \<Longrightarrow> preserves (-q \<rhd> x) (-p)"
+  by (simp add: it_ite preserves_ite preserves_one)
+
+lemma preserves_circ:
+  "preserves x (-p) \<Longrightarrow> preserves (x\<^sup>\<circ>) (-p)"
+  by (meson circ_simulate preserves_def)
+
+lemma preserves_while:
+  "preserves x (-p) \<Longrightarrow> preserves (-q \<star> x) (-p)"
+  using while_def preserves_circ preserves_mult preserves_test by auto
+
+lemma preserves_test_neg:
+  "preserves x (-p) \<Longrightarrow> preserves x (--p)"
+  using preserves_def by auto
+
+lemma preserves_import_circ:
+  "preserves x (-p) \<Longrightarrow> -p * x\<^sup>\<circ> = -p * (-p * x)\<^sup>\<circ>"
+  using import_test preserves_def by blast
+
+lemma preserves_simulate:
+  "preserves x (-p) \<Longrightarrow> -p * x\<^sup>\<circ> = -p * x\<^sup>\<circ> * -p"
+  using preserve_test preserves_def by auto
+
+lemma preserves_import_ite:
+  assumes "preserves z (-p)"
+    shows "z * (x \<lhd> -p \<rhd> y) = z * x \<lhd> -p \<rhd> z * y"
+proof -
+  have 1: "-p * z * (x \<lhd> -p \<rhd> y) = -p * (z * x \<lhd> -p \<rhd> z * y)"
+    by (smt assms ite_context_true mult_assoc preserves_equation_test)
+  have "--p * z * (x \<lhd> -p \<rhd> y) = --p * (z * x \<lhd> -p \<rhd> z * y)"
+    by (smt (z3) assms ite_context_false mult_assoc preserves_equation_test)
+  thus ?thesis
+    using 1 by (metis mult_assoc test_case_split_left_equal)
+qed
+
+lemma preserves_while_context:
+  "preserves x (-p) \<Longrightarrow> -p * (-q \<star> x) = -p * (-p * -q \<star> x)"
+  by (smt (verit, del_insts) mult_assoc tests_dual.sup_complement_intro tests_dual.sub_sup_demorgan preserves_import_circ preserves_mult preserves_simulate preserves_test while_def)
+
+lemma while_ite_context_false:
+  assumes "preserves y (-p)"
+    shows "--p * (-p \<squnion> -q \<star> (x \<lhd> -p \<rhd> y)) = --p * (-q \<star> y)"
+proof
- + have "--p * (-p \ -q \ (x \ -p \ y)) = --p * (--p * -q * y)\<^sup>\ * -(-p \ -q)" + by (smt (z3) assms import_test mult_assoc preserves_equation preserves_equation_test sub_comm while_def tests_dual.sba_dual.sub_sup_demorgan preserves_test split_merge_loops while_context_false) + thus ?thesis + by (metis (no_types, lifting) assms preserves_def mult.assoc split_merge_loops while_context_false) +qed + +text \Theorem 7.1\ + +lemma while_ite_norm: + assumes "assigns z (-p) (-q)" + and "preserves x1 (-q)" + and "preserves x2 (-q)" + and "preserves y1 (-q)" + and "preserves y2 (-q)" + shows "z * (x1 * (-r1 \ y1) \ -p \ x2 * (-r2 \ y2)) = z * (x1 \ -q \ x2) * ((-q * -r1 \ --q * -r2) \ (y1 \ -q \ y2))" +proof - + have 1: "-(-q * -r1 \ --q * -r2) = -q * --r1 \ --q * --r2" + by (smt (z3) tests_dual.complement_2 tests_dual.sub_sup_closed tests_dual.case_duality tests_dual.sub_sup_demorgan) + have "-p * -q * x1 * (-q * -r1 * y1 \ --q * -r2 * y2)\<^sup>\ * (-q * --r1 \ --q * --r2) = -p * -q * x1 * -q * (-q * (-q * -r1 * y1 \ --q * -r2 * y2))\<^sup>\ * (-q * --r1 \ --q * --r2)" + by (smt (verit, del_insts) assms(2,4,5) mult_assoc preserves_sup preserves_equation_test preserves_import_circ preserves_mult preserves_test) + also have "... 
= -p * -q * x1 * -q * (-q * -r1 * y1)\<^sup>\ * (-q * --r1 \ --q * --r2)" + using ite_context_true ite_def mult_assoc by auto + finally have 2: "-p * -q * x1 * (-q * -r1 * y1 \ --q * -r2 * y2)\<^sup>\ * (-q * --r1 \ --q * --r2) = -p * -q * x1 * (-r1 * y1)\<^sup>\ * --r1" + by (smt (verit, del_insts) assms ite_context_true ite_def mult_assoc preserves_equation_test preserves_import_circ preserves_mult preserves_simulate preserves_test) + have "--p * --q * x2 * (-q * -r1 * y1 \ --q * -r2 * y2)\<^sup>\ * (-q * --r1 \ --q * --r2) = --p * --q * x2 * --q * (--q * (-q * -r1 * y1 \ --q * -r2 * y2))\<^sup>\ * (-q * --r1 \ --q * --r2)" + by (smt (verit, del_insts) assms mult_assoc preserves_sup preserves_equation_test preserves_import_circ preserves_mult preserves_test preserves_test_neg) + also have "... = --p * --q * x2 * --q * (--q * -r2 * y2)\<^sup>\ * (-q * --r1 \ --q * --r2)" + using ite_context_false ite_def mult_assoc by auto + finally have "--p * --q * x2 * (-q * -r1 * y1 \ --q * -r2 * y2)\<^sup>\ * (-q * --r1 \ --q * --r2) = --p * --q * x2 * (-r2 * y2)\<^sup>\ * --r2" + by (smt (verit, del_insts) assms(3,5) ite_context_false ite_def mult_assoc preserves_equation_test preserves_import_circ preserves_mult preserves_simulate preserves_test preserves_test_neg) + thus ?thesis + using 1 2 by (smt (z3) assms(1) assigns_def mult_assoc mult_right_dist_sup while_def ite_context_false ite_context_true tests_dual.sub_commutative) +qed + +lemma while_it_norm: + "assigns z (-p) (-q) \ preserves x (-q) \ preserves y (-q) \ z * (-p \ x * (-r \ y)) = z * (-q \ x) * (-q * -r \ y)" + by (metis sup_bot_right tests_dual.sup_right_zero it_context_true it_ite tests_dual.complement_bot preserves_one while_import_condition_2 while_ite_norm wnf_lemma_7) + +lemma while_else_norm: + "assigns z (-p) (-q) \ preserves x (-q) \ preserves y (-q) \ z * (1 \ -p \ x * (-r \ y)) = z * (1 \ -q \ x) * (--q * -r \ y)" + by (metis sup_bot_left tests_dual.sup_right_zero ite_context_false 
tests_dual.complement_bot preserves_one while_import_condition_2 while_ite_norm wnf_lemma_7) + +lemma while_while_pre_norm: + "-p \ x * (-q \ y) = -p \ x * (-p \ -q \ (y \ -q \ x))" + by (smt sup_commute circ_sup_1 circ_left_unfold circ_slide it_def ite_def mult_assoc mult_left_one mult_right_dist_sup tests_dual.inf_demorgan while_def wnf_lemma_5) + +text \Theorem 7.2\ + +lemma while_while_norm: + "assigns z (-p) (-r) \ preserves x (-r) \ preserves y (-r) \ z * (-p \ x * (-q \ y)) = z * (-r \ x) * (-r * (-p \ -q) \ (y \ -q \ x))" + by (smt tests_dual.double_negation tests_dual.sub_sup_demorgan tests_dual.inf_demorgan preserves_ite while_it_norm while_while_pre_norm) + +lemma while_seq_replace: + "assigns z (-p) (-q) \ z * (-p \ x * z) * y = z * (-q \ x * z) * y" + by (smt assigns_def circ_slide mult_assoc tests_dual.wnf_lemma_1 tests_dual.wnf_lemma_2 tests_dual.wnf_lemma_3 tests_dual.wnf_lemma_4 while_def) + +lemma while_ite_replace: + "assigns z (-p) (-q) \ z * (x \ -p \ y) = z * (x \ -q \ y)" + by (smt assigns_def ite_def mult_assoc mult_left_dist_sup sub_comm tests_dual.wnf_lemma_1 tests_dual.wnf_lemma_3) + +lemma while_post_norm_an: + assumes "preserves y (-p)" + shows "(-p \ x) * y = y \ --p \ (-p \ x * (--p \ y))" +proof - + have "-p * (-p * x * (--p * y \ -p))\<^sup>\ * --p = -p * x * ((--p * y \ -p) * -p * x)\<^sup>\ * (--p * y \ -p) * --p" + by (metis circ_slide_1 while_def mult_assoc while_context_true) + also have "... 
= -p * x * (--p * y * bot \ -p * x)\<^sup>\ * --p * y" + by (smt assms sup_bot_right mult_assoc tests_dual.sup_complement tests_dual.sup_idempotent mult_left_zero mult_right_dist_sup preserves_equation_test sub_comm) + finally have "-p * (-p * x * (--p * y \ -p))\<^sup>\ * --p = -p * x * (-p * x)\<^sup>\ * --p * y" + by (metis circ_sup_mult_zero sup_commute mult_assoc) + thus ?thesis + by (smt circ_left_unfold tests_dual.double_negation it_def ite_def mult_assoc mult_left_one mult_right_dist_sup while_def) +qed + +lemma while_post_norm: + "preserves y (-p) \ (-p \ x) * y = -p \ x * (1 \ -p \ y) \ -p \ y" + using it_neg ite_neg while_post_norm_an by force + +lemma wnf_lemma_9: + assumes "assigns z (-p) (-q)" + and "preserves x1 (-q)" + and "preserves y1 (-q)" + and "preserves x2 (-q)" + and "preserves y2 (-q)" + and "preserves x2 (-p)" + and "preserves y2 (-p)" + shows "z * (x1 \ -q \ x2) * (-q * -p \ -r \ (y1 \ -q * -p \ y2)) = z * (x1 \ -p \ x2) * (-p \ -r \ (y1 \ -p \ y2))" +proof - + have "z * --p * --q * (x1 \ -q \ x2) * (-q * -p \ -r \ (y1 \ -q * -p \ y2)) = z * --p * --q * x2 * --q * (--q * (-q * -p \ -r) \ (y1 \ -q * -p \ y2))" + by (smt (verit, del_insts) assms(3-5) tests_dual.double_negation ite_context_false mult_assoc tests_dual.sub_sup_demorgan tests_dual.inf_demorgan preserves_equation_test preserves_ite preserves_while_context) + also have "... = z * --p * --q * x2 * --q * (--q * -r \ --q * y2)" + by (smt sup_bot_left tests_dual.double_negation ite_conjunction ite_context_false mult_assoc tests_dual.sup_complement mult_left_dist_sup mult_left_zero while_import_condition_2) + also have "... 
= z * --p * --q * x2 * (-r \ y2)" + by (metis assms(4,5) mult_assoc preserves_equation_test preserves_test_neg preserves_while_context while_import_condition_2) + finally have 1: "z * --p * --q * (x1 \ -q \ x2) * (-q * -p \ -r \ (y1 \ -q * -p \ y2)) = z * --p * --q * (x1 \ -q \ x2) * (-p \ -r \ (y1 \ -p \ y2))" + by (smt assms(6,7) ite_context_false mult_assoc preserves_equation_test sub_comm while_ite_context_false) + have "z * -p * -q * (x1 \ -q \ x2) * (-q * -p \ -r \ (y1 \ -q * -p \ y2)) = z * -p * -q * (x1 \ -q \ x2) * -q * (-q * (-p \ -r) \ -q * (y1 \ -p \ y2))" + by (smt (verit, del_insts) assms(2-5) tests_dual.double_negation ite_context_import mult_assoc tests_dual.sub_sup_demorgan tests_dual.sup_idempotent mult_left_dist_sup tests_dual.inf_demorgan preserves_equation_test preserves_ite preserves_while_context while_import_condition_2) + hence "z * -p * -q * (x1 \ -q \ x2) * (-q * -p \ -r \ (y1 \ -q * -p \ y2)) = z * -p * -q * (x1 \ -q \ x2) * (-p \ -r \ (y1 \ -p \ y2))" + by (smt assms(2-5) tests_dual.double_negation mult_assoc tests_dual.sub_sup_demorgan tests_dual.sup_idempotent preserves_equation_test preserves_ite preserves_while_context while_import_condition_2) + thus ?thesis + using 1 by (smt assms(1) assigns_def mult_assoc mult_left_dist_sup mult_right_dist_sup while_ite_replace) +qed + +text \Theorem 7.3\ + +lemma while_seq_norm: + assumes "assigns z1 (-r1) (-q)" + and "preserves x2 (-q)" + and "preserves y2 (-q)" + and "preserves z2 (-q)" + and "z1 * z2 = z2 * z1" + and "assigns z2 (-q) (-r)" + and "preserves y1 (-r)" + and "preserves z1 (-r)" + and "preserves x2 (-r)" + and "preserves y2 (-r)" + shows "x1 * z1 * z2 * (-r1 \ y1 * z1) * x2 * (-r2 \ y2) = x1 * z1 * z2 * (y1 * z1 * (1 \ -q \ x2) \ -q \ x2) * (-q \ -r2 \ (y1 * z1 * (1 \ -q \ x2) \ -q \ y2))" +proof - + have 1: "preserves (y1 * z1 * (1 \ -q \ x2)) (-r)" + by (simp add: assms(7-9) ite_def preserves_mult preserves_sup preserves_test) + hence 2: "preserves (y1 * z1 * (1 \ -q \ x2) \ -q 
\ y2) (-r)" + by (simp add: assms(10) preserves_ite) + have "x1 * z1 * z2 * (-r1 \ y1 * z1) * x2 * (-r2 \ y2) = x1 * z1 * z2 * (-q \ y1 * z1) * x2 * (-r2 \ y2)" + using assms(1,5) mult_assoc while_seq_replace by auto + also have "... = x1 * z1 * z2 * (-q \ y1 * z1 * (1 \ -q \ x2 * (-r2 \ y2)) \ -q \ x2 * (-r2 \ y2))" + by (smt assms(2,3) mult_assoc preserves_mult preserves_while while_post_norm) + also have "... = x1 * z1 * (z2 * (-q \ y1 * z1 * (1 \ -q \ x2) * (--q * -r2 \ y2)) \ -q \ z2 * x2 * (-r2 \ y2))" + by (smt assms(2-4) assigns_same mult_assoc preserves_import_ite while_else_norm) + also have "... = x1 * z1 * (z2 * (-r \ y1 * z1 * (1 \ -q \ x2)) * (-r * (-q \ -r2) \ (y1 * z1 * (1 \ -q \ x2) \ -q \ y2)) \ -q \ z2 * x2 * (-r2 \ y2))" + by (smt assms(6-10) tests_dual.double_negation tests_dual.sub_sup_demorgan tests_dual.inf_demorgan preserves_ite preserves_mult preserves_one while_while_norm wnf_lemma_8) + also have "... = x1 * z1 * z2 * ((-r \ y1 * z1 * (1 \ -q \ x2)) * (-r * (-q \ -r2) \ (y1 * z1 * (1 \ -q \ x2) \ -q \ y2)) \ -r \ x2 * (-r2 \ y2))" + by (smt assms(4,6) mult_assoc preserves_import_ite while_ite_replace) + also have "... = x1 * z1 * z2 * (-r * (y1 * z1 * (1 \ -q \ x2)) * (-r * (-q \ -r2) \ (y1 * z1 * (1 \ -q \ x2) \ -q \ y2)) \ -r \ x2 * (-r2 \ y2))" + by (smt mult_assoc it_context_true ite_import_true) + also have "... = x1 * z1 * z2 * (-r * (y1 * z1 * (1 \ -q \ x2)) * -r * (-r * (-q \ -r2) \ (y1 * z1 * (1 \ -q \ x2) \ -q \ y2)) \ -r \ x2 * (-r2 \ y2))" + using 1 by (simp add: preserves_equation_test) + also have "... = x1 * z1 * z2 * (-r * (y1 * z1 * (1 \ -q \ x2)) * -r * (-q \ -r2 \ (y1 * z1 * (1 \ -q \ x2) \ -q \ y2)) \ -r \ x2 * (-r2 \ y2))" + using 2 by (smt (z3) tests_dual.sba_dual.sub_sup_closed mult_assoc preserves_while_context) + also have "... 
= x1 * z1 * z2 * (y1 * z1 * (1 \ -q \ x2) * (-q \ -r2 \ (y1 * z1 * (1 \ -q \ x2) \ -q \ y2)) \ -q \ x2 * (-r2 \ y2))" + by (smt assms(6-9) tests_dual.double_negation ite_import_true mult_assoc tests_dual.sup_idempotent preserves_equation_test preserves_ite preserves_one while_ite_replace) + also have "... = x1 * z1 * z2 * (y1 * z1 * (1 \ -q \ x2) \ -r \ x2) * ((-r * (-q \ -r2) \ --r * -r2) \ ((y1 * z1 * (1 \ -q \ x2) \ -q \ y2) \ -r \ y2))" + by (smt assms(6-10) tests_dual.double_negation mult_assoc tests_dual.sub_sup_demorgan tests_dual.inf_demorgan preserves_ite preserves_mult preserves_one while_ite_norm) + also have "... = x1 * z1 * z2 * (y1 * z1 * (1 \ -q \ x2) \ -r \ x2) * ((-r * (-q \ -r2) \ --r * -r2) \ (y1 * z1 * (1 \ -q \ x2) \ -r * -q \ y2))" + using ite_conjunction by simp + also have "... = x1 * z1 * z2 * (y1 * z1 * (1 \ -q \ x2) \ -r \ x2) * ((-r * -q \ -r2) \ (y1 * z1 * (1 \ -q \ x2) \ -r * -q \ y2))" + by (smt (z3) mult_left_dist_sup sup_assoc tests_dual.sba_dual.sup_cases tests_dual.sub_commutative) + also have "... = x1 * z1 * z2 * (y1 * z1 * (1 \ -q \ x2) \ -q \ x2) * (-q \ -r2 \ (y1 * z1 * (1 \ -q \ x2) \ -q \ y2))" + using 1 by (metis assms(2,3,6,9,10) mult_assoc wnf_lemma_9) + finally show ?thesis + . +qed + +end + +end + diff --git a/thys/Correctness_Algebras/Tests.thy b/thys/Correctness_Algebras/Tests.thy new file mode 100644 --- /dev/null +++ b/thys/Correctness_Algebras/Tests.thy @@ -0,0 +1,183 @@ +(* Title: Tests + Author: Walter Guttmann + Maintainer: Walter Guttmann +*) + +section \Tests\ + +theory Tests + +imports Subset_Boolean_Algebras.Subset_Boolean_Algebras Base + +begin + +context subset_boolean_algebra_extended +begin + +sublocale sba_dual: subset_boolean_algebra_extended where uminus = uminus and sup = inf and minus = "\x y . 
-(-x \<squnion> y)" and inf = sup and bot = top and less_eq = greater_eq and less = greater and top = bot
+  apply unfold_locales
+  apply (simp add: inf_associative)
+  apply (simp add: inf_commutative)
+  using inf_cases_2 apply simp
+  using inf_closed apply simp
+  apply simp
+  apply simp
+  using sub_sup_closed sub_sup_demorgan apply simp
+  apply simp
+  apply (simp add: inf_commutative less_eq_inf)
+  by (metis inf_commutative inf_idempotent inf_left_dist_sup sub_less_def sup_absorb sup_right_zero top_double_complement)
+
+lemma strict_leq_def:
+  "-x < -y \<longleftrightarrow> -x \<le> -y \<and> \<not> (-y \<le> -x)"
+  by (simp add: sba_dual.sba_dual.sub_less_def sba_dual.sba_dual.sub_less_eq_def)
+
+lemma one_def:
+  "top = -bot"
+  by simp
+
+end
+
+class tests = times + uminus + one + ord + sup + bot +
+  assumes sub_assoc: "-x * (-y * -z) = (-x * -y) * -z"
+  assumes sub_comm: "-x * -y = -y * -x"
+  assumes sub_compl: "-x = -(--x * -y) * -(--x * --y)"
+  assumes sub_mult_closed: "-x * -y = --(-x * -y)"
+  assumes the_bot_def: "bot = (THE x . (\<forall>y . x = -y * --y))" (* define without imposing uniqueness *)
+  assumes one_def: "1 = - bot"
+  assumes sup_def: "-x \<squnion> -y = -(--x * --y)"
+  assumes leq_def: "-x \<le> -y \<longleftrightarrow> -x * -y = -x"
+  assumes strict_leq_def: "-x < -y \<longleftrightarrow> -x \<le> -y \<and> \<not> (-y \<le> -x)"
+begin
+
+sublocale tests_dual: subset_boolean_algebra_extended where uminus = uminus and sup = times and minus = "\<lambda>x y . -(-x * y)" and inf = sup and bot = 1 and less_eq = greater_eq and less = greater and top = bot
+  apply unfold_locales
+  apply (simp add: sub_assoc)
+  apply (simp add: sub_comm)
+  apply (simp add: sub_compl)
+  using sub_mult_closed apply simp
+  apply (simp add: the_bot_def)
+  apply (simp add: one_def the_bot_def)
+  apply (simp add: sup_def)
+  apply simp
+  apply (simp add: leq_def sub_comm)
+  by (simp add: leq_def strict_leq_def sub_comm)
+
+sublocale sba: subset_boolean_algebra_extended where uminus = uminus and sup = sup and minus = "\<lambda>x y . 
-(-x \<squnion> y)" and inf = times and bot = bot and less_eq = less_eq and less = less and top = 1 ..
+
+text \<open>sets and sequences of tests\<close>
+
+definition test_set :: "'a set \<Rightarrow> bool"
+  where "test_set A \<equiv> \<forall>x\<in>A . x = --x"
+
+lemma mult_left_dist_test_set:
+  "test_set A \<Longrightarrow> test_set { -p * x | x . x \<in> A }"
+  by (smt mem_Collect_eq sub_mult_closed test_set_def)
+
+lemma mult_right_dist_test_set:
+  "test_set A \<Longrightarrow> test_set { x * -p | x . x \<in> A }"
+  by (smt mem_Collect_eq sub_mult_closed test_set_def)
+
+lemma sup_left_dist_test_set:
+  "test_set A \<Longrightarrow> test_set { -p \<squnion> x | x . x \<in> A }"
+  by (smt mem_Collect_eq tests_dual.sba_dual.sub_sup_closed test_set_def)
+
+lemma sup_right_dist_test_set:
+  "test_set A \<Longrightarrow> test_set { x \<squnion> -p | x . x \<in> A }"
+  by (smt mem_Collect_eq tests_dual.sba_dual.sub_sup_closed test_set_def)
+
+lemma test_set_closed:
+  "A \<subseteq> B \<Longrightarrow> test_set B \<Longrightarrow> test_set A"
+  using test_set_def by auto
+
+definition test_seq :: "(nat \<Rightarrow> 'a) \<Rightarrow> bool"
+  where "test_seq t \<equiv> \<forall>n . t n = --t n"
+
+lemma test_seq_test_set:
+  "test_seq t \<Longrightarrow> test_set { t n | n::nat . True }"
+  using test_seq_def test_set_def by auto
+
+definition nat_test :: "(nat \<Rightarrow> 'a) \<Rightarrow> 'a \<Rightarrow> bool"
+  where "nat_test t s \<equiv> (\<forall>n . t n = --t n) \<and> s = --s \<and> (\<forall>n . t n \<le> s) \<and> (\<forall>x y . (\<forall>n . 
t n * -x \<le> -y) \<longrightarrow> s * -x \<le> -y)"
+
+lemma nat_test_seq:
+  "nat_test t s \<Longrightarrow> test_seq t"
+  by (simp add: nat_test_def test_seq_def)
+
+primrec pSum :: "(nat \<Rightarrow> 'a) \<Rightarrow> nat \<Rightarrow> 'a"
+  where "pSum f 0 = bot"
+      | "pSum f (Suc m) = pSum f m \<squnion> f m"
+
+lemma pSum_test:
+  "test_seq t \<Longrightarrow> pSum t m = --(pSum t m)"
+  apply (induct m)
+  apply simp
+  by (smt pSum.simps(2) tests_dual.sba_dual.sub_sup_closed test_seq_def)
+
+lemma pSum_test_nat:
+  "nat_test t s \<Longrightarrow> pSum t m = --(pSum t m)"
+  by (metis nat_test_seq pSum_test)
+
+lemma pSum_upper:
+  "test_seq t \<Longrightarrow> i<m \<Longrightarrow> t i \<le> pSum t m"
+proof (induct m)
+  show "test_seq t \<Longrightarrow> i<0 \<Longrightarrow> t i \<le> pSum t 0"
+    by (smt less_zeroE)
+next
+  fix n
+  assume "test_seq t \<Longrightarrow> i<n \<Longrightarrow> t i \<le> pSum t n"
+  hence "test_seq t \<Longrightarrow> i<n \<Longrightarrow> t i \<le> pSum t (Suc n)"
+    by (smt (z3) pSum.simps(2) pSum_test tests_dual.sba_dual.upper_bound_left tests_dual.transitive test_seq_def)
+  thus "test_seq t \<Longrightarrow> i<Suc n \<Longrightarrow> t i \<le> pSum t (Suc n)"
+    by (metis less_Suc_eq pSum.simps(2) pSum_test tests_dual.sba_dual.upper_bound_right test_seq_def)
+qed
+
+lemma pSum_below:
+  "test_seq t \<Longrightarrow> (\<forall>m<k . t m * -p \<le> -q) \<Longrightarrow> pSum t k * -p \<le> -q"
+  apply (induct k)
+  apply (simp add: tests_dual.top_greatest)
+  by (smt (verit, ccfv_threshold) tests_dual.sup_right_dist_inf pSum.simps(2) pSum_test test_seq_def sub_mult_closed less_Suc_eq tests_dual.sba_dual.sub_associative tests_dual.sba_dual.sub_less_eq_def)
+
+lemma pSum_below_nat:
+  "nat_test t s \<Longrightarrow> (\<forall>m<k . t m * -p \<le> -q) \<Longrightarrow> pSum t k * -p \<le> -q"
+  by (simp add: nat_test_seq pSum_below)
+
+lemma pSum_below_sum:
+  "nat_test t s \<Longrightarrow> pSum t x \<le> s"
+  by (smt (verit, ccfv_threshold) tests_dual.sup_right_unit nat_test_def one_def pSum_below_nat pSum_test_nat)
+
+lemma ascending_chain_sup_left:
+  "ascending_chain t \<Longrightarrow> test_seq t \<Longrightarrow> ascending_chain (\<lambda>n . -p \<squnion> t n) \<and> test_seq (\<lambda>n . -p \<squnion> t n)"
+  by (smt (z3) ord.ascending_chain_def tests_dual.sba_dual.sub_sup_closed tests_dual.sba_dual.sub_sup_right_isotone test_seq_def)
+
+lemma ascending_chain_sup_right:
+  "ascending_chain t \<Longrightarrow> test_seq t \<Longrightarrow> ascending_chain (\<lambda>n . t n \<squnion> -p) \<and> test_seq (\<lambda>n . 
t n \<squnion> -p)"
+  by (smt ascending_chain_def tests_dual.sba_dual.sub_sup_closed tests_dual.sba_dual.sub_sup_left_isotone test_seq_def)
+
+lemma ascending_chain_mult_left:
+  "ascending_chain t \<Longrightarrow> test_seq t \<Longrightarrow> ascending_chain (\<lambda>n . -p * t n) \<and> test_seq (\<lambda>n . -p * t n)"
+  by (smt (z3) ascending_chain_def sub_mult_closed tests_dual.sba_dual.reflexive tests_dual.sup_isotone test_seq_def)
+
+lemma ascending_chain_mult_right:
+  "ascending_chain t \<Longrightarrow> test_seq t \<Longrightarrow> ascending_chain (\<lambda>n . t n * -p) \<and> test_seq (\<lambda>n . t n * -p)"
+  by (smt (z3) ascending_chain_def sub_mult_closed tests_dual.sba_dual.reflexive tests_dual.sup_isotone test_seq_def)
+
+lemma descending_chain_sup_left:
+  "descending_chain t \<Longrightarrow> test_seq t \<Longrightarrow> descending_chain (\<lambda>n . -p \<squnion> t n) \<and> test_seq (\<lambda>n . -p \<squnion> t n)"
+  by (smt descending_chain_def tests_dual.sba_dual.sub_sup_closed tests_dual.sba_dual.sub_sup_right_isotone test_seq_def)
+
+lemma descending_chain_sup_right:
+  "descending_chain t \<Longrightarrow> test_seq t \<Longrightarrow> descending_chain (\<lambda>n . t n \<squnion> -p) \<and> test_seq (\<lambda>n . t n \<squnion> -p)"
+  by (smt descending_chain_def tests_dual.sba_dual.sub_sup_closed tests_dual.sba_dual.sub_sup_left_isotone test_seq_def)
+
+lemma descending_chain_mult_left:
+  "descending_chain t \<Longrightarrow> test_seq t \<Longrightarrow> descending_chain (\<lambda>n . -p * t n) \<and> test_seq (\<lambda>n . -p * t n)"
+  by (smt (z3) descending_chain_def sub_mult_closed tests_dual.sba_dual.reflexive tests_dual.sup_isotone test_seq_def)
+
+lemma descending_chain_mult_right:
+  "descending_chain t \<Longrightarrow> test_seq t \<Longrightarrow> descending_chain (\<lambda>n . 
t n * -p)"
+  by (smt (z3) descending_chain_def sub_mult_closed tests_dual.sba_dual.reflexive tests_dual.sup_isotone test_seq_def)
+
+end
+
+end
+
diff --git a/thys/Correctness_Algebras/document/root.bib b/thys/Correctness_Algebras/document/root.bib
new file mode 100644
--- /dev/null
+++ b/thys/Correctness_Algebras/document/root.bib
@@ -0,0 +1,242 @@
+@STRING{acta = {Acta Informatica}}
+@STRING{jlamp = {Journal of Logical and Algebraic Methods in Programming}}
+@STRING{jlap = {Journal of Logic and Algebraic Programming}}
+@STRING{lncs = {Lecture Notes in Computer Science}}
+@STRING{scp = {Science of Computer Programming}}
+@STRING{sv = {Springer}}
+@STRING{uc = {University of Canterbury}}
+@STRING{uu = {Universit{\"a}t Ulm}}
+
+@InProceedings{BerghammerGuttmann2015b,
+  author = {Berghammer, R. and Guttmann, W.},
+  title = {Closure, Properties and Closure Properties of Multirelations},
+  editor = {Kahl, W. and Winter, M. and Oliveira, J. N.},
+  booktitle = {Relational and Algebraic Methods in Computer Science (RAMiCS 2015)},
+  publisher = sv,
+  series = lncs,
+  volume = 9348,
+  pages = {67--83},
+  year = 2015,
+  note = {}
+}
+
+@Article{BerghammerGuttmann2017,
+  author = {Berghammer, R. and Guttmann, W.},
+  title = {An Algebraic Approach to Multirelations and their Properties},
+  journal = jlamp,
+  volume = 88,
+  pages = {45--63},
+  year = 2017,
+  note = {}
+}
+
+@InProceedings{Guttmann2009,
+  author = {Guttmann, W.},
+  title = {General Correctness Algebra},
+  editor = {Berghammer, R. and Jaoua, A. M. and M{\"o}ller, B.},
+  booktitle = {Relations and Kleene Algebra in Computer Science (RelMiCS/AKA 2009)},
+  publisher = sv,
+  series = lncs,
+  volume = 5827,
+  pages = {150--165},
+  year = 2009,
+  note = {}
+}
+
+@InProceedings{Guttmann2010a,
+  author = {Guttmann, W.},
+  title = {Partial, Total and General Correctness},
+  editor = {Bolduc, C. and Desharnais, J.
and Ktari, B.}, + booktitle = {Mathematics of Program Construction (MPC 2010)}, + publisher = sv, + series = lncs, + volume = 6120, + pages = {157--177}, + year = 2010, + note = {} +} + +@InProceedings{Guttmann2010d, + author = {Guttmann, W.}, + title = {Unifying Recursion in Partial, Total and General Correctness}, + editor = {Qin, S.}, + booktitle = {Unifying Theories of Programming, Third International Symposium (UTP 2010)}, + publisher = sv, + series = lncs, + volume = 6445, + pages = {207--225}, + year = 2010, + note = {} +} + +@InProceedings{Guttmann2011a, + author = {Guttmann, W.}, + title = {Towards a Typed Omega Algebra}, + editor = {de Swart, H.}, + booktitle = {Relational and Algebraic Methods in Computer Science (RAMiCS 2011)}, + publisher = sv, + series = lncs, + volume = 6663, + pages = {196--211}, + year = 2011, + note = {} +} + +@Article{Guttmann2011b, + author = {Guttmann, W.}, + title = {Fixpoints for General Correctness}, + journal = jlap, + volume = 80, + number = 6, + pages = {248--265}, + year = 2011, + note = {} +} + +@InProceedings{Guttmann2012a, + author = {Guttmann, W.}, + title = {Unifying Correctness Statements}, + editor = {Gibbons, J. and Nogueira, P.}, + booktitle = {Mathematics of Program Construction (MPC 2012)}, + publisher = sv, + series = lncs, + volume = 7342, + pages = {198--219}, + year = 2012, + note = {} +} + +@Article{Guttmann2012b, + author = {Guttmann, W.}, + title = {Typing Theorems of Omega Algebra}, + journal = jlap, + volume = 81, + number = 6, + pages = {643--659}, + year = 2012, + note = {} +} + +@Article{Guttmann2012c, + author = {Guttmann, W.}, + title = {Algebras for Iteration and Infinite Computations}, + journal = acta, + volume = 49, + number = 5, + pages = {343--359}, + year = 2012, + note = {} +} + +@InProceedings{Guttmann2012d, + author = {Guttmann, W.}, + title = {Unifying Lazy and Strict Computations}, + editor = {Kahl, W. and Griffin, T. 
G.}, + booktitle = {Relational and Algebraic Methods in Computer Science (RAMiCS 2012)}, + publisher = sv, + series = lncs, + volume = 7560, + pages = {17--32}, + year = 2012, + note = {} +} + +@Article{Guttmann2013, + author = {Guttmann, W.}, + title = {Extended Designs Algebraically}, + journal = scp, + volume = 78, + number = 11, + pages = {2064--2085}, + year = 2013, + note = {} +} + +@Article{Guttmann2014a, + author = {Guttmann, W.}, + title = {Multirelations with infinite computations}, + journal = jlamp, + volume = 83, + number = 2, + pages = {194--211}, + year = 2014, + note = {} +} + +@InProceedings{Guttmann2014b, + author = {Guttmann, W.}, + title = {Extended Conscriptions Algebraically}, + editor = {H{\"o}fner, P. and Jipsen, P. and Kahl, W. and M{\"u}ller, M. E.}, + booktitle = {Relational and Algebraic Methods in Computer Science (RAMiCS 2014)}, + publisher = sv, + series = lncs, + volume = 8428, + pages = {139--156}, + year = 2014, + note = {} +} + +@Article{Guttmann2014c, + author = {Guttmann, W.}, + title = {Algebras for Correctness of Sequential Computations}, + journal = scp, + volume = 85, + number = {Part B}, + pages = {224--240}, + year = 2014, + note = {} +} + +@TechReport{Guttmann2015a, + author = {Guttmann, W.}, + title = {Isabelle/{HOL} Theories of Algebras for Iteration, Infinite Executions and Correctness of Sequential Computations}, + institution = uc, + number = {{TR-COSC 02/15}}, + year = 2015, + note = {} +} + +@PhDThesis{Guttmann2015b, + author = {Guttmann, W.}, + title = {Algebras for Iteration, Infinite Executions and Correctness of Sequential Computations}, + school = uu, + type = {Habilitationsschrift}, + year = 2015, + note = {} +} + +@Article{Guttmann2015c, + author = {Guttmann, W.}, + title = {Infinite executions of lazy and strict computations}, + journal = jlamp, + volume = 84, + number = 3, + pages = {326--340}, + year = 2015, + note = {} +} + +@Article{Guttmann2016a, + author = {Guttmann, W.}, + title = {An Algebraic 
Approach to Computations with Progress}, + journal = jlamp, + volume = 85, + number = 4, + pages = {520--539}, + year = 2016, + note = {} +} + +@InProceedings{GuttmannStruthWeber2011b, + author = {Guttmann, W. and Struth, G. and Weber, T.}, + title = {Automating Algebraic Methods in {Isabelle}}, + editor = {Qin, S. and Qiu, Z.}, + booktitle = {Formal Methods and Software Engineering (ICFEM 2011)}, + publisher = sv, + series = lncs, + volume = 6991, + pages = {617--632}, + year = 2011, + note = {} +} + diff --git a/thys/Correctness_Algebras/document/root.tex b/thys/Correctness_Algebras/document/root.tex new file mode 100644 --- /dev/null +++ b/thys/Correctness_Algebras/document/root.tex @@ -0,0 +1,46 @@ +\documentclass[11pt,a4paper]{article} + +\usepackage[T1]{fontenc} +\usepackage{isabelle,isabellesym} +\usepackage{amssymb,cite,ragged2e,stmaryrd} +\usepackage{pdfsetup} + +\isabellestyle{it} +\renewenvironment{isamarkuptext}{\par\isastyletext\begin{isapar}\justifying\color{blue}}{\end{isapar}} +\newcommand{\flqq}{\guillemotleft} +\newcommand{\frqq}{\guillemotright} +\renewcommand\labelitemi{$*$} +\urlstyle{rm} + +\begin{document} + +\title{Algebras for Iteration, Infinite Executions and Correctness of Sequential Computations} +\author{Walter Guttmann} +\maketitle + +\begin{abstract} + We study models of state-based non-deterministic sequential computations and describe them using algebras. + We propose algebras that describe iteration for strict and non-strict computations. + They unify computation models which differ in the fixpoints used to represent iteration. + We propose algebras that describe the infinite executions of a computation. + They lead to a unified approximation order and results that connect fixpoints in the approximation and refinement orders. + This unifies the semantics of recursion for a range of computation models. + We propose algebras that describe preconditions and the effect of while-programs under postconditions. 
+ They unify correctness statements in two dimensions: one statement applies in various computation models to various correctness claims. +\end{abstract} + +These theories consolidate results which have appeared in \cite{BerghammerGuttmann2015b,BerghammerGuttmann2017,Guttmann2009,Guttmann2010a,Guttmann2010d,Guttmann2011b,Guttmann2011a,Guttmann2012c,Guttmann2012b,Guttmann2012a,Guttmann2012d,Guttmann2013,Guttmann2014c,Guttmann2014b,Guttmann2014a,Guttmann2015b,Guttmann2015c,Guttmann2015a,Guttmann2016a,GuttmannStruthWeber2011b}. +Most are described in \cite{Guttmann2015b}. +Theorem numbers refer to \cite{Guttmann2015b} except in theory \emph{Lattice-Ordered Semirings}, where they refer to \cite{BerghammerGuttmann2017}, and in theories \emph{Capped Omega Algebras}, \emph{N-Algebras}, \emph{N-Omega-Algebras}, \emph{N-Omega Binary Iterings} and \emph{Recursion}, where they refer to \cite{Guttmann2016a}. + +\tableofcontents + +\begin{flushleft} +\input{session} +\end{flushleft} + +\bibliographystyle{abbrv} +\bibliography{root} + +\end{document} + diff --git a/thys/ROOTS b/thys/ROOTS --- a/thys/ROOTS +++ b/thys/ROOTS @@ -1,632 +1,634 @@ ADS_Functor AI_Planning_Languages_Semantics AODV AVL-Trees AWN Abortable_Linearizable_Modules Abs_Int_ITP2012 Abstract-Hoare-Logics Abstract-Rewriting Abstract_Completeness Abstract_Soundness Adaptive_State_Counting Affine_Arithmetic Aggregation_Algebras Akra_Bazzi Algebraic_Numbers Algebraic_VCs Allen_Calculus Amicable_Numbers Amortized_Complexity AnselmGod Applicative_Lifting Approximation_Algorithms Architectural_Design_Patterns Aristotles_Assertoric_Syllogistic Arith_Prog_Rel_Primes ArrowImpossibilityGS Attack_Trees Auto2_HOL Auto2_Imperative_HOL AutoFocus-Stream Automated_Stateful_Protocol_Verification Automatic_Refinement AxiomaticCategoryTheory BDD BD_Security_Compositional BNF_CC BNF_Operations BTree Banach_Steinhaus +Belief_Revision Bell_Numbers_Spivey BenOr_Kozen_Reif Berlekamp_Zassenhaus Bernoulli Bertrands_Postulate Bicategory 
BinarySearchTree Binding_Syntax_Theory Binomial-Heaps Binomial-Queues BirdKMP Blue_Eyes Bondy Boolean_Expression_Checkers Bounded_Deducibility_Security Buchi_Complementation Budan_Fourier Buffons_Needle Buildings BytecodeLogicJmlTypes C2KA_DistributedSystems CAVA_Automata CAVA_LTL_Modelchecker CCS CISC-Kernel CRDT CSP_RefTK CYK CZH_Elementary_Categories CZH_Foundations CZH_Universal_Constructions CakeML CakeML_Codegen Call_Arity Card_Equiv_Relations Card_Multisets Card_Number_Partitions Card_Partitions Cartan_FP Case_Labeling Catalan_Numbers Category Category2 Category3 Cauchy Cayley_Hamilton Certification_Monads Chandy_Lamport Chord_Segments Circus Clean ClockSynchInst Closest_Pair_Points CoCon CofGroups Coinductive Coinductive_Languages Collections Combinatorics_Words Combinatorics_Words_Graph_Lemma Combinatorics_Words_Lyndon Comparison_Sort_Lower_Bound Compiling-Exceptions-Correctly Complete_Non_Orders Completeness Complex_Bounded_Operators Complex_Geometry Complx ComponentDependencies ConcurrentGC ConcurrentIMP Concurrent_Ref_Alg Concurrent_Revisions Conditional_Simplification Conditional_Transfer_Rule Consensus_Refined Constructive_Cryptography Constructive_Cryptography_CM Constructor_Funs Containers CoreC++ Core_DOM Core_SC_DOM +Correctness_Algebras CoSMed CoSMeDis Count_Complex_Roots CryptHOL CryptoBasedCompositionalProperties Cubic_Quartic_Equations DFS_Framework DOM_Components DPT-SAT-Solver DataRefinementIBP Datatype_Order_Generator Decl_Sem_Fun_PL Decreasing-Diagrams Decreasing-Diagrams-II Deep_Learning Delta_System_Lemma Density_Compiler Dependent_SIFUM_Refinement Dependent_SIFUM_Type_Systems Depth-First-Search Derangements Deriving Descartes_Sign_Rule Design_Theory Dict_Construction Differential_Dynamic_Logic Differential_Game_Logic Dijkstra_Shortest_Path Diophantine_Eqns_Lin_Hom Dirichlet_L Dirichlet_Series DiscretePricing Discrete_Summation DiskPaxos Dominance_CHK DynamicArchitectures Dynamic_Tables E_Transcendental Echelon_Form EdmondsKarp_Maxflow 
Efficient-Mergesort Elliptic_Curves_Group_Law Encodability_Process_Calculi Epistemic_Logic Ergodic_Theory Error_Function Euler_MacLaurin Euler_Partition Example-Submission Extended_Finite_State_Machine_Inference Extended_Finite_State_Machines FFT FLP FOL-Fitting FOL_Axiomatic FOL_Harrison FOL_Seq_Calc1 Factored_Transition_System_Bounding Falling_Factorial_Sum Farkas FeatherweightJava Featherweight_OCL Fermat3_4 FileRefinement FinFun Finger-Trees Finite-Map-Extras Finite_Automata_HF Finitely_Generated_Abelian_Groups First_Order_Terms First_Welfare_Theorem Fishburn_Impossibility Fisher_Yates Flow_Networks Floyd_Warshall Flyspeck-Tame FocusStreamsCaseStudies Forcing Formal_Puiseux_Series Formal_SSA Formula_Derivatives Fourier Free-Boolean-Algebra Free-Groups Fresh_Identifiers FunWithFunctions FunWithTilings Functional-Automata Functional_Ordered_Resolution_Prover Furstenberg_Topology GPU_Kernel_PL Gabow_SCC GaleStewart_Games Game_Based_Crypto Gauss-Jordan-Elim-Fun Gauss_Jordan Gauss_Sums Gaussian_Integers GenClock General-Triangle Generalized_Counting_Sort Generic_Deriving Generic_Join GewirthPGCProof Girth_Chromatic GoedelGod Goedel_HFSet_Semantic Goedel_HFSet_Semanticless Goedel_Incompleteness Goodstein_Lambda GraphMarkingIBP Graph_Saturation Graph_Theory Green Groebner_Bases Groebner_Macaulay Gromov_Hyperbolicity Grothendieck_Schemes Group-Ring-Module HOL-CSP HOLCF-Prelude HRB-Slicing Heard_Of Hello_World HereditarilyFinite Hermite Hermite_Lindemann Hidden_Markov_Models Higher_Order_Terms Hoare_Time Hood_Melville_Queue HotelKeyCards Huffman Hybrid_Logic Hybrid_Multi_Lane_Spatial_Logic Hybrid_Systems_VCs HyperCTL IEEE_Floating_Point IFC_Tracking IMAP-CRDT IMO2019 IMP2 IMP2_Binary_Heap IMP_Compiler IP_Addresses Imperative_Insertion_Sort Impossible_Geometry Incompleteness Incredible_Proof_Machine Inductive_Confidentiality Inductive_Inference InfPathElimination InformationFlowSlicing InformationFlowSlicing_Inter Integration Interpreter_Optimizations 
Interval_Arithmetic_Word32 Intro_Dest_Elim Iptables_Semantics Irrational_Series_Erdos_Straus Irrationality_J_Hancl IsaGeoCoq Isabelle_C Isabelle_Marries_Dirac Isabelle_Meta_Model Jacobson_Basic_Algebra Jinja JinjaDCI JinjaThreads JiveDataStoreModel Jordan_Hoelder Jordan_Normal_Form KAD KAT_and_DRA KBPs KD_Tree Key_Agreement_Strong_Adversaries Kleene_Algebra Knot_Theory Knuth_Bendix_Order Knuth_Morris_Pratt Koenigsberg_Friendship Kruskal Kuratowski_Closure_Complement LLL_Basis_Reduction LLL_Factorization LOFT LTL LTL_Master_Theorem LTL_Normal_Form LTL_to_DRA LTL_to_GBA Lam-ml-Normalization LambdaAuth LambdaMu Lambda_Free_EPO Lambda_Free_KBOs Lambda_Free_RPOs Lambert_W Landau_Symbols Laplace_Transform Latin_Square LatticeProperties Launchbury Laws_of_Large_Numbers Lazy-Lists-II Lazy_Case Lehmer Lifting_Definition_Option Lifting_the_Exponent LightweightJava LinearQuantifierElim Linear_Inequalities Linear_Programming Linear_Recurrences Liouville_Numbers List-Index List-Infinite List_Interleaving List_Inversions List_Update LocalLexing Localization_Ring Locally-Nameless-Sigma Logging_Independent_Anonymity Lowe_Ontological_Argument Lower_Semicontinuous Lp Lucas_Theorem MFMC_Countable MFODL_Monitor_Optimized MFOTL_Monitor MSO_Regex_Equivalence Markov_Models Marriage Mason_Stothers Matrices_for_ODEs Matrix Matrix_Tensor Matroids Max-Card-Matching Median_Of_Medians_Selection Menger Mereology Mersenne_Primes Metalogic_ProofChecker MiniML MiniSail Minimal_SSA Minkowskis_Theorem Minsky_Machines Modal_Logics_for_NTS Modular_Assembly_Kit_Security Modular_arithmetic_LLL_and_HNF_algorithms Monad_Memo_DP Monad_Normalisation MonoBoolTranAlgebra MonoidalCategory Monomorphic_Monad MuchAdoAboutTwo Multi_Party_Computation Multirelations Myhill-Nerode Name_Carrying_Type_Inference Nash_Williams Nat-Interval-Logic Native_Word Nested_Multisets_Ordinals Network_Security_Policy_Verification Neumann_Morgenstern_Utility No_FTL_observers Nominal2 Noninterference_CSP 
Noninterference_Concurrent_Composition Noninterference_Generic_Unwinding Noninterference_Inductive_Unwinding Noninterference_Ipurge_Unwinding Noninterference_Sequential_Composition NormByEval Nullstellensatz Octonions OpSets Open_Induction Optics Optimal_BST Orbit_Stabiliser Order_Lattice_Props Ordered_Resolution_Prover Ordinal Ordinal_Partitions Ordinals_and_Cardinals Ordinary_Differential_Equations PAC_Checker PCF PLM POPLmark-deBruijn PSemigroupsConvolution Padic_Ints Pairing_Heap Paraconsistency Parity_Game Partial_Function_MR Partial_Order_Reduction Password_Authentication_Protocol Pell Perfect-Number-Thm Perron_Frobenius Physical_Quantities Pi_Calculus Pi_Transcendental Planarity_Certificates Poincare_Bendixson Poincare_Disc Polynomial_Factorization Polynomial_Interpolation Polynomials Pop_Refinement Posix-Lexing Possibilistic_Noninterference Power_Sum_Polynomials Pratt_Certificate Presburger-Automata Prim_Dijkstra_Simple Prime_Distribution_Elementary Prime_Harmonic_Series Prime_Number_Theorem Priority_Queue_Braun Priority_Search_Trees Probabilistic_Noninterference Probabilistic_Prime_Tests Probabilistic_System_Zoo Probabilistic_Timed_Automata Probabilistic_While Program-Conflict-Analysis Progress_Tracking Projective_Geometry Projective_Measurements Promela Proof_Strategy_Language PropResPI Propositional_Proof_Systems Prpu_Maxflow PseudoHoops Psi_Calculi Ptolemys_Theorem Public_Announcement_Logic QHLProver QR_Decomposition Quantales Quaternions Quick_Sort_Cost RIPEMD-160-SPARK ROBDD RSAPSS Ramsey-Infinite Random_BSTs Random_Graph_Subgraph_Threshold Randomised_BSTs Randomised_Social_Choice Rank_Nullity_Theorem Real_Impl Recursion-Addition Recursion-Theory-I Refine_Imperative_HOL Refine_Monadic RefinementReactive Regex_Equivalence Regression_Test_Selection Regular-Sets Regular_Algebras Relation_Algebra Relational-Incorrectness-Logic Relational_Disjoint_Set_Forests Relational_Forests Relational_Method Relational_Minimum_Spanning_Trees Relational_Paths 
Rep_Fin_Groups Residuated_Lattices Resolution_FOL Rewriting_Z Ribbon_Proofs Robbins-Conjecture Robinson_Arithmetic Root_Balanced_Tree Routing Roy_Floyd_Warshall SATSolverVerification SC_DOM_Components SDS_Impossibility SIFPL SIFUM_Type_Systems SPARCv8 Safe_Distance Safe_OCL Saturation_Framework Saturation_Framework_Extensions Schutz_Spacetime Secondary_Sylow Security_Protocol_Refinement Selection_Heap_Sort SenSocialChoice Separata Separation_Algebra Separation_Logic_Imperative_HOL SequentInvertibility Shadow_DOM Shadow_SC_DOM Shivers-CFA ShortestPath Show Sigma_Commit_Crypto Signature_Groebner Simpl Simple_Firewall Simplex Skew_Heap Skip_Lists Slicing Sliding_Window_Algorithm Smith_Normal_Form Smooth_Manifolds Sort_Encodings Source_Coding_Theorem SpecCheck Special_Function_Bounds Splay_Tree Sqrt_Babylonian Stable_Matching Statecharts Stateful_Protocol_Composition_and_Typing Stellar_Quorums Stern_Brocot Stewart_Apollonius Stirling_Formula Stochastic_Matrices Stone_Algebras Stone_Kleene_Relation_Algebras Stone_Relation_Algebras Store_Buffer_Reduction Stream-Fusion Stream_Fusion_Code Strong_Security Sturm_Sequences Sturm_Tarski Stuttering_Equivalence Subresultants Subset_Boolean_Algebras SumSquares Sunflowers SuperCalc Surprise_Paradox Symmetric_Polynomials Syntax_Independent_Logic Szpilrajn TESL_Language TLA Tail_Recursive_Functions Tarskis_Geometry Taylor_Models Three_Circles Timed_Automata Topological_Semantics Topology TortoiseHare Transcendence_Series_Hancl_Rucki Transformer_Semantics Transition_Systems_and_Automata Transitive-Closure Transitive-Closure-II Treaps Tree-Automata Tree_Decomposition Triangle Trie Twelvefold_Way Tycon Types_Tableaus_and_Goedels_God Types_To_Sets_Extension UPF UPF_Firewall UTP Universal_Turing_Machine UpDown_Scheme Valuation Van_der_Waerden VectorSpace VeriComp Verified-Prover Verified_SAT_Based_AI_Planning VerifyThis2018 VerifyThis2019 Vickrey_Clarke_Groves Virtual_Substitution VolpanoSmith WHATandWHERE_Security 
WOOT_Strong_Eventual_Consistency WebAssembly Weighted_Path_Order Weight_Balanced_Trees Well_Quasi_Orders Winding_Number_Eval Word_Lib WorkerWrapper XML ZFC_in_HOL Zeta_3_Irrational Zeta_Function pGCL diff --git a/web/entries/Akra_Bazzi.html b/web/entries/Akra_Bazzi.html --- a/web/entries/Akra_Bazzi.html +++ b/web/entries/Akra_Bazzi.html @@ -1,243 +1,243 @@ The Akra-Bazzi theorem and the Master theorem - Archive of Formal Proofs

 

 

 

 

 

 

The Akra-Bazzi theorem and the Master theorem

 

Title: The Akra-Bazzi theorem and the Master theorem
Author: - Manuel Eberl + Manuel Eberl
Submission date: 2015-07-14
Abstract: This article contains a formalisation of the Akra-Bazzi method based on a proof by Leighton. It is a generalisation of the well-known Master Theorem for analysing the complexity of Divide & Conquer algorithms. We also include a generalised version of the Master theorem based on the Akra-Bazzi theorem, which is easier to apply than the Akra-Bazzi theorem itself.

Some proof methods that facilitate applying the Master theorem are also included. For a more detailed explanation of the formalisation and the proof methods, see the accompanying paper (publication forthcoming).

BibTeX:
@article{Akra_Bazzi-AFP,
   author  = {Manuel Eberl},
   title   = {The Akra-Bazzi theorem and the Master theorem},
   journal = {Archive of Formal Proofs},
   month   = jul,
   year    = 2015,
   note    = {\url{https://isa-afp.org/entries/Akra_Bazzi.html},
             Formal proof development},
   ISSN    = {2150-914x},
 }
License: BSD License
Depends on: Landau_Symbols
Used by: Closest_Pair_Points

\ No newline at end of file diff --git a/web/entries/Algebraic_Numbers.html b/web/entries/Algebraic_Numbers.html --- a/web/entries/Algebraic_Numbers.html +++ b/web/entries/Algebraic_Numbers.html @@ -1,241 +1,241 @@ Algebraic Numbers in Isabelle/HOL - Archive of Formal Proofs

 

 

 

 

 

 

Algebraic Numbers in Isabelle/HOL

 

Title: Algebraic Numbers in Isabelle/HOL
Authors: René Thiemann (rene /dot/ thiemann /at/ uibk /dot/ ac /dot/ at), Akihisa Yamada (akihisa /dot/ yamada /at/ aist /dot/ go /dot/ jp) and Sebastiaan Joosten
Contributor: - Manuel Eberl + Manuel Eberl
Submission date: 2015-12-22
Abstract: Based on existing libraries for matrices, factorization of rational polynomials, and Sturm's theorem, we formalized algebraic numbers in Isabelle/HOL. Our development serves as an implementation for real and complex numbers, and it makes it possible to compute roots and completely factorize real and complex polynomials, provided that all coefficients are rational numbers. Moreover, we provide two implementations to display algebraic numbers: an injective but expensive one, and a faster but approximate version.

To this end, we mechanized several results on resultants, which also required us to prove that polynomials over a unique factorization domain form again a unique factorization domain.

Change history: [2016-01-29]: Split off Polynomial Interpolation and Polynomial Factorization
[2017-04-16]: Use certified Berlekamp-Zassenhaus factorization, use subresultant algorithm for computing resultants, improved bisection algorithm
BibTeX:
@article{Algebraic_Numbers-AFP,
   author  = {René Thiemann and Akihisa Yamada and Sebastiaan Joosten},
   title   = {Algebraic Numbers in Isabelle/HOL},
   journal = {Archive of Formal Proofs},
   month   = dec,
   year    = 2015,
   note    = {\url{https://isa-afp.org/entries/Algebraic_Numbers.html},
             Formal proof development},
   ISSN    = {2150-914x},
 }
License: BSD License
Depends on: Berlekamp_Zassenhaus, Sturm_Sequences
Used by: BenOr_Kozen_Reif, Cubic_Quartic_Equations, Hermite_Lindemann, LLL_Basis_Reduction

\ No newline at end of file diff --git a/web/entries/Belief_Revision.html b/web/entries/Belief_Revision.html new file mode 100644 --- /dev/null +++ b/web/entries/Belief_Revision.html @@ -0,0 +1,198 @@ + + + + +Belief Revision Theory - Archive of Formal Proofs + + + + + + + + + + + + + + + + + + + + + + + + +
+

 

+ + + +

 

+

 

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

 

+

 

+
+
+

 

+

Belief + + Revision + + Theory + +

+

 

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Title:Belief Revision Theory
+ Authors: + + Valentin Fouillard (valentin /dot/ fouillard /at/ limsi /dot/ fr), + Safouan Taha (safouan /dot/ taha /at/ lri /dot/ fr), + Frédéric Boulanger (frederic /dot/ boulanger /at/ centralesupelec /dot/ fr) and + Nicolas Sabouret +
Submission date:2021-10-19
Abstract: +The 1985 paper by Carlos Alchourrón, Peter Gärdenfors, and David +Makinson (AGM), “On the Logic of Theory Change: Partial Meet +Contraction and Revision Functions” launches a large and rapidly +growing literature that employs formal models and logics to handle +changing beliefs of a rational agent and to take into account new +pieces of information observed by this agent. In 2011, a review book +titled "AGM 25 Years: Twenty-Five Years of Research in Belief +Change" was edited to summarize the first twenty-five years of +work based on AGM. This HOL-based AFP entry is a faithful +formalization of the AGM operators (e.g. contraction, revision, +remainder ...) axiomatized in the original paper. It also contains the +proofs of all the theorems stated in the paper that show how these +operators combine. Proofs of both the Harper and Levi identities are +established.
BibTeX: +
@article{Belief_Revision-AFP,
+  author  = {Valentin Fouillard and Safouan Taha and Frédéric Boulanger and Nicolas Sabouret},
+  title   = {Belief Revision Theory},
+  journal = {Archive of Formal Proofs},
+  month   = oct,
+  year    = 2021,
+  note    = {\url{https://isa-afp.org/entries/Belief_Revision.html},
+            Formal proof development},
+  ISSN    = {2150-914x},
+}
+
License:BSD License
+ +

+ + + + + + + + + + + + + + + + + + +
+
+ + + + + + \ No newline at end of file diff --git a/web/entries/Bernoulli.html b/web/entries/Bernoulli.html --- a/web/entries/Bernoulli.html +++ b/web/entries/Bernoulli.html @@ -1,225 +1,225 @@ Bernoulli Numbers - Archive of Formal Proofs

 

 

 

 

 

 

Bernoulli Numbers

 

Title: Bernoulli Numbers
Authors: Lukas Bulwahn (lukas /dot/ bulwahn /at/ gmail /dot/ com) and - Manuel Eberl + Manuel Eberl
Submission date: 2017-01-24
Abstract:

Bernoulli numbers were first discovered in the closed-form expansion of the sum 1<sup>m</sup> + 2<sup>m</sup> + … + n<sup>m</sup> for a fixed m and appear in many other places. This entry provides three different definitions for them: a recursive one, an explicit one, and one through their exponential generating function.

In addition, we prove some basic facts, e.g. their relation to sums of powers of integers and that all odd Bernoulli numbers except the first are zero, and some advanced facts like their relationship to the Riemann zeta function on positive even integers.

We also prove the correctness of the Akiyama–Tanigawa algorithm for computing Bernoulli numbers with reasonable efficiency, and we define the periodic Bernoulli polynomials (which appear e.g. in the Euler–MacLaurin summation formula and the expansion of the log-Gamma function) and prove their basic properties.

BibTeX:
@article{Bernoulli-AFP,
   author  = {Lukas Bulwahn and Manuel Eberl},
   title   = {Bernoulli Numbers},
   journal = {Archive of Formal Proofs},
   month   = jan,
   year    = 2017,
   note    = {\url{https://isa-afp.org/entries/Bernoulli.html},
             Formal proof development},
   ISSN    = {2150-914x},
 }
License: BSD License
Used by: Euler_MacLaurin, Lambert_W, Stirling_Formula, Zeta_Function

\ No newline at end of file diff --git a/web/entries/Bertrands_Postulate.html b/web/entries/Bertrands_Postulate.html --- a/web/entries/Bertrands_Postulate.html +++ b/web/entries/Bertrands_Postulate.html @@ -1,225 +1,225 @@ Bertrand's postulate - Archive of Formal Proofs

 

 

 

 

 

 

Bertrand's postulate

 

Title: Bertrand's postulate
Authors: Julian Biendarra and - Manuel Eberl + Manuel Eberl
Contributor: Lawrence C. Paulson
Submission date: 2017-01-17
Abstract:

Bertrand's postulate is an early result on the distribution of prime numbers: For every positive integer n, there exists a prime number that lies strictly between n and 2n. The proof is ported from John Harrison's formalisation in HOL Light. It proceeds by first showing that the property is true for all n greater than or equal to 600 and then showing that it also holds for all n below 600 by case distinction.

BibTeX:
@article{Bertrands_Postulate-AFP,
   author  = {Julian Biendarra and Manuel Eberl},
   title   = {Bertrand's postulate},
   journal = {Archive of Formal Proofs},
   month   = jan,
   year    = 2017,
   note    = {\url{https://isa-afp.org/entries/Bertrands_Postulate.html},
             Formal proof development},
   ISSN    = {2150-914x},
 }
License: BSD License
Depends on: Pratt_Certificate
Used by: Dirichlet_L

\ No newline at end of file diff --git a/web/entries/Bicategory.html b/web/entries/Bicategory.html --- a/web/entries/Bicategory.html +++ b/web/entries/Bicategory.html @@ -1,223 +1,226 @@ Bicategories - Archive of Formal Proofs

 

 

 

 

 

 

Bicategories

 

+(revision 472cb2268826)
+[2021-07-22]: +Added new material: "concrete bicategories" and "bicategory of categories". +(revision 49d3aa43c180)
Title: Bicategories
Author: Eugene W. Stark (stark /at/ cs /dot/ stonybrook /dot/ edu)
Submission date: 2020-01-06
Abstract:

Taking as a starting point the author's previous work on developing aspects of category theory in Isabelle/HOL, this article gives a compatible formalization of the notion of "bicategory" and develops a framework within which formal proofs of facts about bicategories can be given. The framework includes a number of basic results, including the Coherence Theorem, the Strictness Theorem, pseudofunctors and biequivalence, and facts about internal equivalences and adjunctions in a bicategory. As a driving application and demonstration of the utility of the framework, it is used to give a formal proof of a theorem, due to Carboni, Kasangian, and Street, that characterizes up to biequivalence the bicategories of spans in a category with pullbacks. The formalization effort necessitated the filling-in of many details that were not evident from the brief presentation in the original paper, as well as identifying a few minor corrections along the way.

Revisions made subsequent to the first version of this article added additional material on pseudofunctors, pseudonatural transformations, modifications, and equivalence of bicategories; the main thrust being to give a proof that a pseudofunctor is a biequivalence if and only if it can be extended to an equivalence of bicategories.

Change history: [2020-02-15]: Move ConcreteCategory.thy from Bicategory to Category3 and use it systematically. Make other minor improvements throughout. (revision a51840d36867)
[2020-11-04]: Added new material on equivalence of bicategories, with associated changes. -(revision 472cb2268826)
BibTeX:
@article{Bicategory-AFP,
   author  = {Eugene W. Stark},
   title   = {Bicategories},
   journal = {Archive of Formal Proofs},
   month   = jan,
   year    = 2020,
   note    = {\url{https://isa-afp.org/entries/Bicategory.html},
             Formal proof development},
   ISSN    = {2150-914x},
 }
License: BSD License
Depends on: MonoidalCategory

\ No newline at end of file diff --git a/web/entries/Buffons_Needle.html b/web/entries/Buffons_Needle.html --- a/web/entries/Buffons_Needle.html +++ b/web/entries/Buffons_Needle.html @@ -1,218 +1,218 @@ Buffon's Needle Problem - Archive of Formal Proofs

 

 

 

 

 

 

Buffon's Needle Problem

 

Title: Buffon's Needle Problem
Author: - Manuel Eberl + Manuel Eberl
Submission date: 2017-06-06
Abstract: In the 18th century, Georges-Louis Leclerc, Comte de Buffon posed and later solved the following problem, which is often called the first problem ever solved in geometric probability: Given a floor divided into vertical strips of the same width, what is the probability that a needle thrown onto the floor randomly will cross two strips? This entry formally defines the problem in the case where the needle's position is chosen uniformly at random in a single strip around the origin (which is equivalent to larger arrangements due to symmetry). It then provides proofs of the simple solution in the case where the needle's length is no greater than the width of the strips and the more complicated solution in the opposite case.
BibTeX:
@article{Buffons_Needle-AFP,
   author  = {Manuel Eberl},
   title   = {Buffon's Needle Problem},
   journal = {Archive of Formal Proofs},
   month   = jun,
   year    = 2017,
   note    = {\url{https://isa-afp.org/entries/Buffons_Needle.html},
             Formal proof development},
   ISSN    = {2150-914x},
 }
License: BSD License

\ No newline at end of file diff --git a/web/entries/Catalan_Numbers.html b/web/entries/Catalan_Numbers.html --- a/web/entries/Catalan_Numbers.html +++ b/web/entries/Catalan_Numbers.html @@ -1,220 +1,220 @@ Catalan Numbers - Archive of Formal Proofs

 

 

 

 

 

 

Catalan Numbers

 

Title: Catalan Numbers
Author: - Manuel Eberl + Manuel Eberl
Submission date: 2016-06-21
Abstract:

In this work, we define the Catalan numbers C<sub>n</sub> and prove several equivalent definitions (including some closed-form formulae). We also show one of their applications (counting the number of binary trees of size n), prove the asymptotic growth approximation C<sub>n</sub> ∼ 4<sup>n</sup> / (√π · n<sup>1.5</sup>), and provide reasonably efficient executable code to compute them.

The derivation of the closed-form formulae uses algebraic manipulations of the ordinary generating function of the Catalan numbers, and the asymptotic approximation is then done using generalised binomial coefficients and the Gamma function. Thanks to these highly non-elementary mathematical tools, the proofs are very short and simple.

BibTeX:
@article{Catalan_Numbers-AFP,
   author  = {Manuel Eberl},
   title   = {Catalan Numbers},
   journal = {Archive of Formal Proofs},
   month   = jun,
   year    = 2016,
   note    = {\url{https://isa-afp.org/entries/Catalan_Numbers.html},
             Formal proof development},
   ISSN    = {2150-914x},
 }
License: BSD License
Depends on: Landau_Symbols

\ No newline at end of file diff --git a/web/entries/Category3.html b/web/entries/Category3.html --- a/web/entries/Category3.html +++ b/web/entries/Category3.html @@ -1,281 +1,285 @@ Category Theory with Adjunctions and Limits - Archive of Formal Proofs

 

 

 

 

 

 

Category Theory with Adjunctions and Limits

 

+(revision 472cb2268826)
+[2021-07-22]: +Minor changes to sublocale declarations related to functor/natural transformation to +avoid issues with global interpretations reported 2/2/2021 by Filip Smola. +(revision 49d3aa43c180)
Title: Category Theory with Adjunctions and Limits
Author: Eugene W. Stark (stark /at/ cs /dot/ stonybrook /dot/ edu)
Submission date: 2016-06-26
Abstract:

This article attempts to develop a usable framework for doing category theory in Isabelle/HOL. Our point of view, which to some extent differs from that of the previous AFP articles on the subject, is to try to explore how category theory can be done efficaciously within HOL, rather than trying to match exactly the way things are done using a traditional approach. To this end, we define the notion of category in an "object-free" style, in which a category is represented by a single partial composition operation on arrows. This way of defining categories provides some advantages in the context of HOL, including the ability to avoid the use of records and the possibility of defining functors and natural transformations simply as certain functions on arrows, rather than as composite objects. We define various constructions associated with the basic notions, including: dual category, product category, functor category, discrete category, free category, functor composition, and horizontal and vertical composite of natural transformations. A "set category" locale is defined that axiomatizes the notion "category of all sets at a type and all functions between them," and a fairly extensive set of properties of set categories is derived from the locale assumptions. The notion of a set category is used to prove the Yoneda Lemma in a general setting of a category equipped with a "hom embedding," which maps arrows of the category to the "universe" of the set category. We also give a treatment of adjunctions, defining adjunctions via left and right adjoint functors, natural bijections between hom-sets, and unit and counit natural transformations, and showing the equivalence of these definitions. We also develop the theory of limits, including representations of functors, diagrams and cones, and diagonal functors. We show that right adjoint functors preserve limits, and that limits can be constructed via products and equalizers. 
We characterize the conditions under which limits exist in a set category. We also examine the case of limits in a functor category, ultimately culminating in a proof that the Yoneda embedding preserves limits.

Revisions made subsequent to the first version of this article added material on equivalence of categories, cartesian categories, categories with pullbacks, categories with finite limits, and cartesian closed categories. A construction was given of the category of hereditarily finite sets and functions between them, and it was shown that this category is cartesian closed.

Change history: [2018-05-29]: Revised axioms for the category locale. Introduced notation for composition and "in hom". (revision 8318366d4575)
[2020-02-15]: Move ConcreteCategory.thy from Bicategory to Category3 and use it systematically. Make other minor improvements throughout. (revision a51840d36867)
[2020-07-10]: Added new material, mostly centered around cartesian categories. (revision 06640f317a79)
[2020-11-04]: Minor modifications and extensions made in conjunction with the addition of new material to Bicategory. -(revision 472cb2268826)
BibTeX:
@article{Category3-AFP,
   author  = {Eugene W. Stark},
   title   = {Category Theory with Adjunctions and Limits},
   journal = {Archive of Formal Proofs},
   month   = jun,
   year    = 2016,
   note    = {\url{https://isa-afp.org/entries/Category3.html},
             Formal proof development},
   ISSN    = {2150-914x},
 }
License: BSD License
Depends on: HereditarilyFinite
Used by: MonoidalCategory

\ No newline at end of file diff --git a/web/entries/Comparison_Sort_Lower_Bound.html b/web/entries/Comparison_Sort_Lower_Bound.html --- a/web/entries/Comparison_Sort_Lower_Bound.html +++ b/web/entries/Comparison_Sort_Lower_Bound.html @@ -1,229 +1,229 @@ Lower bound on comparison-based sorting algorithms - Archive of Formal Proofs

 

 

 

 

 

 

Lower bound on comparison-based sorting algorithms

 

Title: Lower bound on comparison-based sorting algorithms
Author: - Manuel Eberl + Manuel Eberl
Submission date: 2017-03-15
Abstract:

This article contains a formal proof of the well-known fact that the number of comparisons that a comparison-based sorting algorithm needs to perform to sort a list of length n is at least log<sub>2</sub> (n!) in the worst case, i.e. Ω(n log n).

For this purpose, a shallow embedding for comparison-based sorting algorithms is defined: a sorting algorithm is a recursive datatype containing either a HOL function or a query of a comparison oracle with a continuation containing the remaining computation. This makes it possible to force the algorithm to use only comparisons and to track the number of comparisons made.

BibTeX:
@article{Comparison_Sort_Lower_Bound-AFP,
   author  = {Manuel Eberl},
   title   = {Lower bound on comparison-based sorting algorithms},
   journal = {Archive of Formal Proofs},
   month   = mar,
   year    = 2017,
   note    = {\url{https://isa-afp.org/entries/Comparison_Sort_Lower_Bound.html},
             Formal proof development},
   ISSN    = {2150-914x},
 }
License: BSD License
Depends on: Landau_Symbols, List-Index, Stirling_Formula
Used by: Quick_Sort_Cost, Treaps

\ No newline at end of file diff --git a/web/entries/Correctness_Algebras.html b/web/entries/Correctness_Algebras.html new file mode 100644 --- /dev/null +++ b/web/entries/Correctness_Algebras.html @@ -0,0 +1,209 @@ + + + + +Algebras for Iteration, Infinite Executions and Correctness of Sequential Computations - Archive of Formal Proofs + + + + + + + + + + + + + + + + + + + + + + + + +
+

 

+ + + +

 

+

 

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

 

+

 

+
+
+

 

+

Algebras + + for + + Iteration, + + Infinite + + Executions + + and + + Correctness + + of + + Sequential + + Computations + +

+

 

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Title:Algebras for Iteration, Infinite Executions and Correctness of Sequential Computations
+ Author: + + Walter Guttmann +
Submission date:2021-10-12
Abstract: +We study models of state-based non-deterministic sequential +computations and describe them using algebras. We propose algebras +that describe iteration for strict and non-strict computations. They +unify computation models which differ in the fixpoints used to +represent iteration. We propose algebras that describe the infinite +executions of a computation. They lead to a unified approximation +order and results that connect fixpoints in the approximation and +refinement orders. This unifies the semantics of recursion for a range +of computation models. We propose algebras that describe preconditions +and the effect of while-programs under postconditions. They unify +correctness statements in two dimensions: one statement applies in +various computation models to various correctness claims.
BibTeX: +
@article{Correctness_Algebras-AFP,
  author  = {Walter Guttmann},
  title   = {Algebras for Iteration, Infinite Executions and Correctness of Sequential Computations},
  journal = {Archive of Formal Proofs},
  month   = oct,
  year    = 2021,
  note    = {\url{https://isa-afp.org/entries/Correctness_Algebras.html},
            Formal proof development},
  ISSN    = {2150-914x},
}
License: BSD License
Depends on: MonoBoolTranAlgebra, Stone_Kleene_Relation_Algebras, Subset_Boolean_Algebras
\ No newline at end of file
diff --git a/web/entries/Count_Complex_Roots.html b/web/entries/Count_Complex_Roots.html
--- a/web/entries/Count_Complex_Roots.html
+++ b/web/entries/Count_Complex_Roots.html
@@ -1,219 +1,223 @@
Count the Number of Complex Roots - Archive of Formal Proofs

 

 

 

 

 

 

Count the Number of Complex Roots

 

Title: Count the Number of Complex Roots
Author: Wenda Li
Submission date: 2017-10-17
Abstract: Based on evaluating Cauchy indices through remainder sequences, this entry provides an effective procedure to count the number of complex roots (with multiplicity) of a polynomial within various shapes (e.g., rectangle, circle and half-plane). Potential applications of this entry include certified complex root isolation (of a polynomial) and testing the Routh-Hurwitz stability criterion (i.e., to check whether all the roots of some characteristic polynomial have negative real parts).
Change history:[2021-10-26]: resolved the roots-on-the-border problem in the rectangular case (revision 82a159e398cf).
BibTeX:
@article{Count_Complex_Roots-AFP,
   author  = {Wenda Li},
   title   = {Count the Number of Complex Roots},
   journal = {Archive of Formal Proofs},
   month   = oct,
   year    = 2017,
   note    = {\url{https://isa-afp.org/entries/Count_Complex_Roots.html},
             Formal proof development},
   ISSN    = {2150-914x},
 }
License: BSD License
Depends on: Polynomial_Interpolation, Sturm_Tarski, Winding_Number_Eval
Used by: Linear_Recurrences

\ No newline at end of file
diff --git a/web/entries/Density_Compiler.html b/web/entries/Density_Compiler.html
--- a/web/entries/Density_Compiler.html
+++ b/web/entries/Density_Compiler.html
@@ -1,253 +1,253 @@
A Verified Compiler for Probability Density Functions - Archive of Formal Proofs

 

 

 

 

 

 

A Verified Compiler for Probability Density Functions

 

Title: A Verified Compiler for Probability Density Functions
Authors: Manuel Eberl, Johannes Hölzl and Tobias Nipkow
Submission date: 2014-10-09
Abstract: Bhat et al. [TACAS 2013] developed an inductive compiler that computes density functions for probability spaces described by programs in a probabilistic functional language. In this work, we implement such a compiler for a modified version of this language within the theorem prover Isabelle and give a formal proof of its soundness w.r.t. the semantics of the source and target language. Together with Isabelle's code generation for inductive predicates, this yields a fully verified, executable density compiler. The proof is done in two steps: First, an abstract compiler working with abstract functions modelled directly in the theorem prover's logic is defined and proved sound. Then, this compiler is refined to a concrete version that returns a target-language expression.

An article with the same title and authors is published in the proceedings of ESOP 2015. A detailed presentation of this work can be found in the first author's master's thesis.

BibTeX:
@article{Density_Compiler-AFP,
   author  = {Manuel Eberl and Johannes Hölzl and Tobias Nipkow},
   title   = {A Verified Compiler for Probability Density Functions},
   journal = {Archive of Formal Proofs},
   month   = oct,
   year    = 2014,
   note    = {\url{https://isa-afp.org/entries/Density_Compiler.html},
             Formal proof development},
   ISSN    = {2150-914x},
 }
License: BSD License

\ No newline at end of file
diff --git a/web/entries/Descartes_Sign_Rule.html b/web/entries/Descartes_Sign_Rule.html
--- a/web/entries/Descartes_Sign_Rule.html
+++ b/web/entries/Descartes_Sign_Rule.html
@@ -1,229 +1,229 @@
Descartes' Rule of Signs - Archive of Formal Proofs

 

 

 

 

 

 

Descartes' Rule of Signs

 

Title: Descartes' Rule of Signs
Author: Manuel Eberl
Submission date: 2015-12-28
Abstract:

Descartes' Rule of Signs relates the number of positive real roots of a polynomial with the number of sign changes in its coefficient sequence.

Our proof follows the simple inductive proof given by Rob Arthan, which was also used by John Harrison in his HOL Light formalisation. We proved most of the lemmas for arbitrary linearly-ordered integral domains (e.g. integers, rationals, reals); the main result, however, requires the intermediate value theorem and was therefore only proven for real polynomials.
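As an illustration of the rule's statement (a hypothetical Python sketch, not part of the Isabelle entry): the number of positive real roots is at most the number of sign changes in the coefficient sequence, and differs from it by an even number.

```python
def sign_changes(coeffs):
    """Count sign changes in a coefficient sequence, ignoring zero coefficients."""
    signs = [c > 0 for c in coeffs if c != 0]
    return sum(1 for a, b in zip(signs, signs[1:]) if a != b)

# p(x) = (x - 1)(x - 2)(x - 3) = x^3 - 6x^2 + 11x - 6 has exactly 3 positive roots
print(sign_changes([1, -6, 11, -6]))  # 3 sign changes, matching the 3 positive roots
```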

BibTeX:
@article{Descartes_Sign_Rule-AFP,
   author  = {Manuel Eberl},
   title   = {Descartes' Rule of Signs},
   journal = {Archive of Formal Proofs},
   month   = dec,
   year    = 2015,
   note    = {\url{https://isa-afp.org/entries/Descartes_Sign_Rule.html},
             Formal proof development},
   ISSN    = {2150-914x},
 }
License: BSD License

\ No newline at end of file
diff --git a/web/entries/Dirichlet_L.html b/web/entries/Dirichlet_L.html
--- a/web/entries/Dirichlet_L.html
+++ b/web/entries/Dirichlet_L.html
@@ -1,218 +1,218 @@
Dirichlet L-Functions and Dirichlet's Theorem - Archive of Formal Proofs

 

 

 

 

 

 

Dirichlet L-Functions and Dirichlet's Theorem

 

Title: Dirichlet L-Functions and Dirichlet's Theorem
Author: Manuel Eberl
Submission date: 2017-12-21
Abstract:

This article provides a formalisation of Dirichlet characters and Dirichlet L-functions including proofs of their basic properties – most notably their analyticity, their areas of convergence, and their non-vanishing for ℜ(s) ≥ 1. All of this is built in a very high-level style using Dirichlet series. The proof of the non-vanishing follows a very short and elegant proof by Newman, which we attempt to reproduce faithfully in a similar level of abstraction in Isabelle.

This also leads to a relatively short proof of Dirichlet’s Theorem, which states that, if h and n are coprime, there are infinitely many primes p with p ≡ h (mod n).

BibTeX:
@article{Dirichlet_L-AFP,
   author  = {Manuel Eberl},
   title   = {Dirichlet L-Functions and Dirichlet's Theorem},
   journal = {Archive of Formal Proofs},
   month   = dec,
   year    = 2017,
   note    = {\url{https://isa-afp.org/entries/Dirichlet_L.html},
             Formal proof development},
   ISSN    = {2150-914x},
 }
License: BSD License
Depends on: Bertrands_Postulate, Dirichlet_Series, Finitely_Generated_Abelian_Groups, Landau_Symbols, Zeta_Function
Used by: Gauss_Sums

\ No newline at end of file
diff --git a/web/entries/Dirichlet_Series.html b/web/entries/Dirichlet_Series.html
--- a/web/entries/Dirichlet_Series.html
+++ b/web/entries/Dirichlet_Series.html
@@ -1,222 +1,222 @@
Dirichlet Series - Archive of Formal Proofs

 

 

 

 

 

 

Dirichlet Series

 

Title: Dirichlet Series
Author: Manuel Eberl
Submission date: 2017-10-12
Abstract: This entry is a formalisation of much of Chapters 2, 3, and 11 of Apostol's “Introduction to Analytic Number Theory”. This includes:
  • Definitions and basic properties for several number-theoretic functions (Euler's φ, Möbius μ, Liouville's λ, the divisor function σ, von Mangoldt's Λ)
  • Executable code for most of these functions, with the most efficient implementations based on the factoring algorithm by Thiemann et al.
  • Dirichlet products and formal Dirichlet series
  • Analytic results connecting convergent formal Dirichlet series to complex functions
  • Euler product expansions
  • Asymptotic estimates of number-theoretic functions including the density of squarefree integers and the average number of divisors of a natural number
These results are useful as a basis for developing more number-theoretic results, such as the Prime Number Theorem.
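To make one of the listed notions concrete, here is a naive Python sketch (illustrative only; the entry itself provides verified Isabelle code) of Euler's φ together with the classic Dirichlet-product identity that summing φ(d) over the divisors d of n yields n:

```python
from math import gcd

def phi(n):
    """Euler's totient: the number of 1 <= k <= n coprime to n (naive definition)."""
    return sum(1 for k in range(1, n + 1) if gcd(k, n) == 1)

# Dirichlet product of phi with the constant-1 function gives the identity function:
n = 36
print(sum(phi(d) for d in range(1, n + 1) if n % d == 0))  # 36
```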
BibTeX:
@article{Dirichlet_Series-AFP,
   author  = {Manuel Eberl},
   title   = {Dirichlet Series},
   journal = {Archive of Formal Proofs},
   month   = oct,
   year    = 2017,
   note    = {\url{https://isa-afp.org/entries/Dirichlet_Series.html},
             Formal proof development},
   ISSN    = {2150-914x},
 }
License: BSD License
Depends on: Euler_MacLaurin, Landau_Symbols, Polynomial_Factorization
Used by: Dirichlet_L, Gauss_Sums, Zeta_Function

\ No newline at end of file
diff --git a/web/entries/E_Transcendental.html b/web/entries/E_Transcendental.html
--- a/web/entries/E_Transcendental.html
+++ b/web/entries/E_Transcendental.html
@@ -1,216 +1,216 @@
The Transcendence of e - Archive of Formal Proofs

 

 

 

 

 

 

The Transcendence of e

 

Title: The Transcendence of e
Author: Manuel Eberl
Submission date: 2017-01-12
Abstract:

This work contains a proof that Euler's number e is transcendental. The proof follows the standard approach of assuming that e is algebraic and then using a specific integer polynomial to derive two inconsistent bounds, leading to a contradiction.

This kind of approach can be found in many different sources; this formalisation mostly follows a PlanetMath article by Roger Lipsett.

BibTeX:
@article{E_Transcendental-AFP,
   author  = {Manuel Eberl},
   title   = {The Transcendence of e},
   journal = {Archive of Formal Proofs},
   month   = jan,
   year    = 2017,
   note    = {\url{https://isa-afp.org/entries/E_Transcendental.html},
             Formal proof development},
   ISSN    = {2150-914x},
 }
License: BSD License
Used by: Pi_Transcendental, Zeta_3_Irrational

\ No newline at end of file
diff --git a/web/entries/Ergodic_Theory.html b/web/entries/Ergodic_Theory.html
--- a/web/entries/Ergodic_Theory.html
+++ b/web/entries/Ergodic_Theory.html
@@ -1,220 +1,220 @@
Ergodic Theory - Archive of Formal Proofs

 

 

 

 

 

 

Ergodic Theory

 

Title: Ergodic Theory
Author: Sebastien Gouezel
Contributor: Manuel Eberl
Submission date: 2015-12-01
Abstract: Ergodic theory is the branch of mathematics that studies the behaviour of measure preserving transformations, in finite or infinite measure. It interacts both with probability theory (mainly through measure theory) and with geometry, as a lot of interesting examples are of geometric origin. We implement the first definitions and theorems of ergodic theory, including notably the Poincaré recurrence theorem for finite measure preserving systems (together with the notion of conservativity in general), induced maps, Kac's theorem, the Birkhoff theorem (arguably the most important theorem in ergodic theory), and variations around it such as conservativity of the corresponding skew product, or the Atkinson lemma.
BibTeX:
@article{Ergodic_Theory-AFP,
   author  = {Sebastien Gouezel},
   title   = {Ergodic Theory},
   journal = {Archive of Formal Proofs},
   month   = dec,
   year    = 2015,
   note    = {\url{https://isa-afp.org/entries/Ergodic_Theory.html},
             Formal proof development},
   ISSN    = {2150-914x},
 }
License: BSD License
Used by: Gromov_Hyperbolicity, Laws_of_Large_Numbers, Lp

\ No newline at end of file
diff --git a/web/entries/Error_Function.html b/web/entries/Error_Function.html
--- a/web/entries/Error_Function.html
+++ b/web/entries/Error_Function.html
@@ -1,208 +1,208 @@
The Error Function - Archive of Formal Proofs

 

 

 

 

 

 

The Error Function

 

Title: The Error Function
Author: Manuel Eberl
Submission date: 2018-02-06
Abstract:

This entry provides the definitions and basic properties of the complex and real error function erf and the complementary error function erfc. Additionally, it gives their full asymptotic expansions.

BibTeX:
@article{Error_Function-AFP,
   author  = {Manuel Eberl},
   title   = {The Error Function},
   journal = {Archive of Formal Proofs},
   month   = feb,
   year    = 2018,
   note    = {\url{https://isa-afp.org/entries/Error_Function.html},
             Formal proof development},
   ISSN    = {2150-914x},
 }
License: BSD License
Depends on: Landau_Symbols

\ No newline at end of file
diff --git a/web/entries/Euler_MacLaurin.html b/web/entries/Euler_MacLaurin.html
--- a/web/entries/Euler_MacLaurin.html
+++ b/web/entries/Euler_MacLaurin.html
@@ -1,224 +1,224 @@
The Euler–MacLaurin Formula - Archive of Formal Proofs

 

 

 

 

 

 

The Euler–MacLaurin Formula

 

Title: The Euler–MacLaurin Formula
Author: Manuel Eberl
Submission date: 2017-03-10
Abstract:

The Euler-MacLaurin formula relates the value of a discrete sum to that of the corresponding integral in terms of the derivatives at the borders of the summation and a remainder term. Since the remainder term is often very small as the summation bounds grow, this can be used to compute asymptotic expansions for sums.
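For reference, the standard textbook form of the formula reads as follows (standard notation, assumed rather than taken from the entry; the $B_{2i}$ are the Bernoulli numbers and $R_m$ is the remainder term):

```latex
\sum_{k=a}^{b} f(k) = \int_a^b f(x)\,\mathrm{d}x + \frac{f(a) + f(b)}{2}
  + \sum_{i=1}^{m} \frac{B_{2i}}{(2i)!}\left(f^{(2i-1)}(b) - f^{(2i-1)}(a)\right) + R_m
```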

This entry contains a proof of this formula for functions from the reals to an arbitrary Banach space. Two variants of the formula are given: the standard textbook version and a variant outlined in Concrete Mathematics that is more useful for deriving asymptotic estimates.

As example applications, we use that formula to derive the full asymptotic expansion of the harmonic numbers and the sum of inverse squares.

BibTeX:
@article{Euler_MacLaurin-AFP,
   author  = {Manuel Eberl},
   title   = {The Euler–MacLaurin Formula},
   journal = {Archive of Formal Proofs},
   month   = mar,
   year    = 2017,
   note    = {\url{https://isa-afp.org/entries/Euler_MacLaurin.html},
             Formal proof development},
   ISSN    = {2150-914x},
 }
License: BSD License
Depends on: Bernoulli, Landau_Symbols
Used by: Dirichlet_Series, Zeta_Function

\ No newline at end of file
diff --git a/web/entries/Finitely_Generated_Abelian_Groups.html b/web/entries/Finitely_Generated_Abelian_Groups.html
--- a/web/entries/Finitely_Generated_Abelian_Groups.html
+++ b/web/entries/Finitely_Generated_Abelian_Groups.html
@@ -1,193 +1,193 @@
Finitely Generated Abelian Groups - Archive of Formal Proofs

 

 

 

 

 

 

Finitely Generated Abelian Groups

 

Title: Finitely Generated Abelian Groups
Authors: Joseph Thommes and Manuel Eberl
Submission date: 2021-07-07
Abstract: This article deals with the formalisation of some group-theoretic results including the fundamental theorem of finitely generated abelian groups characterising the structure of these groups as a uniquely determined product of cyclic groups. Both the invariant factor decomposition and the primary decomposition are covered. Additional work includes results about the direct product, the internal direct product and more group-theoretic lemmas.
BibTeX:
@article{Finitely_Generated_Abelian_Groups-AFP,
   author  = {Joseph Thommes and Manuel Eberl},
   title   = {Finitely Generated Abelian Groups},
   journal = {Archive of Formal Proofs},
   month   = jul,
   year    = 2021,
   note    = {\url{https://isa-afp.org/entries/Finitely_Generated_Abelian_Groups.html},
             Formal proof development},
   ISSN    = {2150-914x},
 }
License: BSD License
Used by: Dirichlet_L

\ No newline at end of file
diff --git a/web/entries/Fishburn_Impossibility.html b/web/entries/Fishburn_Impossibility.html
--- a/web/entries/Fishburn_Impossibility.html
+++ b/web/entries/Fishburn_Impossibility.html
@@ -1,228 +1,228 @@
The Incompatibility of Fishburn-Strategyproofness and Pareto-Efficiency - Archive of Formal Proofs

 

 

 

 

 

 

The Incompatibility of Fishburn-Strategyproofness and Pareto-Efficiency

 

Title: The Incompatibility of Fishburn-Strategyproofness and Pareto-Efficiency
Authors: Felix Brandt, Manuel Eberl, Christian Saile and Christian Stricker
Submission date: 2018-03-22
Abstract:

This formalisation contains the proof that there is no anonymous Social Choice Function for at least three agents and alternatives that fulfils both Pareto-Efficiency and Fishburn-Strategyproofness. It was derived from a proof of Brandt et al., which relies on an unverified translation of a fixed finite instance of the original problem to SAT. This Isabelle proof contains a machine-checked version of both the statement for exactly three agents and alternatives and the lifting to the general case.

BibTeX:
@article{Fishburn_Impossibility-AFP,
   author  = {Felix Brandt and Manuel Eberl and Christian Saile and Christian Stricker},
   title   = {The Incompatibility of Fishburn-Strategyproofness and Pareto-Efficiency},
   journal = {Archive of Formal Proofs},
   month   = mar,
   year    = 2018,
   note    = {\url{https://isa-afp.org/entries/Fishburn_Impossibility.html},
             Formal proof development},
   ISSN    = {2150-914x},
 }
License: BSD License
Depends on: Randomised_Social_Choice

\ No newline at end of file
diff --git a/web/entries/Fisher_Yates.html b/web/entries/Fisher_Yates.html
--- a/web/entries/Fisher_Yates.html
+++ b/web/entries/Fisher_Yates.html
@@ -1,210 +1,210 @@
Fisher–Yates shuffle - Archive of Formal Proofs

 

 

 

 

 

 

Fisher–Yates shuffle

 

Title: Fisher–Yates shuffle
Author: Manuel Eberl
Submission date: 2016-09-30
Abstract:

This work defines and proves the correctness of the Fisher–Yates algorithm for shuffling – i.e. producing a random permutation – of a list. The algorithm proceeds by traversing the list and in each step swapping the current element with a random element from the remaining list.
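The traversal-and-swap scheme described above can be sketched in Python (an illustration only; the entry's verified version is in Isabelle/HOL, and the function name here is made up):

```python
import random

def fisher_yates(xs, rng=random):
    """Shuffle xs in place: swap each position with a uniformly random later position."""
    for i in range(len(xs) - 1):
        j = rng.randrange(i, len(xs))  # uniform over the remaining positions i..len(xs)-1
        xs[i], xs[j] = xs[j], xs[i]
    return xs

print(sorted(fisher_yates(list(range(5)))))  # always a permutation: [0, 1, 2, 3, 4]
```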

BibTeX:
@article{Fisher_Yates-AFP,
   author  = {Manuel Eberl},
   title   = {Fisher–Yates shuffle},
   journal = {Archive of Formal Proofs},
   month   = sep,
   year    = 2016,
   note    = {\url{https://isa-afp.org/entries/Fisher_Yates.html},
             Formal proof development},
   ISSN    = {2150-914x},
 }
License: BSD License

\ No newline at end of file
diff --git a/web/entries/Formal_Puiseux_Series.html b/web/entries/Formal_Puiseux_Series.html
--- a/web/entries/Formal_Puiseux_Series.html
+++ b/web/entries/Formal_Puiseux_Series.html
@@ -1,192 +1,192 @@
Formal Puiseux Series - Archive of Formal Proofs

 

 

 

 

 

 

Formal Puiseux Series

 

Title: Formal Puiseux Series
Author: Manuel Eberl
Submission date: 2021-02-17
Abstract:

Formal Puiseux series are generalisations of formal power series and formal Laurent series that also allow for fractional exponents. They have the following general form: \[\sum_{i=N}^\infty a_{i/d} X^{i/d}\] where N is an integer and d is a positive integer.

This entry defines these series including their basic algebraic properties. Furthermore, it proves the Newton–Puiseux Theorem, namely that the Puiseux series over an algebraically closed field of characteristic 0 are also algebraically closed.

BibTeX:
@article{Formal_Puiseux_Series-AFP,
   author  = {Manuel Eberl},
   title   = {Formal Puiseux Series},
   journal = {Archive of Formal Proofs},
   month   = feb,
   year    = 2021,
   note    = {\url{https://isa-afp.org/entries/Formal_Puiseux_Series.html},
             Formal proof development},
   ISSN    = {2150-914x},
 }
License: BSD License
Depends on: Polynomial_Interpolation

\ No newline at end of file
diff --git a/web/entries/Furstenberg_Topology.html b/web/entries/Furstenberg_Topology.html
--- a/web/entries/Furstenberg_Topology.html
+++ b/web/entries/Furstenberg_Topology.html
@@ -1,215 +1,215 @@
Furstenberg's topology and his proof of the infinitude of primes - Archive of Formal Proofs

 

 

 

 

 

 

Furstenberg's topology and his proof of the infinitude of primes

 

Title: Furstenberg's topology and his proof of the infinitude of primes
Author: Manuel Eberl
Submission date: 2020-03-22
Abstract:

This article gives a formal version of Furstenberg's topological proof of the infinitude of primes. He defines a topology on the integers based on arithmetic progressions (or, equivalently, residue classes). Using some fairly obvious properties of this topology, the infinitude of primes is then easily obtained.

Apart from this, this topology is also fairly ‘nice’ in general: it is second countable, metrizable, and perfect. All of these (well-known) facts are formally proven, including an explicit metric for the topology given by Zulfeqarr.

BibTeX:
@article{Furstenberg_Topology-AFP,
   author  = {Manuel Eberl},
   title   = {Furstenberg's topology and his proof of the infinitude of primes},
   journal = {Archive of Formal Proofs},
   month   = mar,
   year    = 2020,
   note    = {\url{https://isa-afp.org/entries/Furstenberg_Topology.html},
             Formal proof development},
   ISSN    = {2150-914x},
 }
License: BSD License

\ No newline at end of file
diff --git a/web/entries/Gauss_Sums.html b/web/entries/Gauss_Sums.html
--- a/web/entries/Gauss_Sums.html
+++ b/web/entries/Gauss_Sums.html
@@ -1,211 +1,211 @@
Gauss Sums and the Pólya–Vinogradov Inequality - Archive of Formal Proofs

 

 

 

 

 

 

Gauss Sums and the Pólya–Vinogradov Inequality

 

Title: Gauss Sums and the Pólya–Vinogradov Inequality
Authors: Rodrigo Raya and Manuel Eberl
Submission date: 2019-12-10
Abstract:

This article provides a full formalisation of Chapter 8 of Apostol's Introduction to Analytic Number Theory. Subjects that are covered are:

  • periodic arithmetic functions and their finite Fourier series
  • (generalised) Ramanujan sums
  • Gauss sums and separable characters
  • induced moduli and primitive characters
  • the Pólya–Vinogradov inequality
BibTeX:
@article{Gauss_Sums-AFP,
   author  = {Rodrigo Raya and Manuel Eberl},
   title   = {Gauss Sums and the Pólya–Vinogradov Inequality},
   journal = {Archive of Formal Proofs},
   month   = dec,
   year    = 2019,
   note    = {\url{https://isa-afp.org/entries/Gauss_Sums.html},
             Formal proof development},
   ISSN    = {2150-914x},
 }
License: BSD License
Depends on: Dirichlet_L, Dirichlet_Series, Polynomial_Interpolation

\ No newline at end of file
diff --git a/web/entries/Gaussian_Integers.html b/web/entries/Gaussian_Integers.html
--- a/web/entries/Gaussian_Integers.html
+++ b/web/entries/Gaussian_Integers.html
@@ -1,201 +1,201 @@
Gaussian Integers - Archive of Formal Proofs

 

 

 

 

 

 

Gaussian Integers

 

Title: Gaussian Integers
Author: Manuel Eberl
Submission date: 2020-04-24
Abstract:

The Gaussian integers are the subring ℤ[i] of the complex numbers, i.e. the ring of all complex numbers with integral real and imaginary part. This article provides a definition of this ring as well as proofs of various basic properties, such as the fact that they form a Euclidean ring, and a full classification of their primes. An executable (albeit not very efficient) factorisation algorithm is also provided.

Lastly, this Gaussian integer formalisation is used in two short applications:

  1. The characterisation of all positive integers that can be written as sums of two squares
  2. Euclid's formula for primitive Pythagorean triples

While elementary proofs for both of these are already available in the AFP, the theory of Gaussian integers provides more concise proofs and a more high-level view.
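The first application can be checked by brute force for small numbers; the following Python sketch (hypothetical helper names, not from the entry) compares a direct search with the classical characterisation that n is a sum of two squares iff every prime p ≡ 3 (mod 4) divides n to an even power:

```python
def is_sum_of_two_squares(n):
    """Search for a, b >= 0 with a^2 + b^2 = n."""
    a = 0
    while a * a <= n:
        b = int((n - a * a) ** 0.5)
        if a * a + b * b == n:
            return True
        a += 1
    return False

def classification_predicts(n):
    """True iff every prime factor p congruent to 3 mod 4 occurs to an even power in n."""
    m, p = n, 2
    while p * p <= m:
        if m % p == 0:
            e = 0
            while m % p == 0:
                m //= p
                e += 1
            if p % 4 == 3 and e % 2 == 1:
                return False
        p += 1
    return not (m > 1 and m % 4 == 3)

# the two criteria agree on all small positive integers
assert all(is_sum_of_two_squares(n) == classification_predicts(n) for n in range(1, 200))
```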

BibTeX:
@article{Gaussian_Integers-AFP,
   author  = {Manuel Eberl},
   title   = {Gaussian Integers},
   journal = {Archive of Formal Proofs},
   month   = apr,
   year    = 2020,
   note    = {\url{https://isa-afp.org/entries/Gaussian_Integers.html},
             Formal proof development},
   ISSN    = {2150-914x},
 }
License: BSD License
Depends on: Polynomial_Factorization

\ No newline at end of file
diff --git a/web/entries/Hermite_Lindemann.html b/web/entries/Hermite_Lindemann.html
--- a/web/entries/Hermite_Lindemann.html
+++ b/web/entries/Hermite_Lindemann.html
@@ -1,210 +1,210 @@
The Hermite–Lindemann–Weierstraß Transcendence Theorem - Archive of Formal Proofs

 

 

 

 

 

 

The Hermite–Lindemann–Weierstraß Transcendence Theorem

 

Title: The Hermite–Lindemann–Weierstraß Transcendence Theorem
Author: Manuel Eberl
Submission date: 2021-03-03
Abstract:

This article provides a formalisation of the Hermite-Lindemann-Weierstraß Theorem (also known as simply Hermite-Lindemann or Lindemann-Weierstraß). This theorem is one of the crowning achievements of 19th century number theory.

The theorem states that if $\alpha_1, \ldots, \alpha_n\in\mathbb{C}$ are algebraic numbers that are linearly independent over $\mathbb{Z}$, then $e^{\alpha_1},\ldots,e^{\alpha_n}$ are algebraically independent over $\mathbb{Q}$.

Like the previous formalisation in Coq by Bernard, I proceeded by formalising Baker's version of the theorem and proof and then deriving the original one from that. Baker's version states that for any algebraic numbers $\beta_1, \ldots, \beta_n\in\mathbb{C}$ and distinct algebraic numbers $\alpha_1, \ldots, \alpha_n\in\mathbb{C}$, we have $\beta_1 e^{\alpha_1} + \ldots + \beta_n e^{\alpha_n} = 0$ if and only if all the $\beta_i$ are zero.

This has a number of direct corollaries, e.g.:

  • $e$ and $\pi$ are transcendental
  • $e^z$, $\sin z$, $\tan z$, etc. are transcendental for algebraic $z\in\mathbb{C}\setminus\{0\}$
  • $\ln z$ is transcendental for algebraic $z\in\mathbb{C}\setminus\{0, 1\}$
BibTeX:
@article{Hermite_Lindemann-AFP,
   author  = {Manuel Eberl},
   title   = {The Hermite–Lindemann–Weierstraß Transcendence Theorem},
   journal = {Archive of Formal Proofs},
   month   = mar,
   year    = 2021,
   note    = {\url{https://isa-afp.org/entries/Hermite_Lindemann.html},
             Formal proof development},
   ISSN    = {2150-914x},
 }
License: BSD License
Depends on: Algebraic_Numbers, Pi_Transcendental, Power_Sum_Polynomials

\ No newline at end of file
diff --git a/web/entries/IMO2019.html b/web/entries/IMO2019.html
--- a/web/entries/IMO2019.html
+++ b/web/entries/IMO2019.html
@@ -1,212 +1,212 @@
Selected Problems from the International Mathematical Olympiad 2019 - Archive of Formal Proofs

 

 

 

 

 

 

Selected Problems from the International Mathematical Olympiad 2019

 

Title: Selected Problems from the International Mathematical Olympiad 2019
Author: Manuel Eberl
Submission date: 2019-08-05
Abstract:

This entry contains formalisations of the answers to three of the six problems of the International Mathematical Olympiad 2019, namely Q1, Q4, and Q5.

The reason why these problems were chosen is that they are particularly amenable to formalisation: they can be solved with minimal use of libraries. The remaining three concern geometry and graph theory, which, in the author's opinion, are more difficult to formalise or require a more complex library, respectively.

BibTeX:
@article{IMO2019-AFP,
   author  = {Manuel Eberl},
   title   = {Selected Problems from the International Mathematical Olympiad 2019},
   journal = {Archive of Formal Proofs},
   month   = aug,
   year    = 2019,
   note    = {\url{https://isa-afp.org/entries/IMO2019.html},
             Formal proof development},
   ISSN    = {2150-914x},
 }
License: BSD License
Depends on: Prime_Distribution_Elementary

\ No newline at end of file
diff --git a/web/entries/Lambert_W.html b/web/entries/Lambert_W.html
--- a/web/entries/Lambert_W.html
+++ b/web/entries/Lambert_W.html
@@ -1,220 +1,220 @@
The Lambert W Function on the Reals - Archive of Formal Proofs

 

 

 

 

 

 

The Lambert W Function on the Reals

 

Title: The Lambert W Function on the Reals
Author: Manuel Eberl
Submission date: 2020-04-24
Abstract:

The Lambert W function is a multi-valued function defined as the inverse of x ↦ x eˣ. Besides numerous applications in combinatorics, physics, and engineering, it also frequently occurs when solving equations containing both eˣ and x, or both x and log x.

This article provides a definition of the two real-valued branches W₀(x) and W₋₁(x) and proves various properties such as basic identities and inequalities, monotonicity, differentiability, asymptotic expansions, and the MacLaurin series of W₀(x) at x = 0.
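Numerically, the principal branch can be approximated with a few Newton steps on w·eʷ − x; the sketch below is an independent illustration (the formalised entry contains no executable code, and the function name is made up):

```python
from math import exp

def lambert_w0(x, tol=1e-12):
    """Approximate the principal branch W0(x) for x >= 0 via Newton's method."""
    w = 0.0 if x < 1 else 1.0  # crude starting point on the W0 branch
    for _ in range(100):
        e = exp(w)
        step = (w * e - x) / (e * (w + 1))  # Newton step for f(w) = w*exp(w) - x
        w -= step
        if abs(step) < tol:
            return w
    return w

w = lambert_w0(1.0)  # the omega constant, roughly 0.5671; satisfies w * exp(w) = 1
```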

BibTeX:
@article{Lambert_W-AFP,
   author  = {Manuel Eberl},
   title   = {The Lambert W Function on the Reals},
   journal = {Archive of Formal Proofs},
   month   = apr,
   year    = 2020,
   note    = {\url{https://isa-afp.org/entries/Lambert_W.html},
             Formal proof development},
   ISSN    = {2150-914x},
 }
License: BSD License
Depends on: Bernoulli, Stirling_Formula

\ No newline at end of file
diff --git a/web/entries/Landau_Symbols.html b/web/entries/Landau_Symbols.html
--- a/web/entries/Landau_Symbols.html
+++ b/web/entries/Landau_Symbols.html
@@ -1,217 +1,217 @@
Landau Symbols - Archive of Formal Proofs

 

 

 

 

 

 

Landau Symbols

 

Title: Landau Symbols
Author: Manuel Eberl
Submission date: 2015-07-14
Abstract: This entry provides Landau symbols to describe and reason about the asymptotic growth of functions for sufficiently large inputs. A number of simplification procedures are provided for additional convenience: cancelling of dominated terms in sums under a Landau symbol, cancelling of common factors in products, and a decision procedure for Landau expressions containing products of powers of functions like x, ln(x), ln(ln(x)) etc.
BibTeX:
@article{Landau_Symbols-AFP,
   author  = {Manuel Eberl},
   title   = {Landau Symbols},
   journal = {Archive of Formal Proofs},
   month   = jul,
   year    = 2015,
   note    = {\url{https://isa-afp.org/entries/Landau_Symbols.html},
             Formal proof development},
   ISSN    = {2150-914x},
 }
License: BSD License
Used by: Akra_Bazzi, Catalan_Numbers, Comparison_Sort_Lower_Bound, CryptHOL, Dirichlet_L, Dirichlet_Series, Error_Function, Euler_MacLaurin, Quick_Sort_Cost, Random_BSTs, Stirling_Formula

\ No newline at end of file
diff --git a/web/entries/Laws_of_Large_Numbers.html b/web/entries/Laws_of_Large_Numbers.html
--- a/web/entries/Laws_of_Large_Numbers.html
+++ b/web/entries/Laws_of_Large_Numbers.html
@@ -1,212 +1,212 @@
The Laws of Large Numbers - Archive of Formal Proofs

 

 

 

 

 

 

The Laws of Large Numbers

 

Title: The Laws of Large Numbers
Author: Manuel Eberl
Submission date: 2021-02-10
Abstract:

The Law of Large Numbers states that, informally, if one performs a random experiment $X$ many times and takes the average of the results, that average will be very close to the expected value $E[X]$.

More formally, let $(X_i)_{i\in\mathbb{N}}$ be a sequence of independently identically distributed random variables whose expected value $E[X_1]$ exists. Denote the running average of $X_1, \ldots, X_n$ as $\overline{X}_n$. Then:

  • The Weak Law of Large Numbers states that $\overline{X}_{n} \longrightarrow E[X_1]$ in probability for $n\to\infty$, i.e. $\mathcal{P}(|\overline{X}_{n} - E[X_1]| > \varepsilon) \longrightarrow 0$ as $n\to\infty$ for any $\varepsilon > 0$.
  • The Strong Law of Large Numbers states that $\overline{X}_{n} \longrightarrow E[X_1]$ almost surely for $n\to\infty$, i.e. $\mathcal{P}(\overline{X}_{n} \longrightarrow E[X_1]) = 1$.

In this entry, I formally prove the strong law and from it the weak law. The approach used for the proof of the strong law is a particularly quick and slick one based on ergodic theory, which was formalised by Gouëzel in another AFP entry.
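A quick numerical illustration of the weak law (a simulation sketch with assumed parameters, unrelated to the formal proof itself):

```python
import random

random.seed(42)  # fixed seed so the run is reproducible
n = 100_000
xs = [random.random() for _ in range(n)]  # i.i.d. Uniform(0, 1) samples, E[X] = 1/2
avg = sum(xs) / n
print(abs(avg - 0.5) < 0.01)  # True: the running average is close to E[X]
```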

BibTeX:
@article{Laws_of_Large_Numbers-AFP,
   author  = {Manuel Eberl},
   title   = {The Laws of Large Numbers},
   journal = {Archive of Formal Proofs},
   month   = feb,
   year    = 2021,
   note    = {\url{https://isa-afp.org/entries/Laws_of_Large_Numbers.html},
             Formal proof development},
   ISSN    = {2150-914x},
 }
License: BSD License
Depends on: Ergodic_Theory

\ No newline at end of file
diff --git a/web/entries/Linear_Recurrences.html b/web/entries/Linear_Recurrences.html
--- a/web/entries/Linear_Recurrences.html
+++ b/web/entries/Linear_Recurrences.html
@@ -1,219 +1,219 @@
Linear Recurrences - Archive of Formal Proofs

 

 

 

 

 

 

Linear Recurrences

 

Title: Linear Recurrences
Author: Manuel Eberl
Submission date: 2017-10-12
Abstract:

Linear recurrences with constant coefficients are an interesting class of recurrence equations that can be solved explicitly. The most famous example is certainly the Fibonacci numbers with the equation f(n) = f(n-1) + f(n-2) and the quite non-obvious closed form (φ^n - (-φ)^(-n)) / √5, where φ is the golden ratio.

In this work, I build on existing tools in Isabelle – such as formal power series and polynomial factorisation algorithms – to develop a theory of these recurrences and derive a fully executable solver for them that can be exported to programming languages like Haskell.
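For instance, the Fibonacci recurrence above can be checked against its closed form directly (a plain Python sketch, unrelated to the verified Haskell export):

```python
from math import sqrt

def fib(n):
    """Iterate the recurrence f(n) = f(n-1) + f(n-2), f(0) = 0, f(1) = 1."""
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

def fib_closed(n):
    """Closed form (phi^n - (-phi)^(-n)) / sqrt(5), rounded to the nearest
    integer to absorb floating-point error."""
    phi = (1 + sqrt(5)) / 2
    return round((phi**n - (-phi)**(-n)) / sqrt(5))
```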

BibTeX:
@article{Linear_Recurrences-AFP,
   author  = {Manuel Eberl},
   title   = {Linear Recurrences},
   journal = {Archive of Formal Proofs},
   month   = oct,
   year    = 2017,
   note    = {\url{https://isa-afp.org/entries/Linear_Recurrences.html},
             Formal proof development},
   ISSN    = {2150-914x},
 }
License: BSD License
Depends on: Count_Complex_Roots, Polynomial_Factorization

\ No newline at end of file diff --git a/web/entries/Liouville_Numbers.html b/web/entries/Liouville_Numbers.html --- a/web/entries/Liouville_Numbers.html +++ b/web/entries/Liouville_Numbers.html @@ -1,229 +1,229 @@ Liouville numbers - Archive of Formal Proofs

 

 

 

 

 

 

Liouville numbers

 

Title: Liouville numbers
Author: Manuel Eberl
Submission date: 2015-12-28
Abstract:

Liouville numbers are a class of transcendental numbers that can be approximated particularly well with rational numbers. Historically, they were the first numbers whose transcendence was proven.

In this entry, we define the concept of Liouville numbers as well as the standard construction to obtain Liouville numbers (including Liouville's constant) and we prove their most important properties: irrationality and transcendence.

The proof is very elementary and requires only standard arithmetic, the Mean Value Theorem for polynomials, and the boundedness of polynomials on compact intervals.
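Liouville's constant is the series ∑_{k≥1} 10^(-k!); its partial sums, computed exactly below with Python's `fractions` purely as an illustration, are the rational approximations that witness the Liouville property:

```python
from fractions import Fraction
from math import factorial

def liouville_partial(n):
    """Exact n-th partial sum of Liouville's constant sum_{k>=1} 10**(-k!).
    The denominator is 10**(n!), and the omitted tail is astronomically
    smaller, which is what makes the constant a Liouville number."""
    return sum(Fraction(1, 10 ** factorial(k)) for k in range(1, n + 1))
```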

BibTeX:
@article{Liouville_Numbers-AFP,
   author  = {Manuel Eberl},
   title   = {Liouville numbers},
   journal = {Archive of Formal Proofs},
   month   = dec,
   year    = 2015,
   note    = {\url{https://isa-afp.org/entries/Liouville_Numbers.html},
             Formal proof development},
   ISSN    = {2150-914x},
 }
License: BSD License

\ No newline at end of file diff --git a/web/entries/List_Inversions.html b/web/entries/List_Inversions.html --- a/web/entries/List_Inversions.html +++ b/web/entries/List_Inversions.html @@ -1,207 +1,207 @@ The Inversions of a List - Archive of Formal Proofs

 

 

 

 

 

 

The Inversions of a List

 

Title: The Inversions of a List
Author: Manuel Eberl
Submission date: 2019-02-01
Abstract:

This entry defines the set of inversions of a list, i.e. the pairs of indices that violate sortedness. It also proves the correctness of the well-known O(n log n) divide-and-conquer algorithm to compute the number of inversions.
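The divide-and-conquer idea can be sketched in a few lines: count inversions in each half recursively and count the cross inversions during the merge (an informal Python rendering, not the verified algorithm itself):

```python
def count_inversions(xs):
    """Return (sorted copy, number of inversions) of xs in O(n log n),
    counting cross inversions while merging two sorted halves."""
    if len(xs) <= 1:
        return list(xs), 0
    mid = len(xs) // 2
    left, a = count_inversions(xs[:mid])
    right, b = count_inversions(xs[mid:])
    merged, cross, i, j = [], 0, 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            # left[i:] are all > right[j]: that is len(left) - i inversions
            merged.append(right[j])
            j += 1
            cross += len(left) - i
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged, a + b + cross
```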

BibTeX:
@article{List_Inversions-AFP,
   author  = {Manuel Eberl},
   title   = {The Inversions of a List},
   journal = {Archive of Formal Proofs},
   month   = feb,
   year    = 2019,
   note    = {\url{https://isa-afp.org/entries/List_Inversions.html},
             Formal proof development},
   ISSN    = {2150-914x},
 }
License: BSD License

\ No newline at end of file diff --git a/web/entries/MFODL_Monitor_Optimized.html b/web/entries/MFODL_Monitor_Optimized.html --- a/web/entries/MFODL_Monitor_Optimized.html +++ b/web/entries/MFODL_Monitor_Optimized.html @@ -1,242 +1,247 @@ Formalization of an Optimized Monitoring Algorithm for Metric First-Order Dynamic Logic with Aggregations - Archive of Formal Proofs

 

 

 

 

 

 

Formalization of an Optimized Monitoring Algorithm for Metric First-Order Dynamic Logic with Aggregations

 

Title: Formalization of an Optimized Monitoring Algorithm for Metric First-Order Dynamic Logic with Aggregations
Authors: Thibault Dardinier, Lukas Heimes, Martin Raszyk (martin /dot/ raszyk /at/ inf /dot/ ethz /dot/ ch), Joshua Schneider and Dmitriy Traytel
Submission date: 2020-04-09
Abstract: A monitor is a runtime verification tool that solves the following problem: Given a stream of time-stamped events and a policy formulated in a specification language, decide whether the policy is satisfied at every point in the stream. We verify the correctness of an executable monitor for specifications given as formulas in metric first-order dynamic logic (MFODL), which combines the features of metric first-order temporal logic (MFOTL) and metric dynamic logic. Thus, MFODL supports real-time constraints, first-order parameters, and regular expressions. Additionally, the monitor supports aggregation operations such as count and sum. This formalization, which is described in a forthcoming paper at IJCAR 2020, significantly extends previous work on a verified monitor for MFOTL. Apart from the addition of regular expressions and aggregations, we implemented multi-way joins and a specialized sliding window algorithm to further optimize the monitor.
Change history: [2021-10-19]: corrected a mistake in the calculation of median aggregations (reported by Nicolas Kaletsch, revision 02b14c9bf3da)
BibTeX:
@article{MFODL_Monitor_Optimized-AFP,
   author  = {Thibault Dardinier and Lukas Heimes and Martin Raszyk and Joshua Schneider and Dmitriy Traytel},
   title   = {Formalization of an Optimized Monitoring Algorithm for Metric First-Order Dynamic Logic with Aggregations},
   journal = {Archive of Formal Proofs},
   month   = apr,
   year    = 2020,
   note    = {\url{https://isa-afp.org/entries/MFODL_Monitor_Optimized.html},
             Formal proof development},
   ISSN    = {2150-914x},
 }
License: BSD License
Depends on: Generic_Join, IEEE_Floating_Point, MFOTL_Monitor

\ No newline at end of file diff --git a/web/entries/Mason_Stothers.html b/web/entries/Mason_Stothers.html --- a/web/entries/Mason_Stothers.html +++ b/web/entries/Mason_Stothers.html @@ -1,224 +1,224 @@ The Mason–Stothers Theorem - Archive of Formal Proofs

 

 

 

 

 

 

The Mason–Stothers Theorem

 

Title: The Mason–Stothers Theorem
Author: Manuel Eberl
Submission date: 2017-12-21
Abstract:

This article provides a formalisation of Snyder’s simple and elegant proof of the Mason–Stothers theorem, which is the polynomial analogue of the famous abc Conjecture for integers. Remarkably, Snyder found this very elegant proof when he was still a high-school student.

In short, the statement of the theorem is that three non-zero coprime polynomials A, B, C over a field which sum to 0 and do not all have vanishing derivatives fulfil max{deg(A), deg(B), deg(C)} < deg(rad(ABC)), where rad(P) denotes the radical of P, i. e. the product of all distinct irreducible factors of P.

This theorem also implies a kind of polynomial analogue of Fermat’s Last Theorem for polynomials: except for trivial cases, A^n + B^n + C^n = 0 implies n ≤ 2 for coprime polynomials A, B, C over a field.

BibTeX:
@article{Mason_Stothers-AFP,
   author  = {Manuel Eberl},
   title   = {The Mason–Stothers Theorem},
   journal = {Archive of Formal Proofs},
   month   = dec,
   year    = 2017,
   note    = {\url{https://isa-afp.org/entries/Mason_Stothers.html},
             Formal proof development},
   ISSN    = {2150-914x},
 }
License: BSD License

\ No newline at end of file diff --git a/web/entries/Median_Of_Medians_Selection.html b/web/entries/Median_Of_Medians_Selection.html --- a/web/entries/Median_Of_Medians_Selection.html +++ b/web/entries/Median_Of_Medians_Selection.html @@ -1,212 +1,212 @@ The Median-of-Medians Selection Algorithm - Archive of Formal Proofs

 

 

 

 

 

 

The Median-of-Medians Selection Algorithm

 

Title: The Median-of-Medians Selection Algorithm
Author: Manuel Eberl
Submission date: 2017-12-21
Abstract:

This entry provides an executable functional implementation of the Median-of-Medians algorithm for selecting the k-th smallest element of an unsorted list deterministically in linear time. The size bounds for the recursive call that lead to the linear upper bound on the run-time of the algorithm are also proven.
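The pivot-selection scheme can be sketched as follows (an illustrative Python version; the entry's verified functional implementation differs in detail):

```python
def select(xs, k):
    """k-th smallest element (0-indexed) of xs in worst-case linear time,
    choosing the pivot as the median of the medians of groups of five."""
    if len(xs) <= 5:
        return sorted(xs)[k]
    # Median of each group of five, then recurse to pick the pivot.
    medians = [sorted(xs[i:i + 5])[len(xs[i:i + 5]) // 2]
               for i in range(0, len(xs), 5)]
    pivot = select(medians, len(medians) // 2)
    lo = [x for x in xs if x < pivot]
    eq = [x for x in xs if x == pivot]
    hi = [x for x in xs if x > pivot]
    if k < len(lo):
        return select(lo, k)
    if k < len(lo) + len(eq):
        return pivot
    return select(hi, k - len(lo) - len(eq))
```

The three-way partition guarantees both recursive calls shrink strictly, and the median-of-medians pivot yields the size bounds that give the linear run time.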

BibTeX:
@article{Median_Of_Medians_Selection-AFP,
   author  = {Manuel Eberl},
   title   = {The Median-of-Medians Selection Algorithm},
   journal = {Archive of Formal Proofs},
   month   = dec,
   year    = 2017,
   note    = {\url{https://isa-afp.org/entries/Median_Of_Medians_Selection.html},
             Formal proof development},
   ISSN    = {2150-914x},
 }
License: BSD License
Used by: KD_Tree

\ No newline at end of file diff --git a/web/entries/Mersenne_Primes.html b/web/entries/Mersenne_Primes.html --- a/web/entries/Mersenne_Primes.html +++ b/web/entries/Mersenne_Primes.html @@ -1,207 +1,207 @@ Mersenne primes and the Lucas–Lehmer test - Archive of Formal Proofs

 

 

 

 

 

 

Mersenne primes and the Lucas–Lehmer test

 

Title: Mersenne primes and the Lucas–Lehmer test
Author: Manuel Eberl
Submission date: 2020-01-17
Abstract:

This article provides formal proofs of basic properties of Mersenne numbers, i. e. numbers of the form 2^n - 1, and especially of Mersenne primes.

In particular, an efficient, verified, and executable version of the Lucas–Lehmer test is developed. This test decides primality for Mersenne numbers in time polynomial in n.
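The test itself is a short modular recurrence; a plain Python sketch (the verified Isabelle version is, of course, the authoritative one):

```python
def lucas_lehmer(p):
    """Lucas-Lehmer test: for odd prime p, M_p = 2**p - 1 is prime iff
    s_{p-2} = 0, where s_0 = 4 and s_{k+1} = s_k**2 - 2 (mod M_p)."""
    if p == 2:
        return True  # M_2 = 3 is prime
    m = 2**p - 1
    s = 4
    for _ in range(p - 2):
        s = (s * s - 2) % m
    return s == 0
```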

BibTeX:
@article{Mersenne_Primes-AFP,
   author  = {Manuel Eberl},
   title   = {Mersenne primes and the Lucas–Lehmer test},
   journal = {Archive of Formal Proofs},
   month   = jan,
   year    = 2020,
   note    = {\url{https://isa-afp.org/entries/Mersenne_Primes.html},
             Formal proof development},
   ISSN    = {2150-914x},
 }
License: BSD License
Depends on: Native_Word, Pell, Probabilistic_Prime_Tests

\ No newline at end of file diff --git a/web/entries/Minkowskis_Theorem.html b/web/entries/Minkowskis_Theorem.html --- a/web/entries/Minkowskis_Theorem.html +++ b/web/entries/Minkowskis_Theorem.html @@ -1,218 +1,218 @@ Minkowski's Theorem - Archive of Formal Proofs

 

 

 

 

 

 

Minkowski's Theorem

 

Title: Minkowski's Theorem
Author: Manuel Eberl
Submission date: 2017-07-13
Abstract:

Minkowski's theorem relates a subset of ℝ^n, the Lebesgue measure, and the integer lattice ℤ^n: It states that any convex subset of ℝ^n that is symmetric about the origin and has volume greater than 2^n contains at least one lattice point from ℤ^n\{0}, i. e. a non-zero point with integer coefficients.

A related theorem which directly implies this is Blichfeldt's theorem, which states that any subset of ℝn with a volume greater than 1 contains two different points whose difference vector has integer components.

The entry contains a proof of both theorems.

BibTeX:
@article{Minkowskis_Theorem-AFP,
   author  = {Manuel Eberl},
   title   = {Minkowski's Theorem},
   journal = {Archive of Formal Proofs},
   month   = jul,
   year    = 2017,
   note    = {\url{https://isa-afp.org/entries/Minkowskis_Theorem.html},
             Formal proof development},
   ISSN    = {2150-914x},
 }
License: BSD License

\ No newline at end of file diff --git a/web/entries/Monad_Normalisation.html b/web/entries/Monad_Normalisation.html --- a/web/entries/Monad_Normalisation.html +++ b/web/entries/Monad_Normalisation.html @@ -1,217 +1,217 @@ Monad normalisation - Archive of Formal Proofs

 

 

 

 

 

 

Monad normalisation

 

Title: Monad normalisation
Authors: Joshua Schneider, Manuel Eberl and Andreas Lochbihler
Submission date: 2017-05-05
Abstract: The usual monad laws can directly be used as rewrite rules for Isabelle’s simplifier to normalise monadic HOL terms and decide equivalences. In a commutative monad, however, the commutativity law is a higher-order permutative rewrite rule that makes the simplifier loop. This AFP entry implements a simproc that normalises monadic expressions in commutative monads using ordered rewriting. The simproc can also permute computations across control operators like if and case.
BibTeX:
@article{Monad_Normalisation-AFP,
   author  = {Joshua Schneider and Manuel Eberl and Andreas Lochbihler},
   title   = {Monad normalisation},
   journal = {Archive of Formal Proofs},
   month   = may,
   year    = 2017,
   note    = {\url{https://isa-afp.org/entries/Monad_Normalisation.html},
             Formal proof development},
   ISSN    = {2150-914x},
 }
License: BSD License
Used by: CryptHOL, Randomised_BSTs, Skip_Lists

\ No newline at end of file diff --git a/web/entries/MonoBoolTranAlgebra.html b/web/entries/MonoBoolTranAlgebra.html --- a/web/entries/MonoBoolTranAlgebra.html +++ b/web/entries/MonoBoolTranAlgebra.html @@ -1,258 +1,260 @@ Algebra of Monotonic Boolean Transformers - Archive of Formal Proofs

 

 

 

 

 

 

Algebra of Monotonic Boolean Transformers

 

Title: Algebra of Monotonic Boolean Transformers
Author: Viorel Preoteasa (viorel /dot/ preoteasa /at/ aalto /dot/ fi)
Submission date: 2011-09-22
Abstract: Algebras of imperative programming languages have been successful in reasoning about programs. In general an algebra of programs is an algebraic structure with programs as elements and with program compositions (sequential composition, choice, skip) as algebra operations. Various versions of these algebras were introduced to model partial correctness, total correctness, refinement, demonic choice, and other aspects. We formalize here an algebra which can be used to model total correctness, refinement, demonic and angelic choice. The basic model of this algebra is the set of monotonic Boolean transformers (monotonic functions from a Boolean algebra to itself).
BibTeX:
@article{MonoBoolTranAlgebra-AFP,
   author  = {Viorel Preoteasa},
   title   = {Algebra of Monotonic Boolean Transformers},
   journal = {Archive of Formal Proofs},
   month   = sep,
   year    = 2011,
   note    = {\url{https://isa-afp.org/entries/MonoBoolTranAlgebra.html},
             Formal proof development},
   ISSN    = {2150-914x},
 }
License: BSD License
Depends on: LatticeProperties
Used by: Correctness_Algebras

\ No newline at end of file diff --git a/web/entries/Myhill-Nerode.html b/web/entries/Myhill-Nerode.html --- a/web/entries/Myhill-Nerode.html +++ b/web/entries/Myhill-Nerode.html @@ -1,272 +1,272 @@ The Myhill-Nerode Theorem Based on Regular Expressions - Archive of Formal Proofs

 

 

 

 

 

 

The Myhill-Nerode Theorem Based on Regular Expressions

 

Title: The Myhill-Nerode Theorem Based on Regular Expressions
Authors: Chunhan Wu, Xingyuan Zhang and Christian Urban
Contributor: Manuel Eberl
Submission date: 2011-08-26
Abstract: There are many proofs of the Myhill-Nerode theorem using automata. In this library we give a proof entirely based on regular expressions, since regularity of languages can be conveniently defined using regular expressions (it is more painful in HOL to define regularity in terms of automata). We prove the first direction of the Myhill-Nerode theorem by solving equational systems that involve regular expressions. For the second direction we give two proofs: one using tagging-functions and another using partial derivatives. We also establish various closure properties of regular languages. Most details of the theories are described in our ITP 2011 paper.
BibTeX:
@article{Myhill-Nerode-AFP,
   author  = {Chunhan Wu and Xingyuan Zhang and Christian Urban},
   title   = {The Myhill-Nerode Theorem Based on Regular Expressions},
   journal = {Archive of Formal Proofs},
   month   = aug,
   year    = 2011,
   note    = {\url{https://isa-afp.org/entries/Myhill-Nerode.html},
             Formal proof development},
   ISSN    = {2150-914x},
 }
License: BSD License
Depends on: Abstract-Rewriting, Open_Induction, Regular-Sets, Well_Quasi_Orders

\ No newline at end of file diff --git a/web/entries/Pell.html b/web/entries/Pell.html --- a/web/entries/Pell.html +++ b/web/entries/Pell.html @@ -1,230 +1,230 @@ Pell's Equation - Archive of Formal Proofs

 

 

 

 

 

 

Pell's Equation

 

Title: Pell's Equation
Author: Manuel Eberl
Submission date: 2018-06-23
Abstract:

This article gives the basic theory of Pell's equation x^2 = 1 + D y^2, where D ∈ ℕ is a parameter and x, y are integer variables.

The main result that is proven is the following: If D is not a perfect square, then there exists a fundamental solution (x_0, y_0) that is not the trivial solution (1, 0) and which generates all other solutions (x, y) in the sense that there exists some n ∈ ℕ such that |x| + |y| √D = (x_0 + y_0 √D)^n. This also implies that the set of solutions is infinite, and it gives us an explicit and executable characterisation of all the solutions.

Based on this, simple executable algorithms for computing the fundamental solution and the infinite sequence of all non-negative solutions are also provided.
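For small D, the fundamental solution can be found by brute force, and further solutions generated by multiplying by (x_0 + y_0 √D) (a naive Python sketch, not the entry's verified algorithms):

```python
from math import isqrt

def pell_fundamental(D):
    """Smallest x, y > 0 with x**2 = 1 + D * y**2, by naive search over y
    (only practical for small D; assumes D is not a perfect square)."""
    y = 1
    while True:
        x2 = 1 + D * y * y
        x = isqrt(x2)
        if x * x == x2:
            return x, y
        y += 1

def pell_next(D, x, y, x0, y0):
    """Compose solutions: coefficients of (x + y*sqrt(D)) * (x0 + y0*sqrt(D))."""
    return x * x0 + D * y * y0, x * y0 + y * x0
```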

BibTeX:
@article{Pell-AFP,
   author  = {Manuel Eberl},
   title   = {Pell's Equation},
   journal = {Archive of Formal Proofs},
   month   = jun,
   year    = 2018,
   note    = {\url{https://isa-afp.org/entries/Pell.html},
             Formal proof development},
   ISSN    = {2150-914x},
 }
License: BSD License
Used by: Mersenne_Primes

\ No newline at end of file diff --git a/web/entries/Pi_Transcendental.html b/web/entries/Pi_Transcendental.html --- a/web/entries/Pi_Transcendental.html +++ b/web/entries/Pi_Transcendental.html @@ -1,209 +1,209 @@ The Transcendence of π - Archive of Formal Proofs

 

 

 

 

 

 

The Transcendence of π

 

Title: The Transcendence of π
Author: Manuel Eberl
Submission date: 2018-09-28
Abstract:

This entry shows the transcendence of π based on the classic proof using the fundamental theorem of symmetric polynomials first given by von Lindemann in 1882, but the formalisation mostly follows the version by Niven. The proof reuses much of the machinery developed in the AFP entry on the transcendence of e.

BibTeX:
@article{Pi_Transcendental-AFP,
   author  = {Manuel Eberl},
   title   = {The Transcendence of π},
   journal = {Archive of Formal Proofs},
   month   = sep,
   year    = 2018,
   note    = {\url{https://isa-afp.org/entries/Pi_Transcendental.html},
             Formal proof development},
   ISSN    = {2150-914x},
 }
License: BSD License
Depends on: E_Transcendental, Symmetric_Polynomials
Used by: Hermite_Lindemann

\ No newline at end of file diff --git a/web/entries/Polynomial_Interpolation.html b/web/entries/Polynomial_Interpolation.html --- a/web/entries/Polynomial_Interpolation.html +++ b/web/entries/Polynomial_Interpolation.html @@ -1,229 +1,229 @@ Polynomial Interpolation - Archive of Formal Proofs

 

 

 

 

 

 

Polynomial Interpolation

 

Title: Polynomial Interpolation
Authors: René Thiemann (rene /dot/ thiemann /at/ uibk /dot/ ac /dot/ at) and Akihisa Yamada (akihisa /dot/ yamada /at/ aist /dot/ go /dot/ jp)
Submission date: 2016-01-29
Abstract: We formalized three algorithms for polynomial interpolation over arbitrary fields: Lagrange's explicit expression, the recursive algorithm of Neville and Aitken, and the Newton interpolation in combination with an efficient implementation of divided differences. Variants of these algorithms for integer polynomials are also available, where sometimes the interpolation can fail; e.g., there is no linear integer polynomial p such that p(0) = 0 and p(2) = 1. Moreover, for the Newton interpolation for integer polynomials, we proved that all intermediate results that are computed during the algorithm must be integers. This admits an early failure detection in the implementation. Finally, we proved the uniqueness of polynomial interpolation.

The development also contains improved code equations to speed up the division of integers in target languages.
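Lagrange's formula, computed over exact rationals, makes the integer-failure example above concrete: the unique polynomial with p(0) = 0 and p(2) = 1 is x/2, which is not an integer polynomial (an illustrative Python sketch, not the formalised algorithms):

```python
from fractions import Fraction

def poly_mul_linear(p, c):
    """Multiply polynomial p (coefficients, lowest degree first) by (x - c)."""
    res = [Fraction(0)] * (len(p) + 1)
    for k, a in enumerate(p):
        res[k + 1] += a
        res[k] -= c * a
    return res

def lagrange(points):
    """Coefficients (lowest degree first) of the unique polynomial through
    the given (x, y) pairs, computed exactly via Lagrange's formula."""
    n = len(points)
    coeffs = [Fraction(0)] * n
    for i, (xi, yi) in enumerate(points):
        basis = [Fraction(1)]   # basis polynomial l_i, built incrementally
        denom = Fraction(1)
        for j, (xj, _) in enumerate(points):
            if j != i:
                basis = poly_mul_linear(basis, Fraction(xj))
                denom *= xi - xj
        scale = Fraction(yi) / denom
        for k, a in enumerate(basis):
            coeffs[k] += scale * a
    return coeffs
```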

BibTeX:
@article{Polynomial_Interpolation-AFP,
   author  = {René Thiemann and Akihisa Yamada},
   title   = {Polynomial Interpolation},
   journal = {Archive of Formal Proofs},
   month   = jan,
   year    = 2016,
   note    = {\url{https://isa-afp.org/entries/Polynomial_Interpolation.html},
             Formal proof development},
   ISSN    = {2150-914x},
 }
License: BSD License
Depends on: Sqrt_Babylonian
Used by: Berlekamp_Zassenhaus, Count_Complex_Roots, Deep_Learning, Formal_Puiseux_Series, Gauss_Sums, Polynomial_Factorization, Three_Circles

\ No newline at end of file diff --git a/web/entries/Power_Sum_Polynomials.html b/web/entries/Power_Sum_Polynomials.html --- a/web/entries/Power_Sum_Polynomials.html +++ b/web/entries/Power_Sum_Polynomials.html @@ -1,230 +1,230 @@ Power Sum Polynomials - Archive of Formal Proofs

 

 

 

 

 

 

Power Sum Polynomials

 

Title: Power Sum Polynomials
Author: Manuel Eberl
Submission date: 2020-04-24
Abstract:

This article provides a formalisation of the symmetric multivariate polynomials known as power sum polynomials. These are of the form p_n(X_1, …, X_k) = X_1^n + … + X_k^n. A formal proof of the Girard–Newton Theorem is also given. This theorem relates the power sum polynomials to the elementary symmetric polynomials s_k in the form of a recurrence relation (-1)^k k s_k = ∑_{i∈[0,k)} (-1)^i s_i p_{k-i}.

As an application, this is then used to solve a generalised form of a puzzle given as an exercise in Dummit and Foote's Abstract Algebra: For k complex unknowns x_1, …, x_k, define p_j := x_1^j + … + x_k^j. Then for each vector a ∈ ℂ^k, show that there is exactly one solution to the system p_1 = a_1, …, p_k = a_k up to permutation of the x_i and determine the value of p_i for i > k.
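Newton's identities can be run forwards to recover the elementary symmetric polynomials from the power sums, which is the mechanism behind the puzzle's uniqueness claim (a small Python sketch using one standard form of the recurrence, k e_k = ∑_{i=1}^{k} (-1)^(i-1) e_{k-i} p_i, equivalent to the one stated above):

```python
from fractions import Fraction

def elementary_from_power_sums(p):
    """Girard-Newton: recover e_1..e_k from the power sums p_1..p_k via
    k * e_k = sum_{i=1}^{k} (-1)**(i-1) * e_{k-i} * p_i, with e_0 = 1."""
    e = [Fraction(1)]  # e_0 = 1
    for k in range(1, len(p) + 1):
        s = sum((-1) ** (i - 1) * e[k - i] * Fraction(p[i - 1])
                for i in range(1, k + 1))
        e.append(s / k)
    return e[1:]
```

For the roots {1, 2, 3} the power sums are p = (6, 14, 36), and the recurrence recovers the elementary symmetric values (6, 11, 6), i.e. the coefficients of (x-1)(x-2)(x-3).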

BibTeX:
@article{Power_Sum_Polynomials-AFP,
   author  = {Manuel Eberl},
   title   = {Power Sum Polynomials},
   journal = {Archive of Formal Proofs},
   month   = apr,
   year    = 2020,
   note    = {\url{https://isa-afp.org/entries/Power_Sum_Polynomials.html},
             Formal proof development},
   ISSN    = {2150-914x},
 }
License: BSD License
Depends on: Polynomial_Factorization, Symmetric_Polynomials
Used by: Hermite_Lindemann

\ No newline at end of file diff --git a/web/entries/Prime_Distribution_Elementary.html b/web/entries/Prime_Distribution_Elementary.html --- a/web/entries/Prime_Distribution_Elementary.html +++ b/web/entries/Prime_Distribution_Elementary.html @@ -1,223 +1,223 @@ Elementary Facts About the Distribution of Primes - Archive of Formal Proofs

 

 

 

 

 

 

Elementary Facts About the Distribution of Primes

 

Title: Elementary Facts About the Distribution of Primes
Author: Manuel Eberl
Submission date: 2019-02-21
Abstract:

This entry is a formalisation of Chapter 4 (and parts of Chapter 3) of Apostol's Introduction to Analytic Number Theory. The main topics that are addressed are properties of the distribution of prime numbers that can be shown in an elementary way (i. e. without the Prime Number Theorem), the various equivalent forms of the PNT (which imply each other in elementary ways), and consequences that follow from the PNT in elementary ways. The latter include, most notably, asymptotic bounds for the number of distinct prime factors of n, the divisor function d(n), Euler's totient function φ(n), and lcm(1,…,n).

BibTeX:
@article{Prime_Distribution_Elementary-AFP,
   author  = {Manuel Eberl},
   title   = {Elementary Facts About the Distribution of Primes},
   journal = {Archive of Formal Proofs},
   month   = feb,
   year    = 2019,
   note    = {\url{https://isa-afp.org/entries/Prime_Distribution_Elementary.html},
             Formal proof development},
   ISSN    = {2150-914x},
 }
License: BSD License
Depends on: Prime_Number_Theorem, Zeta_Function
Used by: IMO2019, Irrational_Series_Erdos_Straus, Zeta_3_Irrational

\ No newline at end of file diff --git a/web/entries/Prime_Harmonic_Series.html b/web/entries/Prime_Harmonic_Series.html --- a/web/entries/Prime_Harmonic_Series.html +++ b/web/entries/Prime_Harmonic_Series.html @@ -1,238 +1,238 @@ The Divergence of the Prime Harmonic Series - Archive of Formal Proofs

 

 

 

 

 

 

The Divergence of the Prime Harmonic Series

 

Title: The Divergence of the Prime Harmonic Series
Author: Manuel Eberl
Submission date: 2015-12-28
Abstract:

In this work, we prove the lower bound ln(H_n) - ln(5/3) for the partial sum of the Prime Harmonic series and, based on this, the divergence of the Prime Harmonic Series ∑[p prime] 1/p.

The proof relies on the unique squarefree decomposition of natural numbers. This is similar to Euler's original proof (which was highly informal and morally questionable). Its advantage over proofs by contradiction, like the famous one by Paul Erdős, is that it provides a relatively good lower bound for the partial sums.

BibTeX:
@article{Prime_Harmonic_Series-AFP,
   author  = {Manuel Eberl},
   title   = {The Divergence of the Prime Harmonic Series},
   journal = {Archive of Formal Proofs},
   month   = dec,
   year    = 2015,
   note    = {\url{https://isa-afp.org/entries/Prime_Harmonic_Series.html},
             Formal proof development},
   ISSN    = {2150-914x},
 }
License: BSD License

\ No newline at end of file diff --git a/web/entries/Prime_Number_Theorem.html b/web/entries/Prime_Number_Theorem.html --- a/web/entries/Prime_Number_Theorem.html +++ b/web/entries/Prime_Number_Theorem.html @@ -1,239 +1,239 @@ The Prime Number Theorem - Archive of Formal Proofs

 

 

 

 

 

 

The Prime Number Theorem

 

Title: The Prime Number Theorem
Authors: Manuel Eberl and Lawrence C. Paulson
Submission date: 2018-09-19
Abstract:

This article provides a short proof of the Prime Number Theorem in several equivalent forms, most notably π(x) ~ x/ln x, where π(x) is the number of primes no larger than x. It also defines other basic number-theoretic functions related to primes, like Chebyshev's functions ϑ and ψ and the “n-th prime number” function p_n. Various bounds on and relationships between these functions are also shown. Lastly, we derive Mertens' First and Second Theorem, i. e. ∑_{p≤x} ln p/p = ln x + O(1) and ∑_{p≤x} 1/p = ln ln x + M + O(1/ln x). We also give explicit bounds for the remainder terms.

The proof of the Prime Number Theorem builds on a library of Dirichlet series and analytic combinatorics. We essentially follow the presentation by Newman. The core part of the proof is a Tauberian theorem for Dirichlet series, which is proven using complex analysis and then used to strengthen Mertens' First Theorem to ∑_{p≤x} ln p/p = ln x + c + o(1).

A variant of this proof has been formalised before by Harrison in HOL Light, and formalisations of Selberg's elementary proof exist both by Avigad et al. in Isabelle and by Carneiro in Metamath. The advantage of the analytic proof is that, while it requires more powerful mathematical tools, it is considerably shorter and clearer. This article attempts to provide a short and clear formalisation of all components of that proof using the full range of mathematical machinery available in Isabelle, staying as close as possible to Newman's simple paper proof.
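The statement π(x) ~ x/ln x is easy to probe numerically with a sieve (an informal sanity check, obviously no substitute for the formal proof):

```python
from math import log

def primepi(n):
    """pi(n): the number of primes <= n, via a simple sieve of Eratosthenes."""
    sieve = [True] * (n + 1)
    sieve[0] = sieve[1] = False
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i::i] = [False] * len(sieve[i * i::i])
    return sum(sieve)
```

For example, π(10^5) = 9592 while 10^5 / ln(10^5) ≈ 8686, a ratio of about 1.10; the PNT says this ratio tends to 1 (slowly) as x → ∞.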

BibTeX:
@article{Prime_Number_Theorem-AFP,
   author  = {Manuel Eberl and Lawrence C. Paulson},
   title   = {The Prime Number Theorem},
   journal = {Archive of Formal Proofs},
   month   = sep,
   year    = 2018,
   note    = {\url{https://isa-afp.org/entries/Prime_Number_Theorem.html},
             Formal proof development},
   ISSN    = {2150-914x},
 }
License: BSD License
Depends on: Stirling_Formula, Zeta_Function
Used by: Irrational_Series_Erdos_Straus, Prime_Distribution_Elementary, Transcendence_Series_Hancl_Rucki, Zeta_3_Irrational

\ No newline at end of file diff --git a/web/entries/Probabilistic_Prime_Tests.html b/web/entries/Probabilistic_Prime_Tests.html --- a/web/entries/Probabilistic_Prime_Tests.html +++ b/web/entries/Probabilistic_Prime_Tests.html @@ -1,211 +1,211 @@ Probabilistic Primality Testing - Archive of Formal Proofs

 

 

 

 

 

 

Probabilistic Primality Testing

 

Title: Probabilistic Primality Testing
Authors: Daniel Stüwe and Manuel Eberl
Submission date: 2019-02-11
Abstract:

The most efficient known primality tests are probabilistic in the sense that they use randomness and may, with some probability, mistakenly classify a composite number as prime – but never a prime number as composite. Examples of this are the Miller–Rabin test, the Solovay–Strassen test, and (in most cases) Fermat's test.

This entry defines these three tests and proves their correctness. It also develops some of the number-theoretic foundations, such as Carmichael numbers and the Jacobi symbol with an efficient executable algorithm to compute it.
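A compact sketch of the Miller–Rabin test in Python (with a fixed, ad-hoc set of bases chosen here for illustration; the probabilistic analysis in the entry concerns randomly chosen bases):

```python
def miller_rabin(n, bases=(2, 3, 5, 7)):
    """Miller-Rabin test of n against fixed witness bases. Primes always
    pass; a composite may fool an unlucky base, but never in the other
    direction."""
    if n < 2:
        return False
    if n in (2, 3, 5, 7):
        return True
    if n % 2 == 0:
        return False
    # Write n - 1 = d * 2**r with d odd.
    d, r = n - 1, 0
    while d % 2 == 0:
        d //= 2
        r += 1
    for a in bases:
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(r - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False  # base a witnesses that n is composite
    return True
```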

BibTeX:
@article{Probabilistic_Prime_Tests-AFP,
   author  = {Daniel Stüwe and Manuel Eberl},
   title   = {Probabilistic Primality Testing},
   journal = {Archive of Formal Proofs},
   month   = feb,
   year    = 2019,
   note    = {\url{https://isa-afp.org/entries/Probabilistic_Prime_Tests.html},
             Formal proof development},
   ISSN    = {2150-914x},
 }
License: BSD License
Used by: Mersenne_Primes

\ No newline at end of file diff --git a/web/entries/Quick_Sort_Cost.html b/web/entries/Quick_Sort_Cost.html --- a/web/entries/Quick_Sort_Cost.html +++ b/web/entries/Quick_Sort_Cost.html @@ -1,226 +1,226 @@ The number of comparisons in QuickSort - Archive of Formal Proofs

 

 

 

 

 

 

The number of comparisons in QuickSort

 

Title: The number of comparisons in QuickSort
Author: Manuel Eberl
Submission date: 2017-03-15
Abstract:

We give a formal proof of the well-known results about the number of comparisons performed by two variants of QuickSort: first, the expected number of comparisons of randomised QuickSort (i. e. QuickSort with random pivot choice) is 2(n+1)H_n - 4n, which is asymptotically equivalent to 2 n ln n; second, the number of comparisons performed by the classic non-randomised QuickSort has the same distribution in the average case as the randomised one.
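The closed form can be cross-checked against the defining recurrence C(n) = n - 1 + (2/n) ∑_{k&lt;n} C(k) using exact rational arithmetic (an informal Python check of the stated identity, not the formal proof):

```python
from fractions import Fraction

def expected_comparisons(n):
    """Exact expected comparison counts C(0), ..., C(n) of randomised
    QuickSort from the recurrence C(m) = m - 1 + (2/m) * sum_{k<m} C(k)."""
    c = [Fraction(0)]
    for m in range(1, n + 1):
        c.append(m - 1 + Fraction(2, m) * sum(c))
    return c

def closed_form(n):
    """The stated closed form 2 (n+1) H_n - 4 n, with H_n the n-th
    harmonic number."""
    h = sum(Fraction(1, k) for k in range(1, n + 1))
    return 2 * (n + 1) * h - 4 * n
```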

BibTeX:
@article{Quick_Sort_Cost-AFP,
   author  = {Manuel Eberl},
   title   = {The number of comparisons in QuickSort},
   journal = {Archive of Formal Proofs},
   month   = mar,
   year    = 2017,
   note    = {\url{https://isa-afp.org/entries/Quick_Sort_Cost.html},
             Formal proof development},
   ISSN    = {2150-914x},
 }
License: BSD License
Depends on: Comparison_Sort_Lower_Bound, Landau_Symbols, List-Index, Regular-Sets
Used by: Random_BSTs

\ No newline at end of file diff --git a/web/entries/Random_BSTs.html b/web/entries/Random_BSTs.html --- a/web/entries/Random_BSTs.html +++ b/web/entries/Random_BSTs.html @@ -1,228 +1,228 @@ Expected Shape of Random Binary Search Trees - Archive of Formal Proofs

 

 

 

 

 

 

Expected Shape of Random Binary Search Trees

 

Title: Expected Shape of Random Binary Search Trees
Author: Manuel Eberl
Submission date: 2017-04-04
Abstract:

This entry contains proofs for the textbook results about the distributions of the height and internal path length of random binary search trees (BSTs), i. e. BSTs that are formed by taking an empty BST and inserting elements from a fixed set in random order.

In particular, we prove a logarithmic upper bound on the expected height and the Θ(n log n) closed-form solution for the expected internal path length in terms of the harmonic numbers. We also show how the internal path length relates to the average-case cost of a lookup in a BST.

BibTeX:
@article{Random_BSTs-AFP,
   author  = {Manuel Eberl},
   title   = {Expected Shape of Random Binary Search Trees},
   journal = {Archive of Formal Proofs},
   month   = apr,
   year    = 2017,
   note    = {\url{https://isa-afp.org/entries/Random_BSTs.html},
             Formal proof development},
   ISSN    = {2150-914x},
 }
License: BSD License
Depends on: Landau_Symbols, Quick_Sort_Cost
Used by: Randomised_BSTs, Treaps

\ No newline at end of file diff --git a/web/entries/Randomised_BSTs.html b/web/entries/Randomised_BSTs.html --- a/web/entries/Randomised_BSTs.html +++ b/web/entries/Randomised_BSTs.html @@ -1,208 +1,208 @@ Randomised Binary Search Trees - Archive of Formal Proofs

 

 

 

 

 

 

Randomised Binary Search Trees

 

Title: Randomised Binary Search Trees
Author: Manuel Eberl
Submission date: 2018-10-19
Abstract:

This work is a formalisation of the Randomised Binary Search Trees introduced by Martínez and Roura, including definitions and correctness proofs.

Like randomised treaps, they are a probabilistic data structure that behaves exactly as if elements were inserted into a non-balancing BST in random order. However, unlike treaps, they only use discrete probability distributions, but their use of randomness is more complicated.

BibTeX:
@article{Randomised_BSTs-AFP,
   author  = {Manuel Eberl},
   title   = {Randomised Binary Search Trees},
   journal = {Archive of Formal Proofs},
   month   = oct,
   year    = 2018,
   note    = {\url{https://isa-afp.org/entries/Randomised_BSTs.html},
             Formal proof development},
   ISSN    = {2150-914x},
 }
License: BSD License
Depends on: Monad_Normalisation, Random_BSTs

\ No newline at end of file diff --git a/web/entries/Randomised_Social_Choice.html b/web/entries/Randomised_Social_Choice.html --- a/web/entries/Randomised_Social_Choice.html +++ b/web/entries/Randomised_Social_Choice.html @@ -1,234 +1,234 @@ Randomised Social Choice Theory - Archive of Formal Proofs

 

 

 

 

 

 

Randomised Social Choice Theory

 

Title: Randomised Social Choice Theory
Author: Manuel Eberl
Submission date: 2016-05-05
Abstract: This work contains a formalisation of basic Randomised Social Choice, including Stochastic Dominance and Social Decision Schemes (SDSs) along with some of their most important properties (Anonymity, Neutrality, ex-post- and SD-Efficiency, SD-Strategy-Proofness) and two particular SDSs – Random Dictatorship and Random Serial Dictatorship (with proofs of the properties that they satisfy). Many important properties of these concepts are also proven – such as the two equivalent characterisations of Stochastic Dominance and the fact that SD-efficiency of a lottery only depends on the support. The entry also provides convenient commands to define Preference Profiles, prove their well-formedness, and automatically derive restrictions that sufficiently nice SDSs need to satisfy on the defined profiles. Currently, the formalisation focuses on weak preferences and Stochastic Dominance, but it should be easy to extend it to other domains – such as strict preferences – or other lottery extensions – such as Bilinear Dominance or Pairwise Comparison.
BibTeX:
@article{Randomised_Social_Choice-AFP,
   author  = {Manuel Eberl},
   title   = {Randomised Social Choice Theory},
   journal = {Archive of Formal Proofs},
   month   = may,
   year    = 2016,
   note    = {\url{https://isa-afp.org/entries/Randomised_Social_Choice.html},
             Formal proof development},
   ISSN    = {2150-914x},
 }
License: BSD License
Depends on: List-Index
Used by: Fishburn_Impossibility, SDS_Impossibility

\ No newline at end of file diff --git a/web/entries/Regular-Sets.html b/web/entries/Regular-Sets.html --- a/web/entries/Regular-Sets.html +++ b/web/entries/Regular-Sets.html @@ -1,281 +1,281 @@ Regular Sets and Expressions - Archive of Formal Proofs

 

 

 

 

 

 

Regular Sets and Expressions

 

Title: Regular Sets and Expressions
Authors: Alexander Krauss and Tobias Nipkow
Contributor: Manuel Eberl
Submission date: 2010-05-12
Abstract: This is a library of constructions on regular expressions and languages. It provides the operations of concatenation, Kleene star and derivative on languages. Regular expressions and their meaning are defined. An executable equivalence checker for regular expressions is verified; it does not need automata but works directly on regular expressions. By mapping regular expressions to binary relations, an automatic and complete proof method for (in)equalities of binary relations over union, concatenation and (reflexive) transitive closure is obtained.

Extended regular expressions with complement and intersection are also defined and an equivalence checker is provided.
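An equivalence checker that works directly on regular expressions typically rests on Brzozowski derivatives. The following Python sketch (illustrative only; the AFP entry's verified checker is in Isabelle) shows the derivative-based matching idea on a small regular-expression AST:

```python
# Regex AST: ('0',) empty language, ('1',) empty word, ('c', ch) a literal,
# ('+', r, s) union, ('.', r, s) concatenation, ('*', r) Kleene star.

def nullable(r):
    """Does r accept the empty word?"""
    t = r[0]
    if t in ('0', 'c'): return False
    if t in ('1', '*'): return True
    if t == '+': return nullable(r[1]) or nullable(r[2])
    return nullable(r[1]) and nullable(r[2])          # concatenation

def deriv(r, a):
    """Brzozowski derivative: the language of words w with a.w in L(r)."""
    t = r[0]
    if t in ('0', '1'): return ('0',)
    if t == 'c': return ('1',) if r[1] == a else ('0',)
    if t == '+': return ('+', deriv(r[1], a), deriv(r[2], a))
    if t == '*': return ('.', deriv(r[1], a), r)
    d = ('.', deriv(r[1], a), r[2])                   # concatenation
    return ('+', d, deriv(r[2], a)) if nullable(r[1]) else d

def matches(r, w):
    for a in w:
        r = deriv(r, a)
    return nullable(r)

ab_star = ('*', ('.', ('c', 'a'), ('c', 'b')))        # (ab)*
assert matches(ab_star, "abab") and not matches(ab_star, "aba")
```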

Change history: [2011-08-26]: Christian Urban added a theory about derivatives and partial derivatives of regular expressions
[2012-05-10]: Tobias Nipkow added extended regular expressions
[2012-05-10]: Tobias Nipkow added equivalence checking with partial derivatives
BibTeX:
@article{Regular-Sets-AFP,
   author  = {Alexander Krauss and Tobias Nipkow},
   title   = {Regular Sets and Expressions},
   journal = {Archive of Formal Proofs},
   month   = may,
   year    = 2010,
   note    = {\url{https://isa-afp.org/entries/Regular-Sets.html},
             Formal proof development},
   ISSN    = {2150-914x},
 }
License: BSD License
Used by: Abstract-Rewriting, Coinductive_Languages, Containers, Finite_Automata_HF, Functional-Automata, Lambda_Free_KBOs, List_Update, Myhill-Nerode, Posix-Lexing, Quick_Sort_Cost, Regex_Equivalence, Transitive-Closure-II

\ No newline at end of file diff --git a/web/entries/SDS_Impossibility.html b/web/entries/SDS_Impossibility.html --- a/web/entries/SDS_Impossibility.html +++ b/web/entries/SDS_Impossibility.html @@ -1,233 +1,233 @@ The Incompatibility of SD-Efficiency and SD-Strategy-Proofness - Archive of Formal Proofs

 

 

 

 

 

 

The Incompatibility of SD-Efficiency and SD-Strategy-Proofness

 

Title: The Incompatibility of SD-Efficiency and SD-Strategy-Proofness
Author: Manuel Eberl
Submission date: 2016-05-04
Abstract: This formalisation contains the proof that there is no anonymous and neutral Social Decision Scheme for at least four voters and alternatives that fulfils both SD-Efficiency and SD-Strategy-Proofness. The proof is a fully structured and quasi-human-readable one. It was derived from the (unstructured) SMT proof of the case for exactly four voters and alternatives by Brandl et al. Their proof relies on an unverified translation of the original problem to SMT, and the proof that lifts the argument for exactly four voters and alternatives to the general case is also not machine-checked. In this Isabelle proof, on the other hand, all of these steps are fully proven and machine-checked. This is particularly important seeing as a previously published informal proof of a weaker statement contained a mistake in precisely this lifting step.
BibTeX:
@article{SDS_Impossibility-AFP,
   author  = {Manuel Eberl},
   title   = {The Incompatibility of SD-Efficiency and SD-Strategy-Proofness},
   journal = {Archive of Formal Proofs},
   month   = may,
   year    = 2016,
   note    = {\url{https://isa-afp.org/entries/SDS_Impossibility.html},
             Formal proof development},
   ISSN    = {2150-914x},
 }
License: BSD License
Depends on: Randomised_Social_Choice

\ No newline at end of file diff --git a/web/entries/Skip_Lists.html b/web/entries/Skip_Lists.html --- a/web/entries/Skip_Lists.html +++ b/web/entries/Skip_Lists.html @@ -1,202 +1,202 @@ Skip Lists - Archive of Formal Proofs

 

 

 

 

 

 

Skip Lists

 

Title: Skip Lists
Authors: Max W. Haslbeck and Manuel Eberl
Submission date: 2020-01-09
Abstract:

Skip lists are sorted linked lists enhanced with shortcuts and are an alternative to binary search trees. A skip list consists of multiple levels of sorted linked lists, where the list on level n is a subsequence of the list on level n − 1. In the ideal case, elements are skipped in such a way that a lookup in a skip list takes O(log n) time. In a randomised skip list the skipped elements are chosen randomly.

This entry contains formalized proofs of the textbook results about the expected height and the expected length of a search path in a randomised skip list.
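In a randomised skip list, the level of each element is drawn from a geometric distribution, which keeps the expected height logarithmic in the number of elements. A small Python sketch of this rule (names and parameters here are illustrative, not from the formalisation):

```python
import random

def random_level(rng, p=0.5, max_level=32):
    """Geometric level choice: the element appears on level 1 and, with
    probability p, on each successively higher level (capped at max_level).
    This is the standard randomised skip list rule."""
    level = 1
    while rng.random() < p and level < max_level:
        level += 1
    return level

rng = random.Random(42)
levels = [random_level(rng) for _ in range(1024)]
height = max(levels)
# For n elements the expected height is about log_{1/p}(n); with p = 1/2
# and n = 1024 that is around 10, so the realised height stays small.
```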

BibTeX:
@article{Skip_Lists-AFP,
   author  = {Max W. Haslbeck and Manuel Eberl},
   title   = {Skip Lists},
   journal = {Archive of Formal Proofs},
   month   = jan,
   year    = 2020,
   note    = {\url{https://isa-afp.org/entries/Skip_Lists.html},
             Formal proof development},
   ISSN    = {2150-914x},
 }
License: BSD License
Depends on: Monad_Normalisation

\ No newline at end of file diff --git a/web/entries/Stirling_Formula.html b/web/entries/Stirling_Formula.html --- a/web/entries/Stirling_Formula.html +++ b/web/entries/Stirling_Formula.html @@ -1,217 +1,217 @@ Stirling's formula - Archive of Formal Proofs

 

 

 

 

 

 

Stirling's formula

 

Title: Stirling's formula
Author: Manuel Eberl
Submission date: 2016-09-01
Abstract:

This work contains a proof of Stirling's formula both for the factorial $n! \sim \sqrt{2\pi n} (n/e)^n$ on natural numbers and the real Gamma function $\Gamma(x)\sim \sqrt{2\pi/x} (x/e)^x$. The proof is based on work by Graham Jameson.

This is then extended to the full asymptotic expansion $$\log\Gamma(z) = \big(z - \tfrac{1}{2}\big)\log z - z + \tfrac{1}{2}\log(2\pi) + \sum_{k=1}^{n-1} \frac{B_{k+1}}{k(k+1)} z^{-k}\\ {} - \frac{1}{n} \int_0^\infty B_n([t])(t + z)^{-n}\,\text{d}t$$ uniformly for all complex $z\neq 0$ in the cone $|\text{arg}(z)|\leq \alpha$ for any $\alpha\in(0,\pi)$, with which the above asymptotic relation for Γ is also extended to complex arguments.
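The quality of Stirling's approximation is easy to observe numerically; the relative error of $\sqrt{2\pi n}(n/e)^n$ against $n!$ behaves like $1/(12n)$. A quick Python check (illustrative only):

```python
import math

def stirling(n):
    """Stirling's approximation sqrt(2 pi n) (n/e)^n to n!."""
    return math.sqrt(2 * math.pi * n) * (n / math.e) ** n

# n! / stirling(n) -> 1 as n -> infinity; the error is roughly 1/(12 n),
# so at n = 100 the ratio is already within about 0.1% of 1.
ratio = math.factorial(100) / stirling(100)
assert abs(ratio - 1) < 0.01
```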

BibTeX:
@article{Stirling_Formula-AFP,
   author  = {Manuel Eberl},
   title   = {Stirling's formula},
   journal = {Archive of Formal Proofs},
   month   = sep,
   year    = 2016,
   note    = {\url{https://isa-afp.org/entries/Stirling_Formula.html},
             Formal proof development},
   ISSN    = {2150-914x},
 }
License: BSD License
Depends on: Bernoulli, Landau_Symbols
Used by: Comparison_Sort_Lower_Bound, Lambert_W, Prime_Number_Theorem

\ No newline at end of file diff --git a/web/entries/Stone_Kleene_Relation_Algebras.html b/web/entries/Stone_Kleene_Relation_Algebras.html --- a/web/entries/Stone_Kleene_Relation_Algebras.html +++ b/web/entries/Stone_Kleene_Relation_Algebras.html @@ -1,215 +1,215 @@ Stone-Kleene Relation Algebras - Archive of Formal Proofs

 

 

 

 

 

 

Stone-Kleene Relation Algebras

 

Title: Stone-Kleene Relation Algebras
Author: Walter Guttmann
Submission date: 2017-07-06
Abstract: We develop Stone-Kleene relation algebras, which expand Stone relation algebras with a Kleene star operation to describe reachability in weighted graphs. Many properties of the Kleene star arise as a special case of a more general theory of iteration based on Conway semirings extended by simulation axioms. This includes several theorems representing complex program transformations. We formally prove the correctness of Conway's automata-based construction of the Kleene star of a matrix. We prove numerous results useful for reasoning about weighted graphs.
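Over the Boolean semiring, the Kleene star of an adjacency matrix is exactly reflexive-transitive reachability. The following Python sketch computes it as the least fixpoint of X = I + AX; this is a naive iteration for illustration, not Conway's automata-based construction verified in the entry:

```python
def identity(n):
    return [[i == j for j in range(n)] for i in range(n)]

def mat_mult(a, b):
    n = len(a)
    return [[any(a[i][k] and b[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def mat_add(a, b):
    n = len(a)
    return [[a[i][j] or b[i][j] for j in range(n)] for i in range(n)]

def star(a):
    """Kleene star over the Boolean semiring: iterate X := I + A X
    until the fixpoint (the reflexive-transitive closure) is reached."""
    x = identity(len(a))
    while True:
        nxt = mat_add(identity(len(a)), mat_mult(a, x))
        if nxt == x:
            return x
        x = nxt

# Edge relation of the path graph 0 -> 1 -> 2:
a = [[False, True, False],
     [False, False, True],
     [False, False, False]]
reach = star(a)
assert reach[0][2] and not reach[2][0]
```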
BibTeX:
@article{Stone_Kleene_Relation_Algebras-AFP,
   author  = {Walter Guttmann},
   title   = {Stone-Kleene Relation Algebras},
   journal = {Archive of Formal Proofs},
   month   = jul,
   year    = 2017,
   note    = {\url{https://isa-afp.org/entries/Stone_Kleene_Relation_Algebras.html},
             Formal proof development},
   ISSN    = {2150-914x},
 }
License: BSD License
Depends on: Stone_Relation_Algebras
Used by: Aggregation_Algebras, Correctness_Algebras, Relational_Disjoint_Set_Forests, Relational_Forests

\ No newline at end of file diff --git a/web/entries/Sturm_Sequences.html b/web/entries/Sturm_Sequences.html --- a/web/entries/Sturm_Sequences.html +++ b/web/entries/Sturm_Sequences.html @@ -1,234 +1,234 @@ Sturm's Theorem - Archive of Formal Proofs

 

 

 

 

 

 

Sturm's Theorem

 

Title: Sturm's Theorem
Author: Manuel Eberl
Submission date: 2014-01-11
Abstract: Sturm's Theorem states that polynomial sequences with certain properties, so-called Sturm sequences, can be used to count the number of real roots of a real polynomial. This work contains a proof of Sturm's Theorem and code for constructing Sturm sequences efficiently. It also provides the “sturm” proof method, which can decide certain statements about the roots of real polynomials, such as “the polynomial P has exactly n roots in the interval I” or “P(x) > Q(x) for all x ∈ ℝ”.
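Sturm's Theorem counts the real roots of p in an interval (a, b] as the difference in sign variations of the Sturm sequence p, p′, and then each negated polynomial remainder. A small Python sketch of this counting scheme (illustrative only; the entry's verified construction and the “sturm” proof method live in Isabelle):

```python
from fractions import Fraction

def trim(p):
    """Drop trailing zero coefficients (lists are lowest degree first)."""
    while len(p) > 1 and p[-1] == 0:
        p = p[:-1]
    return p

def polyrem(p, q):
    """Remainder of polynomial division of p by q over the rationals."""
    p = trim([Fraction(c) for c in p])
    q = trim([Fraction(c) for c in q])
    while len(p) >= len(q) and p != [0]:
        c = p[-1] / q[-1]
        shift = len(p) - len(q)
        for i, qc in enumerate(q):
            p[shift + i] -= c * qc
        p = trim(p[:-1])
    return p

def evaluate(p, x):
    acc = Fraction(0)
    for c in reversed(p):
        acc = acc * x + c
    return acc

def sign_changes(chain, x):
    signs = [v for v in (evaluate(p, x) for p in chain) if v != 0]
    return sum(1 for s, t in zip(signs, signs[1:]) if s * t < 0)

def count_roots(p, a, b):
    """Number of distinct real roots of p in (a, b] via Sturm's Theorem."""
    chain = [trim([Fraction(c) for c in p])]
    chain.append(trim([i * c for i, c in enumerate(chain[0])][1:] or [Fraction(0)]))
    while len(chain[-1]) > 1 or chain[-1][0] != 0:
        r = polyrem(chain[-2], chain[-1])
        if r == [0]:
            break
        chain.append([-c for c in r])
    return sign_changes(chain, Fraction(a)) - sign_changes(chain, Fraction(b))

# x^2 - 2 (coefficients lowest degree first) has one root in (0, 2]:
assert count_roots([-2, 0, 1], 0, 2) == 1
```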
BibTeX:
@article{Sturm_Sequences-AFP,
   author  = {Manuel Eberl},
   title   = {Sturm's Theorem},
   journal = {Archive of Formal Proofs},
   month   = jan,
   year    = 2014,
   note    = {\url{https://isa-afp.org/entries/Sturm_Sequences.html},
             Formal proof development},
   ISSN    = {2150-914x},
 }
License: BSD License
Used by: Algebraic_Numbers, Perron_Frobenius, Safe_Distance, Special_Function_Bounds

\ No newline at end of file diff --git a/web/entries/Subset_Boolean_Algebras.html b/web/entries/Subset_Boolean_Algebras.html --- a/web/entries/Subset_Boolean_Algebras.html +++ b/web/entries/Subset_Boolean_Algebras.html @@ -1,210 +1,212 @@ A Hierarchy of Algebras for Boolean Subsets - Archive of Formal Proofs

 

 

 

 

 

 

A Hierarchy of Algebras for Boolean Subsets

 

Title: A Hierarchy of Algebras for Boolean Subsets
Authors: Walter Guttmann and Bernhard Möller
Submission date: 2020-01-31
Abstract: We present a collection of axiom systems for the construction of Boolean subalgebras of larger overall algebras. The subalgebras are defined as the range of a complement-like operation on a semilattice. This technique has been used, for example, with the antidomain operation, dynamic negation and Stone algebras. We present a common ground for these constructions based on a new equational axiomatisation of Boolean algebras.
BibTeX:
@article{Subset_Boolean_Algebras-AFP,
   author  = {Walter Guttmann and Bernhard Möller},
   title   = {A Hierarchy of Algebras for Boolean Subsets},
   journal = {Archive of Formal Proofs},
   month   = jan,
   year    = 2020,
   note    = {\url{https://isa-afp.org/entries/Subset_Boolean_Algebras.html},
             Formal proof development},
   ISSN    = {2150-914x},
 }
License: BSD License
Depends on: Stone_Algebras
Used by: Correctness_Algebras

\ No newline at end of file diff --git a/web/entries/Symmetric_Polynomials.html b/web/entries/Symmetric_Polynomials.html --- a/web/entries/Symmetric_Polynomials.html +++ b/web/entries/Symmetric_Polynomials.html @@ -1,224 +1,224 @@ Symmetric Polynomials - Archive of Formal Proofs

 

 

 

 

 

 

Symmetric Polynomials

 

Title: Symmetric Polynomials
Author: Manuel Eberl
Submission date: 2018-09-25
Abstract:

A symmetric polynomial is a polynomial in variables X1,…,Xn that does not discriminate between its variables, i. e. it is invariant under any permutation of them. These polynomials are important in the study of the relationship between the coefficients of a univariate polynomial and its roots in its algebraic closure.

This article provides a definition of symmetric polynomials and the elementary symmetric polynomials e1,…,en and proofs of their basic properties, including three notable ones:

  • First, Vieta's formula, which gives an explicit expression for the k-th coefficient of a univariate monic polynomial in terms of its roots x1,…,xn, namely ck = (-1)^(n-k) en-k(x1,…,xn).
  • Second, the Fundamental Theorem of Symmetric Polynomials, which states that any symmetric polynomial is itself a uniquely determined polynomial combination of the elementary symmetric polynomials.
  • Third, as a corollary of the previous two, that given a polynomial over some ring R, any symmetric polynomial combination of its roots is also in R even when the roots are not.

Both the symmetry property itself and the witness for the Fundamental Theorem are executable.
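Vieta's formula is easy to confirm on a concrete instance. The following Python sketch (illustrative only, not the Isabelle development) expands a monic polynomial from its roots and compares each coefficient with the corresponding elementary symmetric polynomial:

```python
from itertools import combinations
from math import prod

def e(k, xs):
    """Elementary symmetric polynomial e_k evaluated at the roots xs."""
    return sum(prod(c) for c in combinations(xs, k))

def monic_from_roots(xs):
    """Coefficients c_0, ..., c_n of prod_i (X - x_i), lowest degree first."""
    coeffs = [1]
    for x in xs:
        shifted = [0] + coeffs                    # X * p
        scaled = [-x * c for c in coeffs] + [0]   # -x_i * p
        coeffs = [s + t for s, t in zip(shifted, scaled)]
    return coeffs

xs = [1, 2, 3]
n = len(xs)
coeffs = monic_from_roots(xs)                     # x^3 - 6x^2 + 11x - 6
vieta = [(-1) ** (n - k) * e(n - k, xs) for k in range(n + 1)]
assert coeffs == vieta                # c_k = (-1)^(n-k) e_(n-k)(x_1,...,x_n)
```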

BibTeX:
@article{Symmetric_Polynomials-AFP,
   author  = {Manuel Eberl},
   title   = {Symmetric Polynomials},
   journal = {Archive of Formal Proofs},
   month   = sep,
   year    = 2018,
   note    = {\url{https://isa-afp.org/entries/Symmetric_Polynomials.html},
             Formal proof development},
   ISSN    = {2150-914x},
 }
License: BSD License
Depends on: Polynomials
Used by: Pi_Transcendental, Power_Sum_Polynomials

\ No newline at end of file diff --git a/web/entries/Treaps.html b/web/entries/Treaps.html --- a/web/entries/Treaps.html +++ b/web/entries/Treaps.html @@ -1,223 +1,223 @@ Treaps - Archive of Formal Proofs

 

 

 

 

 

 

Treaps

 

Title: Treaps
Authors: Maximilian Haslbeck, Manuel Eberl and Tobias Nipkow
Submission date: 2018-02-06
Abstract:

A Treap is a binary tree whose nodes contain pairs consisting of some payload and an associated priority. It must have the search-tree property w.r.t. the payloads and the heap property w.r.t. the priorities. Treaps are an interesting data structure that is related to binary search trees (BSTs) in the following way: if one forgets all the priorities of a treap, the resulting BST is exactly the same as if one had inserted the elements into an empty BST in order of ascending priority. This means that a treap behaves like a BST where we can pretend the elements were inserted in a different order from the one in which they were actually inserted.

In particular, by choosing these priorities at random upon insertion of an element, we can pretend that we inserted the elements in random order, so that the shape of the resulting tree is that of a random BST no matter in what order we insert the elements. This is the main result of this formalisation.
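The insertion procedure that maintains both invariants is short: insert by key as in a plain BST, then rotate the new node upwards while its priority violates the heap property. A Python sketch of this idea (an illustrative min-heap variant, not the definitions used in the entry):

```python
import random

def insert(t, key, prio):
    """Treap insertion: BST insertion on keys, then a rotation whenever the
    child's priority is smaller than its parent's (min-heap on priorities).
    Nodes are tuples (key, prio, left, right)."""
    if t is None:
        return (key, prio, None, None)
    k, p, l, r = t
    if key < k:
        l = insert(l, key, prio)
        if l[1] < p:                              # rotate right
            lk, lp, ll, lr = l
            return (lk, lp, ll, (k, p, lr, r))
    else:
        r = insert(r, key, prio)
        if r[1] < p:                              # rotate left
            rk, rp, rl, rr = r
            return (rk, rp, (k, p, l, rl), rr)
    return (k, p, l, r)

def inorder(t):
    return [] if t is None else inorder(t[2]) + [t[0]] + inorder(t[3])

def is_heap(t):
    if t is None:
        return True
    _, p, l, r = t
    return (all(c is None or p <= c[1] for c in (l, r))
            and is_heap(l) and is_heap(r))

rng = random.Random(0)
t = None
for k in [5, 1, 4, 2, 3]:
    t = insert(t, k, rng.random())                # random priority on insert
assert inorder(t) == [1, 2, 3, 4, 5] and is_heap(t)
```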

BibTeX:
@article{Treaps-AFP,
   author  = {Maximilian Haslbeck and Manuel Eberl and Tobias Nipkow},
   title   = {Treaps},
   journal = {Archive of Formal Proofs},
   month   = feb,
   year    = 2018,
   note    = {\url{https://isa-afp.org/entries/Treaps.html},
             Formal proof development},
   ISSN    = {2150-914x},
 }
License: BSD License
Depends on: Comparison_Sort_Lower_Bound, Random_BSTs

\ No newline at end of file diff --git a/web/entries/Triangle.html b/web/entries/Triangle.html --- a/web/entries/Triangle.html +++ b/web/entries/Triangle.html @@ -1,234 +1,234 @@ Basic Geometric Properties of Triangles - Archive of Formal Proofs

 

 

 

 

 

 

Basic Geometric Properties of Triangles

 

Title: Basic Geometric Properties of Triangles
Author: Manuel Eberl
Submission date: 2015-12-28
Abstract:

This entry contains a definition of angles between vectors and between three points. Building on this, we prove basic geometric properties of triangles, such as the Isosceles Triangle Theorem, the Law of Sines and the Law of Cosines, that the sum of the angles of a triangle is π, and the congruence theorems for triangles.

The definitions and proofs were developed following those by John Harrison in HOL Light. However, due to Isabelle's type class system, all definitions and theorems in the Isabelle formalisation hold for all real inner product spaces.
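The angle definition via the inner product, and the fact that the angles of a triangle sum to π, can be illustrated numerically. A Python sketch (floating-point, purely illustrative; the Isabelle results hold for all real inner product spaces):

```python
import math

def angle(u, v):
    """Angle between two vectors via the inner product."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = lambda w: math.sqrt(sum(a * a for a in w))
    return math.acos(dot / (norm(u) * norm(v)))

def sub(p, q):
    return tuple(a - b for a, b in zip(p, q))

# A right triangle with legs 4 and 3:
a, b, c = (0.0, 0.0), (4.0, 0.0), (0.0, 3.0)
angles = [angle(sub(b, a), sub(c, a)),            # angle at a (= pi/2)
          angle(sub(a, b), sub(c, b)),            # angle at b
          angle(sub(a, c), sub(b, c))]            # angle at c
assert abs(sum(angles) - math.pi) < 1e-9          # angles sum to pi
```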

BibTeX:
@article{Triangle-AFP,
   author  = {Manuel Eberl},
   title   = {Basic Geometric Properties of Triangles},
   journal = {Archive of Formal Proofs},
   month   = dec,
   year    = 2015,
   note    = {\url{https://isa-afp.org/entries/Triangle.html},
             Formal proof development},
   ISSN    = {2150-914x},
 }
License: BSD License
Used by: Chord_Segments, Ordinary_Differential_Equations, Stewart_Apollonius

\ No newline at end of file diff --git a/web/entries/Van_der_Waerden.html b/web/entries/Van_der_Waerden.html --- a/web/entries/Van_der_Waerden.html +++ b/web/entries/Van_der_Waerden.html @@ -1,194 +1,194 @@ Van der Waerden's Theorem - Archive of Formal Proofs

 

 

 

 

 

 

Van der Waerden's Theorem

 

Title: Van der Waerden's Theorem
Authors: Katharina Kreuzer and Manuel Eberl
Submission date: 2021-06-22
Abstract: This article formalises the proof of Van der Waerden's Theorem from Ramsey theory. Van der Waerden's Theorem states that for integers $k$ and $l$ there exists a number $N$ which guarantees that if an integer interval of length at least $N$ is coloured with $k$ colours, there will always be an arithmetic progression of length $l$ of the same colour in said interval. The proof goes along the lines of \cite{Swan}. The smallest number $N_{k,l}$ fulfilling Van der Waerden's Theorem is then called the Van der Waerden Number. Finding the Van der Waerden Number is still an open problem for most values of $k$ and $l$.
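For tiny parameters the theorem, and the corresponding Van der Waerden number, can be verified by exhaustive search. The classical value W(2,3) = 9 states that every 2-colouring of 9 consecutive integers contains a monochromatic 3-term arithmetic progression, while some 2-colouring of 8 does not. A brute-force Python check (illustrative only):

```python
from itertools import product

def has_mono_ap(colouring, l):
    """True iff the colouring of positions 0..n-1 contains a monochromatic
    arithmetic progression of length l."""
    n = len(colouring)
    for a in range(n):
        for d in range(1, (n - a - 1) // (l - 1) + 1):
            if len({colouring[a + i * d] for i in range(l)}) == 1:
                return True
    return False

def vdw_holds(n, k, l):
    """Every k-colouring of an interval of length n has a monochromatic
    l-term arithmetic progression."""
    return all(has_mono_ap(c, l) for c in product(range(k), repeat=n))

# W(2,3) = 9: length 9 always suffices, length 8 does not.
assert vdw_holds(9, 2, 3) and not vdw_holds(8, 2, 3)
```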
BibTeX:
@article{Van_der_Waerden-AFP,
   author  = {Katharina Kreuzer and Manuel Eberl},
   title   = {Van der Waerden's Theorem},
   journal = {Archive of Formal Proofs},
   month   = jun,
   year    = 2021,
   note    = {\url{https://isa-afp.org/entries/Van_der_Waerden.html},
             Formal proof development},
   ISSN    = {2150-914x},
 }
License: BSD License

\ No newline at end of file diff --git a/web/entries/Zeta_3_Irrational.html b/web/entries/Zeta_3_Irrational.html --- a/web/entries/Zeta_3_Irrational.html +++ b/web/entries/Zeta_3_Irrational.html @@ -1,203 +1,203 @@ The Irrationality of ζ(3) - Archive of Formal Proofs

 

 

 

 

 

 

The Irrationality of ζ(3)

 

Title: The Irrationality of ζ(3)
Author: Manuel Eberl
Submission date: 2019-12-27
Abstract:

This article provides a formalisation of Beukers's straightforward analytic proof that ζ(3) is irrational. This was first proven by Apéry (which is why this result is also often called ‘Apéry's Theorem’) using a more algebraic approach. This formalisation follows Filaseta's presentation of Beukers's proof.

BibTeX:
@article{Zeta_3_Irrational-AFP,
   author  = {Manuel Eberl},
   title   = {The Irrationality of ζ(3)},
   journal = {Archive of Formal Proofs},
   month   = dec,
   year    = 2019,
   note    = {\url{https://isa-afp.org/entries/Zeta_3_Irrational.html},
             Formal proof development},
   ISSN    = {2150-914x},
 }
License: BSD License
Depends on: E_Transcendental, Prime_Distribution_Elementary, Prime_Number_Theorem

\ No newline at end of file diff --git a/web/entries/Zeta_Function.html b/web/entries/Zeta_Function.html --- a/web/entries/Zeta_Function.html +++ b/web/entries/Zeta_Function.html @@ -1,233 +1,233 @@ The Hurwitz and Riemann ζ Functions - Archive of Formal Proofs

 

 

 

 

 

 

The Hurwitz and Riemann ζ Functions

 

Title: The Hurwitz and Riemann ζ Functions
Author: Manuel Eberl
Submission date: 2017-10-12
Abstract:

This entry builds upon the results about formal and analytic Dirichlet series to define the Hurwitz ζ function ζ(a,s) and, based on that, the Riemann ζ function ζ(s). This is done by first defining them for ℜ(s) > 1 and then successively extending the domain to the left using the Euler–MacLaurin formula.

Apart from the most basic facts such as analyticity, the following results are provided:

  • the Stieltjes constants and the Laurent expansion of ζ(s) at s = 1
  • the non-vanishing of ζ(s) for ℜ(s) ≥ 1
  • the relationship between ζ(a,s) and Γ
  • the special values at negative integers and positive even integers
  • Hurwitz's formula and the reflection formula for ζ(s)
  • the Hadjicostas–Chapman formula

The entry also contains Euler's analytic proof of the infinitude of primes, based on the fact that ζ(s) has a pole at s = 1.
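On the half-plane ℜ(s) > 1, ζ(s) is the Dirichlet series Σ k⁻ˢ, and the special value at the positive even integer s = 2 is π²/6. The partial sums approach this value with error about 1/N, as a quick Python check shows (numerical illustration only):

```python
import math

# Partial sums of the Dirichlet series for s = 2 approach the special
# value zeta(2) = pi^2 / 6; the tail sum_{k>N} 1/k^2 is below 1/N.
N = 100_000
partial = sum(1 / k ** 2 for k in range(1, N + 1))
assert abs(partial - math.pi ** 2 / 6) < 1.0 / N + 1e-12
```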

BibTeX:
@article{Zeta_Function-AFP,
   author  = {Manuel Eberl},
   title   = {The Hurwitz and Riemann ζ Functions},
   journal = {Archive of Formal Proofs},
   month   = oct,
   year    = 2017,
   note    = {\url{https://isa-afp.org/entries/Zeta_Function.html},
             Formal proof development},
   ISSN    = {2150-914x},
 }
License: BSD License
Depends on: Bernoulli, Dirichlet_Series, Euler_MacLaurin, Winding_Number_Eval
Used by: Dirichlet_L, Prime_Distribution_Elementary, Prime_Number_Theorem

\ No newline at end of file diff --git a/web/index.html b/web/index.html --- a/web/index.html +++ b/web/index.html @@ -1,5770 +1,5789 @@ Archive of Formal Proofs

 

 

 

 

 

 

Archive of Formal Proofs

 

The Archive of Formal Proofs is a collection of proof libraries, examples, and larger scientific developments, mechanically checked in the theorem prover Isabelle. It is organized in the way of a scientific journal, is indexed by dblp and has an ISSN: 2150-914x. Submissions are refereed. The preferred citation style is available [here]. We encourage companion AFP submissions to conference and journal publications.

A development version of the archive is available as well.

 

 

2021
2021-10-19: Belief Revision Theory
Authors: Valentin Fouillard, Safouan Taha, Frédéric Boulanger and Nicolas Sabouret
2021-10-12: Algebras for Iteration, Infinite Executions and Correctness of Sequential Computations
Author: Walter Guttmann
2021-10-02: Verified Quadratic Virtual Substitution for Real Arithmetic
Authors: Matias Scharager, Katherine Cordwell, Stefan Mitsch and André Platzer
2021-09-24: Soundness and Completeness of an Axiomatic System for First-Order Logic
Author: Asta Halkjær From
2021-09-18: Complex Bounded Operators
Authors: Jose Manuel Rodriguez Caballero and Dominique Unruh
2021-09-16: A Formalization of Weighted Path Orders and Recursive Path Orders
Authors: Christian Sternagel, René Thiemann and Akihisa Yamada
2021-09-06: Extension of Types-To-Sets
Author: Mihails Milehins
2021-09-06: IDE: Introduction, Destruction, Elimination
Author: Mihails Milehins
2021-09-06: Conditional Transfer Rule
Author: Mihails Milehins
2021-09-06: Conditional Simplification
Author: Mihails Milehins
2021-09-06: Category Theory for ZFC in HOL III: Universal Constructions
Author: Mihails Milehins
2021-09-06: Category Theory for ZFC in HOL I: Foundations: Design Patterns, Set Theory, Digraphs, Semicategories
Author: Mihails Milehins
2021-09-06: Category Theory for ZFC in HOL II: Elementary Theory of 1-Categories
Author: Mihails Milehins
2021-09-05: A data flow analysis algorithm for computing dominators
Author: Nan Jiang
2021-09-03: Solving Cubic and Quartic Equations
Author: René Thiemann
2021-08-26: Logging-independent Message Anonymity in the Relational Method
Author: Pasquale Noce
2021-08-21: The Theorem of Three Circles
Authors: Fox Thomson and Wenda Li
2021-08-16: Fresh identifiers
Authors: Andrei Popescu and Thomas Bauereiss
2021-08-16: CoSMed: A confidentiality-verified social media platform
Authors: Thomas Bauereiss and Andrei Popescu
2021-08-16: CoSMeDis: A confidentiality-verified distributed social media platform
Authors: Thomas Bauereiss and Andrei Popescu
2021-08-16: CoCon: A Confidentiality-Verified Conference Management System
Authors: Andrei Popescu, Peter Lammich and Thomas Bauereiss
2021-08-16: Compositional BD Security
Authors: Thomas Bauereiss and Andrei Popescu
2021-08-13: Combinatorial Design Theory
Authors: Chelsea Edmonds and Lawrence Paulson
2021-08-03: Relational Forests
Author: Walter Guttmann
2021-07-27: Schutz' Independent Axioms for Minkowski Spacetime
Authors: Richard Schmoetten, Jake Palmer and Jacques Fleuriot
2021-07-07: Finitely Generated Abelian Groups
Authors: Joseph Thommes and Manuel Eberl
2021-07-01: SpecCheck - Specification-Based Testing for Isabelle/ML
Authors: Kevin Kappelmann, Lukas Bulwahn and Sebastian Willenbrink
2021-06-22: Van der Waerden's Theorem
Authors: Katharina Kreuzer and Manuel Eberl
2021-06-18: MiniSail - A kernel language for the ISA specification language SAIL
Author: Mark Wassell
2021-06-17: Public Announcement Logic
Author: Asta Halkjær From
2021-06-04: A Shorter Compiler Correctness Proof for Language IMP
Author: Pasquale Noce
2021-05-24: Lyndon words
Authors: Štěpán Holub and Štěpán Starosta
2021-05-24: Graph Lemma
Authors: Štěpán Holub and Štěpán Starosta
2021-05-24: Combinatorics on Words Basics
Authors: Štěpán Holub, Martin Raška and Štěpán Starosta
2021-04-30: Regression Test Selection
Author: Susannah Mansky
2021-04-27: Isabelle's Metalogic: Formalization and Proof Checker
Authors: Tobias Nipkow and Simon Roßkopf
2021-04-27: Lifting the Exponent
Author: Jakub Kądziołka
2021-04-24: The BKR Decision Procedure for Univariate Real Arithmetic
Authors: Katherine Cordwell, Yong Kiam Tan and André Platzer
2021-04-23: Gale-Stewart Games
Author: Sebastiaan Joosten
2021-04-13: Formalization of Timely Dataflow's Progress Tracking Protocol
Authors: Matthias Brun, Sára Decova, Andrea Lattuada and Dmitriy Traytel
2021-04-01: Information Flow Control via Dependency Tracking
Author: Benedikt Nordhoff
2021-03-29: Grothendieck's Schemes in Algebraic Geometry
Authors: Anthony Bordg, Lawrence Paulson and Wenda Li
2021-03-23: Hensel's Lemma for the p-adic Integers
Author: Aaron Crighton
2021-03-17: Constructive Cryptography in HOL: the Communication Modeling Aspect
Authors: Andreas Lochbihler and S. Reza Sefidgar
2021-03-12: Two algorithms based on modular arithmetic: lattice basis reduction and Hermite normal form computation
Authors: Ralph Bottesch, Jose Divasón and René Thiemann
2021-03-03: Quantum projective measurements and the CHSH inequality
Author: Mnacho Echenim
2021-03-03: The Hermite–Lindemann–Weierstraß Transcendence Theorem
Author: Manuel Eberl
2021-03-01: Mereology
Author: Ben Blumson
2021-02-25: The Sunflower Lemma of Erdős and Rado
Author: René Thiemann
2021-02-24: A Verified Imperative Implementation of B-Trees
Author: Niels Mündler
2021-02-17: Formal Puiseux Series
Author: Manuel Eberl
2021-02-10: The Laws of Large Numbers
Author: Manuel Eberl
2021-01-31: Tarski's Parallel Postulate implies the 5th Postulate of Euclid, the Postulate of Playfair and the original Parallel Postulate of Euclid
Author: Roland Coghetto
2021-01-30: Solution to the xkcd Blue Eyes puzzle
Author: Jakub Kądziołka
2021-01-18: Hood-Melville Queue
Author: Alejandro Gómez-Londoño
2021-01-11: JinjaDCI: a Java semantics with dynamic class initialization
Author: Susannah Mansky

 

2020
2020-12-27: Cofinality and the Delta System Lemma
Author: Pedro Sánchez Terraf
2020-12-17: Topological semantics for paraconsistent and paracomplete logics
Author: David Fuenmayor
2020-12-08: Relational Minimum Spanning Tree Algorithms
Authors: Walter Guttmann and Nicolas Robinson-O'Brien
2020-12-07: Inline Caching and Unboxing Optimization for Interpreters
Author: Martin Desharnais
2020-12-05: The Relational Method with Message Anonymity for the Verification of Cryptographic Protocols
Author: Pasquale Noce
2020-11-22: Isabelle Marries Dirac: a Library for Quantum Computation and Quantum Information
Authors: Anthony Bordg, Hanna Lachnitt and Yijun He
2020-11-19: The HOL-CSP Refinement Toolkit
Authors: Safouan Taha, Burkhart Wolff and Lina Ye
2020-10-29: Verified SAT-Based AI Planning
Authors: Mohammad Abdulaziz and Friedrich Kurz
2020-10-29: AI Planning Languages Semantics
Authors: Mohammad Abdulaziz and Peter Lammich
2020-10-20: A Sound Type System for Physical Quantities, Units, and Measurements
Authors: Simon Foster and Burkhart Wolff
2020-10-12: Finite Map Extras
Author: Javier Díaz
2020-09-28: A Formal Model of the Safely Composable Document Object Model with Shadow Roots
Authors: Achim D. Brucker and Michael Herzberg
2020-09-28: A Formal Model of the Document Object Model with Shadow Roots
Authors: Achim D. Brucker and Michael Herzberg
2020-09-28: A Formalization of Safely Composable Web Components
Authors: Achim D. Brucker and Michael Herzberg
2020-09-28: A Formalization of Web Components
Authors: Achim D. Brucker and Michael Herzberg
2020-09-28: The Safely Composable DOM
Authors: Achim D. Brucker and Michael Herzberg
2020-09-16: Syntax-Independent Logic Infrastructure
Authors: Andrei Popescu and Dmitriy Traytel
2020-09-16: Robinson Arithmetic
Authors: Andrei Popescu and Dmitriy Traytel
2020-09-16: An Abstract Formalization of Gödel's Incompleteness Theorems
Authors: Andrei Popescu and Dmitriy Traytel
2020-09-16: From Abstract to Concrete Gödel's Incompleteness Theorems—Part II
Authors: Andrei Popescu and Dmitriy Traytel
2020-09-16: From Abstract to Concrete Gödel's Incompleteness Theorems—Part I
Authors: Andrei Popescu and Dmitriy Traytel
2020-09-07: A Formal Model of Extended Finite State Machines
Authors: Michael Foster, Achim D. Brucker, Ramsay G. Taylor and John Derrick
2020-09-07: Inference of Extended Finite State Machines
Authors: Michael Foster, Achim D. Brucker, Ramsay G. Taylor and John Derrick
2020-08-31: Practical Algebraic Calculus Checker
Authors: Mathias Fleury and Daniela Kaufmann
2020-08-31: Some classical results in inductive inference of recursive functions
Author: Frank J. Balbach
2020-08-26: Relational Disjoint-Set Forests
Author: Walter Guttmann
2020-08-25: Extensions to the Comprehensive Framework for Saturation Theorem Proving
Authors: Jasmin Blanchette and Sophie Tourret
2020-08-25: Putting the `K' into Bird's derivation of Knuth-Morris-Pratt string matching
Author: Peter Gammie
2020-08-04: Amicable Numbers
Author: Angeliki Koutsoukou-Argyraki
2020-08-03: Ordinal Partitions
Author: Lawrence C. Paulson
2020-07-21: A Formal Proof of The Chandy–Lamport Distributed Snapshot Algorithm
Authors: Ben Fiedler and Dmitriy Traytel
2020-07-13: Relational Characterisations of Paths
Authors: Walter Guttmann and Peter Höfner
2020-06-01: A Formally Verified Checker of the Safe Distance Traffic Rules for Autonomous Vehicles
Authors: Albert Rizaldi and Fabian Immler
2020-05-23: A verified algorithm for computing the Smith normal form of a matrix
Author: Jose Divasón
2020-05-16: The Nash-Williams Partition Theorem
Author: Lawrence C. Paulson
2020-05-13: A Formalization of Knuth–Bendix Orders
Authors: Christian Sternagel and René Thiemann
2020-05-12: Irrationality Criteria for Series by Erdős and Straus
Authors: Angeliki Koutsoukou-Argyraki and Wenda Li
2020-05-11: Recursion Theorem in ZF
Author: Georgy Dunaev
2020-05-08: An Efficient Normalisation Procedure for Linear Temporal Logic: Isabelle/HOL Formalisation
Author: Salomon Sickert
2020-05-06: Formalization of Forcing in Isabelle/ZF
Authors: Emmanuel Gunther, Miguel Pagano and Pedro Sánchez Terraf
2020-05-02: Banach-Steinhaus Theorem
Authors: Dominique Unruh and Jose Manuel Rodriguez Caballero
2020-04-27: Attack Trees in Isabelle for GDPR compliance of IoT healthcare systems
Author: Florian Kammueller
2020-04-24: Power Sum Polynomials
Author: Manuel Eberl
2020-04-24: The Lambert W Function on the Reals
Author: Manuel Eberl
2020-04-24: Gaussian Integers
Author: Manuel Eberl
2020-04-19: Matrices for ODEs
Author: Jonathan Julian Huerta y Munive
2020-04-16: Authenticated Data Structures As Functors
Authors: Andreas Lochbihler and Ognjen Marić
2020-04-10: Formalization of an Algorithm for Greedily Computing Associative Aggregations on Sliding Windows
Authors: Lukas Heimes, Dmitriy Traytel and Joshua Schneider
2020-04-09: A Comprehensive Framework for Saturation Theorem Proving
Author: Sophie Tourret
2020-04-09: Formalization of an Optimized Monitoring Algorithm for Metric First-Order Dynamic Logic with Aggregations
Authors: Thibault Dardinier, Lukas Heimes, Martin Raszyk, Joshua Schneider and Dmitriy Traytel
2020-04-08: Stateful Protocol Composition and Typing
Authors: Andreas V. Hess, Sebastian Mödersheim and Achim D. Brucker
2020-04-08: Automated Stateful Protocol Verification
Authors: Andreas V. Hess, Sebastian Mödersheim, Achim D. Brucker and Anders Schlichtkrull
2020-04-07: Lucas's Theorem
Author: Chelsea Edmonds
2020-03-25: Strong Eventual Consistency of the Collaborative Editing Framework WOOT
Authors: Emin Karayel and Edgar Gonzàlez
2020-03-22: Furstenberg's topology and his proof of the infinitude of primes
Author: Manuel Eberl
2020-03-12: An Under-Approximate Relational Logic
Author: Toby Murray
2020-03-07: Hello World
Authors: Cornelius Diekmann and Lars Hupel
2020-02-21: Implementing the Goodstein Function in λ-Calculus
Author: Bertram Felgenhauer
2020-02-10: A Generic Framework for Verified Compilers
Author: Martin Desharnais
2020-02-01: Arithmetic progressions and relative primes
Author: José Manuel Rodríguez Caballero
2020-01-31: A Hierarchy of Algebras for Boolean Subsets
Authors: Walter Guttmann and Bernhard Möller
2020-01-17: Mersenne primes and the Lucas–Lehmer test
Author: Manuel Eberl
2020-01-16: Verified Approximation Algorithms
Authors: Robin Eßmann, Tobias Nipkow, Simon Robillard and Ujkan Sulejmani
2020-01-13: Closest Pair of Points Algorithms
Authors: Martin Rau and Tobias Nipkow
2020-01-09: Skip Lists
Authors: Max W. Haslbeck and Manuel Eberl
2020-01-06: Bicategories
Author: Eugene W. Stark

 

2019
2019-12-27: The Irrationality of ζ(3)
Author: Manuel Eberl
2019-12-20: Formalizing a Seligman-Style Tableau System for Hybrid Logic
Author: Asta Halkjær From
2019-12-18: The Poincaré-Bendixson Theorem
Authors: Fabian Immler and Yong Kiam Tan
2019-12-16: Poincaré Disc Model
Authors: Danijela Simić, Filip Marić and Pierre Boutry
2019-12-16: Complex Geometry
Authors: Filip Marić and Danijela Simić
2019-12-10: Gauss Sums and the Pólya–Vinogradov Inequality
Authors: Rodrigo Raya and Manuel Eberl
2019-12-04: An Efficient Generalization of Counting Sort for Large, possibly Infinite Key Ranges
Author: Pasquale Noce
2019-11-27: Interval Arithmetic on 32-bit Words
Author: Brandon Bohrer
2019-10-24: Zermelo Fraenkel Set Theory in Higher-Order Logic
Author: Lawrence C. Paulson
2019-10-22: Isabelle/C
Authors: Frédéric Tuong and Burkhart Wolff
2019-10-16: VerifyThis 2019 -- Polished Isabelle Solutions
Authors: Peter Lammich and Simon Wimmer
2019-10-08: Aristotle's Assertoric Syllogistic
Author: Angeliki Koutsoukou-Argyraki
2019-10-07: Sigma Protocols and Commitment Schemes
Authors: David Butler and Andreas Lochbihler
2019-10-04: Clean - An Abstract Imperative Programming Language and its Theory
Authors: Frédéric Tuong and Burkhart Wolff
2019-09-16: Formalization of Multiway-Join Algorithms
Author: Thibault Dardinier
2019-09-10: Verification Components for Hybrid Systems
Author: Jonathan Julian Huerta y Munive
2019-09-06: Fourier Series
Author: Lawrence C Paulson
2019-08-30: A Case Study in Basic Algebra
Author: Clemens Ballarin
2019-08-16: Formalisation of an Adaptive State Counting Algorithm
Author: Robert Sachtleben
2019-08-14: Laplace Transform
Author: Fabian Immler
2019-08-06: Linear Programming
Authors: Julian Parsert and Cezary Kaliszyk
2019-08-06: Communicating Concurrent Kleene Algebra for Distributed Systems Specification
Authors: Maxime Buyse and Jason Jaskolka
2019-08-05: Selected Problems from the International Mathematical Olympiad 2019
Author: Manuel Eberl
2019-08-01: Stellar Quorum Systems
Author: Giuliano Losa
2019-07-30: A Formal Development of a Polychronous Polytimed Coordination Language
Authors: Hai Nguyen Van, Frédéric Boulanger and Burkhart Wolff
2019-07-27: Order Extension and Szpilrajn's Extension Theorem
Authors: Peter Zeller and Lukas Stevens
2019-07-18: A Sequent Calculus for First-Order Logic
Author: Asta Halkjær From
2019-07-08: A Verified Code Generator from Isabelle/HOL to CakeML
Author: Lars Hupel
2019-07-04: Formalization of a Monitoring Algorithm for Metric First-Order Temporal Logic
Authors: Joshua Schneider and Dmitriy Traytel
2019-06-27: Complete Non-Orders and Fixed Points
Authors: Akihisa Yamada and Jérémy Dubut
2019-06-25: Priority Search Trees
Authors: Peter Lammich and Tobias Nipkow
2019-06-25: Purely Functional, Simple, and Efficient Implementation of Prim and Dijkstra
Authors: Peter Lammich and Tobias Nipkow
2019-06-21: Linear Inequalities
Authors: Ralph Bottesch, Alban Reynaud and René Thiemann
2019-06-16: Hilbert's Nullstellensatz
Author: Alexander Maletzky
2019-06-15: Gröbner Bases, Macaulay Matrices and Dubé's Degree Bounds
Author: Alexander Maletzky
2019-06-13: Binary Heaps for IMP2
Author: Simon Griebel
2019-06-03: Differential Game Logic
Author: André Platzer
2019-05-30: Multidimensional Binary Search Trees
Author: Martin Rau
2019-05-14: Formalization of Generic Authenticated Data Structures
Authors: Matthias Brun and Dmitriy Traytel
2019-05-09: Multi-Party Computation
Authors: David Aspinall and David Butler
2019-04-26: HOL-CSP Version 2.0
Authors: Safouan Taha, Lina Ye and Burkhart Wolff
2019-04-16: A Compositional and Unified Translation of LTL into ω-Automata
Authors: Benedikt Seidl and Salomon Sickert
2019-04-06: A General Theory of Syntax with Bindings
Authors: Lorenzo Gheri and Andrei Popescu
2019-03-27: The Transcendence of Certain Infinite Series
Authors: Angeliki Koutsoukou-Argyraki and Wenda Li
2019-03-24: Quantum Hoare Logic
Authors: Junyi Liu, Bohua Zhan, Shuling Wang, Shenggang Ying, Tao Liu, Yangjia Li, Mingsheng Ying and Naijun Zhan
2019-03-09: Safe OCL
Author: Denis Nikiforov
2019-02-21: Elementary Facts About the Distribution of Primes
Author: Manuel Eberl
2019-02-14: Kruskal's Algorithm for Minimum Spanning Forest
Authors: Maximilian P.L. Haslbeck, Peter Lammich and Julian Biendarra
2019-02-11: Probabilistic Primality Testing
Authors: Daniel Stüwe and Manuel Eberl
2019-02-08: Universal Turing Machine
Authors: Jian Xu, Xingyuan Zhang, Christian Urban and Sebastiaan J. C. Joosten
2019-02-01: Isabelle/UTP: Mechanised Theory Engineering for Unifying Theories of Programming
Authors: Simon Foster, Frank Zeyda, Yakoub Nemouchi, Pedro Ribeiro and Burkhart Wolff
2019-02-01: The Inversions of a List
Author: Manuel Eberl
2019-01-17: Farkas' Lemma and Motzkin's Transposition Theorem
Authors: Ralph Bottesch, Max W. Haslbeck and René Thiemann
2019-01-15: IMP2 – Simple Program Verification in Isabelle/HOL
Authors: Peter Lammich and Simon Wimmer
2019-01-15: An Algebra for Higher-Order Terms
Author: Lars Hupel
2019-01-07: A Reduction Theorem for Store Buffers
Authors: Ernie Cohen and Norbert Schirmer

 

2018
2018-12-26: A Formal Model of the Document Object Model
Authors: Achim D. Brucker and Michael Herzberg
2018-12-25: Formalization of Concurrent Revisions
Author: Roy Overbeek
2018-12-21: Verifying Imperative Programs using Auto2
Author: Bohua Zhan
2018-12-17: Constructive Cryptography in HOL
Authors: Andreas Lochbihler and S. Reza Sefidgar
2018-12-11: Transformer Semantics
Author: Georg Struth
2018-12-11: Quantales
Author: Georg Struth
2018-12-11: Properties of Orderings and Lattices
Author: Georg Struth
2018-11-23: Graph Saturation
Author: Sebastiaan J. C. Joosten
2018-11-23: A Verified Functional Implementation of Bachmair and Ganzinger's Ordered Resolution Prover
Authors: Anders Schlichtkrull, Jasmin Christian Blanchette and Dmitriy Traytel
2018-11-20: Auto2 Prover
Author: Bohua Zhan
2018-11-16: Matroids
Author: Jonas Keinholz
2018-11-06: Deriving generic class instances for datatypes
Authors: Jonas Rädle and Lars Hupel
2018-10-30: Formalisation and Evaluation of Alan Gewirth's Proof for the Principle of Generic Consistency in Isabelle/HOL
Authors: David Fuenmayor and Christoph Benzmüller
2018-10-29: Epistemic Logic: Completeness of Modal Logics
Author: Asta Halkjær From
2018-10-22: Smooth Manifolds
Authors: Fabian Immler and Bohua Zhan
2018-10-19: Randomised Binary Search Trees
Author: Manuel Eberl
2018-10-19: Formalization of the Embedding Path Order for Lambda-Free Higher-Order Terms
Author: Alexander Bentkamp
2018-10-12: Upper Bounding Diameters of State Spaces of Factored Transition Systems
Authors: Friedrich Kurz and Mohammad Abdulaziz
2018-09-28: The Transcendence of π
Author: Manuel Eberl
2018-09-25: Symmetric Polynomials
Author: Manuel Eberl
2018-09-20: Signature-Based Gröbner Basis Algorithms
Author: Alexander Maletzky
2018-09-19: The Prime Number Theorem
Authors: Manuel Eberl and Lawrence C. Paulson
2018-09-15: Aggregation Algebras
Author: Walter Guttmann
2018-09-14: Octonions
Author: Angeliki Koutsoukou-Argyraki
2018-09-05: Quaternions
Author: Lawrence C. Paulson
2018-09-02: The Budan-Fourier Theorem and Counting Real Roots with Multiplicity
Author: Wenda Li
2018-08-24: An Incremental Simplex Algorithm with Unsatisfiable Core Generation
Authors: Filip Marić, Mirko Spasić and René Thiemann
2018-08-14: Minsky Machines
Author: Bertram Felgenhauer
2018-07-16: Pricing in discrete financial models
Author: Mnacho Echenim
2018-07-04: Von-Neumann-Morgenstern Utility Theorem
Authors: Julian Parsert and Cezary Kaliszyk
2018-06-23: Pell's Equation
Author: Manuel Eberl
2018-06-14: Projective Geometry
Author: Anthony Bordg
2018-06-14: The Localization of a Commutative Ring
Author: Anthony Bordg
2018-06-05: Partial Order Reduction
Author: Julian Brunner
2018-05-27: Optimal Binary Search Trees
Authors: Tobias Nipkow and Dániel Somogyi
2018-05-25: Hidden Markov Models
Author: Simon Wimmer
2018-05-24: Probabilistic Timed Automata
Authors: Simon Wimmer and Johannes Hölzl
2018-05-23: Irrational Rapidly Convergent Series
Authors: Angeliki Koutsoukou-Argyraki and Wenda Li
2018-05-23: Axiom Systems for Category Theory in Free Logic
Authors: Christoph Benzmüller and Dana Scott
2018-05-22: Monadification, Memoization and Dynamic Programming
Authors: Simon Wimmer, Shuwei Hu and Tobias Nipkow
2018-05-10: OpSets: Sequential Specifications for Replicated Datatypes
Authors: Martin Kleppmann, Victor B. F. Gomes, Dominic P. Mulligan and Alastair R. Beresford
2018-05-07: An Isabelle/HOL Formalization of the Modular Assembly Kit for Security Properties
Authors: Oliver Bračevac, Richard Gay, Sylvia Grewe, Heiko Mantel, Henning Sudbrock and Markus Tasch
2018-04-29: WebAssembly
Author: Conrad Watt
2018-04-27: VerifyThis 2018 - Polished Isabelle Solutions
Authors: Peter Lammich and Simon Wimmer
2018-04-24: Bounded Natural Functors with Covariance and Contravariance
Authors: Andreas Lochbihler and Joshua Schneider
2018-03-22: The Incompatibility of Fishburn-Strategyproofness and Pareto-Efficiency
Authors: Felix Brandt, Manuel Eberl, Christian Saile and Christian Stricker
2018-03-13: Weight-Balanced Trees
Authors: Tobias Nipkow and Stefan Dirix
2018-03-12: CakeML
Authors: Lars Hupel and Yu Zhang
2018-03-01: A Theory of Architectural Design Patterns
Author: Diego Marmsoler
2018-02-26: Hoare Logics for Time Bounds
Authors: Maximilian P. L. Haslbeck and Tobias Nipkow
2018-02-06: Treaps
Authors: Maximilian Haslbeck, Manuel Eberl and Tobias Nipkow
2018-02-06: A verified factorization algorithm for integer polynomials with polynomial complexity
Authors: Jose Divasón, Sebastiaan Joosten, René Thiemann and Akihisa Yamada
2018-02-06: First-Order Terms
Authors: Christian Sternagel and René Thiemann
2018-02-06: The Error Function
Author: Manuel Eberl
2018-02-02: A verified LLL algorithm
Authors: Ralph Bottesch, Jose Divasón, Maximilian Haslbeck, Sebastiaan Joosten, René Thiemann and Akihisa Yamada
2018-01-18: Formalization of Bachmair and Ganzinger's Ordered Resolution Prover
Authors: Anders Schlichtkrull, Jasmin Christian Blanchette, Dmitriy Traytel and Uwe Waldmann
2018-01-16: Gromov Hyperbolicity
Author: Sebastien Gouezel
2018-01-11: An Isabelle/HOL formalisation of Green's Theorem
Authors: Mohammad Abdulaziz and Lawrence C. Paulson
2018-01-08: Taylor Models
Authors: Christoph Traut and Fabian Immler

 

2017
2017-12-22: The Falling Factorial of a Sum
Author: Lukas Bulwahn
2017-12-21: The Median-of-Medians Selection Algorithm
Author: Manuel Eberl
2017-12-21: The Mason–Stothers Theorem
Author: Manuel Eberl
2017-12-21: Dirichlet L-Functions and Dirichlet's Theorem
Author: Manuel Eberl
2017-12-19: Operations on Bounded Natural Functors
Authors: Jasmin Christian Blanchette, Andrei Popescu and Dmitriy Traytel
2017-12-18: The string search algorithm by Knuth, Morris and Pratt
Authors: Fabian Hellauer and Peter Lammich
2017-11-22: Stochastic Matrices and the Perron-Frobenius Theorem
Author: René Thiemann
2017-11-09: The IMAP CmRDT
Authors: Tim Jungnickel, Lennart Oldenburg and Matthias Loibl
2017-11-06: Hybrid Multi-Lane Spatial Logic
Author: Sven Linker
2017-10-26: The Kuratowski Closure-Complement Theorem
Authors: Peter Gammie and Gianpaolo Gioiosa
2017-10-19: Transition Systems and Automata
Author: Julian Brunner
2017-10-19: Büchi Complementation
Author: Julian Brunner
2017-10-17: Evaluate Winding Numbers through Cauchy Indices
Author: Wenda Li
2017-10-17: Count the Number of Complex Roots
Author: Wenda Li
2017-10-14: Homogeneous Linear Diophantine Equations
Authors: Florian Messner, Julian Parsert, Jonas Schöpf and Christian Sternagel
2017-10-12: The Hurwitz and Riemann ζ Functions
Author: Manuel Eberl
2017-10-12: Linear Recurrences
Author: Manuel Eberl
2017-10-12: Dirichlet Series
Author: Manuel Eberl
2017-09-21: Computer-assisted Reconstruction and Assessment of E. J. Lowe's Modal Ontological Argument
Authors: David Fuenmayor and Christoph Benzmüller
2017-09-17: Representation and Partial Automation of the Principia Logico-Metaphysica in Isabelle/HOL
Author: Daniel Kirchner
2017-09-06: Anselm's God in Isabelle/HOL
Author: Ben Blumson
2017-09-01: Microeconomics and the First Welfare Theorem
Authors: Julian Parsert and Cezary Kaliszyk
2017-08-20: Root-Balanced Tree
Author: Tobias Nipkow
2017-08-20: Orbit-Stabiliser Theorem with Application to Rotational Symmetries
Author: Jonas Rädle
2017-08-16: The LambdaMu-calculus
Authors: Cristina Matache, Victor B. F. Gomes and Dominic P. Mulligan
2017-07-31: Stewart's Theorem and Apollonius' Theorem
Author: Lukas Bulwahn
2017-07-28: Dynamic Architectures
Author: Diego Marmsoler
2017-07-21: Declarative Semantics for Functional Languages
Author: Jeremy Siek
2017-07-15: HOLCF-Prelude
Authors: Joachim Breitner, Brian Huffman, Neil Mitchell and Christian Sternagel
2017-07-13: Minkowski's Theorem
Author: Manuel Eberl
2017-07-09: Verified Metatheory and Type Inference for a Name-Carrying Simply-Typed Lambda Calculus
Author: Michael Rawson
2017-07-07: A framework for establishing Strong Eventual Consistency for Conflict-free Replicated Datatypes
Authors: Victor B. F. Gomes, Martin Kleppmann, Dominic P. Mulligan and Alastair R. Beresford
2017-07-06: Stone-Kleene Relation Algebras
Author: Walter Guttmann
2017-06-21: Propositional Proof Systems
Authors: Julius Michaelis and Tobias Nipkow
2017-06-13: Partial Semigroups and Convolution Algebras
Authors: Brijesh Dongol, Victor B. F. Gomes, Ian J. Hayes and Georg Struth
2017-06-06: Buffon's Needle Problem
Author: Manuel Eberl
2017-06-01: Formalizing Push-Relabel Algorithms
Authors: Peter Lammich and S. Reza Sefidgar
2017-06-01: Flow Networks and the Min-Cut-Max-Flow Theorem
Authors: Peter Lammich and S. Reza Sefidgar
2017-05-25: Optics
Authors: Simon Foster and Frank Zeyda
2017-05-24: Developing Security Protocols by Refinement
Authors: Christoph Sprenger and Ivano Somaini
2017-05-24: Dictionary Construction
Author: Lars Hupel
2017-05-08: The Floyd-Warshall Algorithm for Shortest Paths
Authors: Simon Wimmer and Peter Lammich
2017-05-05: Probabilistic while loop
Author: Andreas Lochbihler
2017-05-05: Effect polymorphism in higher-order logic
Author: Andreas Lochbihler
2017-05-05: Monad normalisation
Authors: Joshua Schneider, Manuel Eberl and Andreas Lochbihler
2017-05-05: Game-based cryptography in HOL
Authors: Andreas Lochbihler, S. Reza Sefidgar and Bhargav Bhatt
2017-05-05: CryptHOL
Author: Andreas Lochbihler
2017-05-04: Monoidal Categories
Author: Eugene W. Stark
2017-05-01: Types, Tableaus and Gödel’s God in Isabelle/HOL
Authors: David Fuenmayor and Christoph Benzmüller
2017-04-28: Local Lexing
Author: Steven Obua
2017-04-19: Constructor Functions
Author: Lars Hupel
2017-04-18: Lazifying case constants
Author: Lars Hupel
2017-04-06: Subresultants
Authors: Sebastiaan Joosten, René Thiemann and Akihisa Yamada
2017-04-04: Expected Shape of Random Binary Search Trees
Author: Manuel Eberl
2017-03-15: The number of comparisons in QuickSort
Author: Manuel Eberl
2017-03-15: Lower bound on comparison-based sorting algorithms
Author: Manuel Eberl
2017-03-10: The Euler–MacLaurin Formula
Author: Manuel Eberl
2017-02-28: The Group Law for Elliptic Curves
Author: Stefan Berghofer
2017-02-26: Menger's Theorem
Author: Christoph Dittmann
2017-02-13: Differential Dynamic Logic
Author: Brandon Bohrer
2017-02-10: Abstract Soundness
Authors: Jasmin Christian Blanchette, Andrei Popescu and Dmitriy Traytel
2017-02-07: Stone Relation Algebras
Author: Walter Guttmann
2017-01-31: Refining Authenticated Key Agreement with Strong Adversaries
Authors: Joseph Lallemand and Christoph Sprenger
2017-01-24: Bernoulli Numbers
Authors: Lukas Bulwahn and Manuel Eberl
2017-01-17: Minimal Static Single Assignment Form
Authors: Max Wagner and Denis Lohner
2017-01-17: Bertrand's postulate
Authors: Julian Biendarra and Manuel Eberl
2017-01-12: The Transcendence of e
Author: Manuel Eberl
2017-01-08: Formal Network Models and Their Application to Firewall Policies
Authors: Achim D. Brucker, Lukas Brügger and Burkhart Wolff
2017-01-03: Verification of a Diffie-Hellman Password-based Authentication Protocol by Extending the Inductive Method
Author: Pasquale Noce
2017-01-01: First-Order Logic According to Harrison
Authors: Alexander Birch Jensen, Anders Schlichtkrull and Jørgen Villadsen

 

2016
2016-12-30: Concurrent Refinement Algebra and Rely Quotients
Authors: Julian Fell, Ian J. Hayes and Andrius Velykis
2016-12-29: The Twelvefold Way
Author: Lukas Bulwahn
2016-12-20: Proof Strategy Language
Author: Yutaka Nagashima
2016-12-07: Paraconsistency
Authors: Anders Schlichtkrull and Jørgen Villadsen
2016-11-29: COMPLX: A Verification Framework for Concurrent Imperative Programs
Authors: Sidney Amani, June Andronick, Maksym Bortin, Corey Lewis, Christine Rizkallah and Joseph Tuong
2016-11-23: Abstract Interpretation of Annotated Commands
Author: Tobias Nipkow
2016-11-16: Separata: Isabelle tactics for Separation Algebra
Authors: Zhe Hou, David Sanan, Alwen Tiu, Rajeev Gore and Ranald Clouston
2016-11-12: Formalization of Nested Multisets, Hereditary Multisets, and Syntactic Ordinals
Authors: Jasmin Christian Blanchette, Mathias Fleury and Dmitriy Traytel
2016-11-12: Formalization of Knuth–Bendix Orders for Lambda-Free Higher-Order Terms
Authors: Heiko Becker, Jasmin Christian Blanchette, Uwe Waldmann and Daniel Wand
2016-11-10: Expressiveness of Deep Learning
Author: Alexander Bentkamp
2016-10-25: Modal Logics for Nominal Transition Systems
Authors: Tjark Weber, Lars-Henrik Eriksson, Joachim Parrow, Johannes Borgström and Ramunas Gutkovas
2016-10-24: Stable Matching
Author: Peter Gammie
2016-10-21: LOFT — Verified Migration of Linux Firewalls to SDN
Authors: Julius Michaelis and Cornelius Diekmann
2016-10-19: Source Coding Theorem
Authors: Quentin Hibon and Lawrence C. Paulson
2016-10-19: A formal model for the SPARCv8 ISA and a proof of non-interference for the LEON3 processor
Authors: Zhe Hou, David Sanan, Alwen Tiu and Yang Liu
2016-10-14: The Factorization Algorithm of Berlekamp and Zassenhaus
Authors: Jose Divasón, Sebastiaan Joosten, René Thiemann and Akihisa Yamada
2016-10-11: Intersecting Chords Theorem
Author: Lukas Bulwahn
2016-10-05: Lp spaces
Author: Sebastien Gouezel
2016-09-30: Fisher–Yates shuffle
Author: Manuel Eberl
2016-09-29: Allen's Interval Calculus
Author: Fadoua Ghourabi
2016-09-23: Formalization of Recursive Path Orders for Lambda-Free Higher-Order Terms
Authors: Jasmin Christian Blanchette, Uwe Waldmann and Daniel Wand
2016-09-09: Iptables Semantics
Authors: Cornelius Diekmann and Lars Hupel
2016-09-06: A Variant of the Superposition Calculus
Author: Nicolas Peltier
2016-09-06: Stone Algebras
Author: Walter Guttmann
2016-09-01: Stirling's formula
Author: Manuel Eberl
2016-08-31: Routing
Authors: Julius Michaelis and Cornelius Diekmann
2016-08-24: Simple Firewall
Authors: Cornelius Diekmann, Julius Michaelis and Maximilian Haslbeck
2016-08-18: Infeasible Paths Elimination by Symbolic Execution Techniques: Proof of Correctness and Preservation of Paths
Authors: Romain Aissat, Frederic Voisin and Burkhart Wolff
2016-08-12: Formalizing the Edmonds-Karp Algorithm
Authors: Peter Lammich and S. Reza Sefidgar
2016-08-08: The Imperative Refinement Framework
Author: Peter Lammich
2016-08-07: Ptolemy's Theorem
Author: Lukas Bulwahn
2016-07-17: Surprise Paradox
Author: Joachim Breitner
2016-07-14: Pairing Heap
Authors: Hauke Brinkop and Tobias Nipkow
2016-07-05: A Framework for Verifying Depth-First Search Algorithms
Authors: Peter Lammich and René Neumann
2016-07-01: Chamber Complexes, Coxeter Systems, and Buildings
Author: Jeremy Sylvestre
2016-06-30: The Z Property
Authors: Bertram Felgenhauer, Julian Nagele, Vincent van Oostrom and Christian Sternagel
2016-06-30: The Resolution Calculus for First-Order Logic
Author: Anders Schlichtkrull
2016-06-28: IP Addresses
Authors: Cornelius Diekmann, Julius Michaelis and Lars Hupel
2016-06-28: Compositional Security-Preserving Refinement for Concurrent Imperative Programs
Authors: Toby Murray, Robert Sison, Edward Pierzchalski and Christine Rizkallah
2016-06-26: Category Theory with Adjunctions and Limits
Author: Eugene W. Stark
2016-06-26: Cardinality of Multisets
Author: Lukas Bulwahn
2016-06-25: A Dependent Security Type System for Concurrent Imperative Programs
Authors: Toby Murray, Robert Sison, Edward Pierzchalski and Christine Rizkallah
2016-06-21: Catalan Numbers
Author: Manuel Eberl
2016-06-18: Program Construction and Verification Components Based on Kleene Algebra
Authors: Victor B. F. Gomes and Georg Struth
2016-06-13: Conservation of CSP Noninterference Security under Concurrent Composition
Author: Pasquale Noce
2016-06-09: Finite Machine Word Library
Authors: Joel Beeren, Matthew Fernandez, Xin Gao, Gerwin Klein, Rafal Kolanski, Japheth Lim, Corey Lewis, Daniel Matichuk and Thomas Sewell
2016-05-31: Tree Decomposition
Author: Christoph Dittmann
2016-05-24: POSIX Lexing with Derivatives of Regular Expressions
Authors: Fahad Ausaf, Roy Dyckhoff and Christian Urban
2016-05-24: Cardinality of Equivalence Relations
Author: Lukas Bulwahn
2016-05-20: Perron-Frobenius Theorem for Spectral Radius Analysis
Authors: Jose Divasón, Ondřej Kunčar, René Thiemann and Akihisa Yamada
2016-05-20: The meta theory of the Incredible Proof Machine
Authors: Joachim Breitner and Denis Lohner
2016-05-18: A Constructive Proof for FLP
Authors: Benjamin Bisping, Paul-David Brodmann, Tim Jungnickel, Christina Rickmann, Henning Seidler, Anke Stüber, Arno Wilhelm-Weidner, Kirstin Peters and Uwe Nestmann
2016-05-09: A Formal Proof of the Max-Flow Min-Cut Theorem for Countable Networks
Author: Andreas Lochbihler
2016-05-05: Randomised Social Choice Theory
Author: Manuel Eberl
2016-05-04: The Incompatibility of SD-Efficiency and SD-Strategy-Proofness
Author: Manuel Eberl
2016-05-04: Spivey's Generalized Recurrence for Bell Numbers
Author: Lukas Bulwahn
2016-05-02: Gröbner Bases Theory
Authors: Fabian Immler and Alexander Maletzky
2016-04-28: No Faster-Than-Light Observers
Authors: Mike Stannett and István Németi
2016-04-27: Algorithms for Reduced Ordered Binary Decision Diagrams
Authors: Julius Michaelis, Maximilian Haslbeck, Peter Lammich and Lars Hupel
2016-04-27: A formalisation of the Cocke-Younger-Kasami algorithm
Author: Maksym Bortin
2016-04-26: Conservation of CSP Noninterference Security under Sequential Composition
Author: Pasquale Noce
2016-04-12: Kleene Algebras with Domain
Authors: Victor B. F. Gomes, Walter Guttmann, Peter Höfner, Georg Struth and Tjark Weber
2016-03-11: Propositional Resolution and Prime Implicates Generation
Author: Nicolas Peltier
2016-03-08: Timed Automata
Author: Simon Wimmer
2016-03-08: The Cartan Fixed Point Theorems
Author: Lawrence C. Paulson
2016-03-01: Linear Temporal Logic
Author: Salomon Sickert
2016-02-17: Analysis of List Update Algorithms
Authors: Maximilian P.L. Haslbeck and Tobias Nipkow
2016-02-05: Verified Construction of Static Single Assignment Form
Authors: Sebastian Ullrich and Denis Lohner
2016-01-29: Polynomial Interpolation
Authors: René Thiemann and Akihisa Yamada
2016-01-29: Polynomial Factorization
Authors: René Thiemann and Akihisa Yamada
2016-01-20: Knot Theory
Author: T.V.H. Prathamesh
2016-01-18: Tensor Product of Matrices
Author: T.V.H. Prathamesh
2016-01-14: Cardinality of Number Partitions
Author: Lukas Bulwahn

 

2015
2015-12-28: Basic Geometric Properties of Triangles
Author: Manuel Eberl
2015-12-28: The Divergence of the Prime Harmonic Series
Author: Manuel Eberl
2015-12-28: Liouville numbers
Author: Manuel Eberl
2015-12-28: Descartes' Rule of Signs
Author: Manuel Eberl
2015-12-22: The Stern-Brocot Tree
Authors: Peter Gammie and Andreas Lochbihler
2015-12-22: Applicative Lifting
Authors: Andreas Lochbihler and Joshua Schneider
2015-12-22: Algebraic Numbers in Isabelle/HOL
Authors: René Thiemann, Akihisa Yamada and Sebastiaan Joosten
2015-12-12: Cardinality of Set Partitions
Author: Lukas Bulwahn
2015-12-02: Latin Square
Author: Alexander Bentkamp
2015-12-01: Ergodic Theory
Author: Sebastien Gouezel
2015-11-19: Euler's Partition Theorem
Author: Lukas Bulwahn
2015-11-18: The Tortoise and Hare Algorithm
Author: Peter Gammie
2015-11-11: Planarity Certificates
Author: Lars Noschinski
2015-11-02: Positional Determinacy of Parity Games
Author: Christoph Dittmann
2015-09-16: A Meta-Model for the Isabelle API
Authors: Frédéric Tuong and Burkhart Wolff
2015-09-04: Converting Linear Temporal Logic to Deterministic (Generalized) Rabin Automata
Author: Salomon Sickert
2015-08-21: Matrices, Jordan Normal Forms, and Spectral Radius Theory
Authors: René Thiemann and Akihisa Yamada
2015-08-20: Decreasing Diagrams II
Author: Bertram Felgenhauer
2015-08-18: The Inductive Unwinding Theorem for CSP Noninterference Security
Author: Pasquale Noce
2015-08-12: Representations of Finite Groups
Author: Jeremy Sylvestre
2015-08-10: Analysing and Comparing Encodability Criteria for Process Calculi
Authors: Kirstin Peters and Rob van Glabbeek
2015-07-21: Generating Cases from Labeled Subgoals
Author: Lars Noschinski
2015-07-14: Landau Symbols
Author: Manuel Eberl
2015-07-14: The Akra-Bazzi theorem and the Master theorem
Author: Manuel Eberl
2015-07-07: Hermite Normal Form
Authors: Jose Divasón and Jesús Aransay
2015-06-27: Derangements Formula
Author: Lukas Bulwahn
2015-06-11: The Ipurge Unwinding Theorem for CSP Noninterference Security
Author: Pasquale Noce
2015-06-11: The Generic Unwinding Theorem for CSP Noninterference Security
Author: Pasquale Noce
2015-06-11: Binary Multirelations
Authors: Hitoshi Furusawa and Georg Struth
2015-06-11: Reasoning about Lists via List Interleaving
Author: Pasquale Noce
2015-06-07: Parameterized Dynamic Tables
Author: Tobias Nipkow
2015-05-28: Derivatives of Logical Formulas
Author: Dmitriy Traytel
2015-05-27: A Zoo of Probabilistic Systems
Authors: Johannes Hölzl, Andreas Lochbihler and Dmitriy Traytel
2015-04-30: VCG - Combinatorial Vickrey-Clarke-Groves Auctions
Authors: Marco B. Caminati, Manfred Kerber, Christoph Lange and Colin Rowat
2015-04-15: Residuated Lattices
Authors: Victor B. F. Gomes and Georg Struth
2015-04-13: Concurrent IMP
Author: Peter Gammie
2015-04-13: Relaxing Safely: Verified On-the-Fly Garbage Collection for x86-TSO
Authors: Peter Gammie, Tony Hosking and Kai Engelhardt
2015-03-30: Trie
Authors: Andreas Lochbihler and Tobias Nipkow
2015-03-18: Consensus Refined
Authors: Ognjen Maric and Christoph Sprenger
2015-03-11: Deriving class instances for datatypes
Authors: Christian Sternagel and René Thiemann
2015-02-20: The Safety of Call Arity
Author: Joachim Breitner
2015-02-12: QR Decomposition
Authors: Jose Divasón and Jesús Aransay
2015-02-12: Echelon Form
Authors: Jose Divasón and Jesús Aransay
2015-02-05: Finite Automata in Hereditarily Finite Set Theory
Author: Lawrence C. Paulson
2015-01-28: Verification of the UpDown Scheme
Author: Johannes Hölzl

 

2014
2014-11-28: The Unified Policy Framework (UPF)
Authors: Achim D. Brucker, Lukas Brügger and Burkhart Wolff
2014-10-23: Loop freedom of the (untimed) AODV routing protocol
Authors: Timothy Bourke and Peter Höfner
2014-10-13: Lifting Definition Option
Author: René Thiemann
2014-10-10: Stream Fusion in HOL with Code Generation
Authors: Andreas Lochbihler and Alexandra Maximova
2014-10-09: A Verified Compiler for Probability Density Functions
Authors: Manuel Eberl, Johannes Hölzl and Tobias Nipkow
2014-10-08: Formalization of Refinement Calculus for Reactive Systems
Author: Viorel Preoteasa
2014-10-03: XML
Authors: Christian Sternagel and René Thiemann
2014-10-03: Certification Monads
Authors: Christian Sternagel and René Thiemann
2014-09-25: Imperative Insertion Sort
Author: Christian Sternagel
2014-09-19: The Sturm-Tarski Theorem
Author: Wenda Li
2014-09-15: The Cayley-Hamilton Theorem
Authors: Stephan Adelsberger, Stefan Hetzl and Florian Pollak
2014-09-09: The Jordan-Hölder Theorem
Author: Jakob von Raumer
2014-09-04: Priority Queues Based on Braun Trees
Author: Tobias Nipkow
2014-09-03: Gauss-Jordan Algorithm and Its Applications
Authors: Jose Divasón and Jesús Aransay
2014-08-29: Vector Spaces
Author: Holden Lee
2014-08-29: Real-Valued Special Functions: Upper and Lower Bounds
Author: Lawrence C. Paulson
2014-08-13: Skew Heap
Author: Tobias Nipkow
2014-08-12: Splay Tree
Author: Tobias Nipkow
2014-07-29: Haskell's Show Class in Isabelle/HOL
Authors: Christian Sternagel and René Thiemann
2014-07-18: Formal Specification of a Generic Separation Kernel
Authors: Freek Verbeek, Sergey Tverdyshev, Oto Havle, Holger Blasum, Bruno Langenstein, Werner Stephan, Yakoub Nemouchi, Abderrahmane Feliachi, Burkhart Wolff and Julien Schmaltz
2014-07-13: pGCL for Isabelle
Author: David Cock
2014-07-07: Amortized Complexity Verified
Author: Tobias Nipkow
2014-07-04: Network Security Policy Verification
Author: Cornelius Diekmann
2014-07-03: Pop-Refinement
Author: Alessandro Coglio
2014-06-12: Decision Procedures for MSO on Words Based on Derivatives of Regular Expressions
Authors: Dmitriy Traytel and Tobias Nipkow
2014-06-08: Boolean Expression Checkers
Author: Tobias Nipkow
2014-05-28: Promela Formalization
Author: René Neumann
2014-05-28: Converting Linear-Time Temporal Logic to Generalized Büchi Automata
Authors: Alexander Schimpf and Peter Lammich
2014-05-28: Verified Efficient Implementation of Gabow's Strongly Connected Components Algorithm
Author: Peter Lammich
2014-05-28: A Fully Verified Executable LTL Model Checker
Authors: Javier Esparza, Peter Lammich, René Neumann, Tobias Nipkow, Alexander Schimpf and Jan-Georg Smaus
2014-05-28: The CAVA Automata Library
Author: Peter Lammich
2014-05-23: Transitive closure according to Roy-Floyd-Warshall
Author: Makarius Wenzel
2014-05-23: Noninterference Security in Communicating Sequential Processes
Author: Pasquale Noce
2014-05-21: Regular Algebras
Authors: Simon Foster and Georg Struth
2014-04-28: Formalisation and Analysis of Component Dependencies
Author: Maria Spichkova
2014-04-23: A Formalization of Declassification with WHAT-and-WHERE-Security
Authors: Sylvia Grewe, Alexander Lux, Heiko Mantel and Jens Sauer
2014-04-23: A Formalization of Strong Security
Authors: Sylvia Grewe, Alexander Lux, Heiko Mantel and Jens Sauer
2014-04-23: A Formalization of Assumptions and Guarantees for Compositional Noninterference
Authors: Sylvia Grewe, Heiko Mantel and Daniel Schoepe
2014-04-22: Bounded-Deducibility Security
Authors: Andrei Popescu, Peter Lammich and Thomas Bauereiss
2014-04-16: A shallow embedding of HyperCTL*
Authors: Markus N. Rabe, Peter Lammich and Andrei Popescu
2014-04-16: Abstract Completeness
Authors: Jasmin Christian Blanchette, Andrei Popescu and Dmitriy Traytel
2014-04-13: Discrete Summation
Author: Florian Haftmann
2014-04-03: Syntax and semantics of a GPU kernel programming language
Author: John Wickerson
2014-03-11: Probabilistic Noninterference
Authors: Andrei Popescu and Johannes Hölzl
2014-03-08: Mechanization of the Algebra for Wireless Networks (AWN)
Author: Timothy Bourke
2014-02-18: Mutually Recursive Partial Functions
Author: René Thiemann
2014-02-13: Properties of Random Graphs -- Subgraph Containment
Author: Lars Hupel
2014-02-11: Verification of Selection and Heap Sort Using Locales
Author: Danijela Petrovic
2014-02-07: Affine Arithmetic
Author: Fabian Immler
2014-02-06: Implementing field extensions of the form Q[sqrt(b)]
Author: René Thiemann
2014-01-30: Unified Decision Procedures for Regular Expression Equivalence
Authors: Tobias Nipkow and Dmitriy Traytel
2014-01-28: Secondary Sylow Theorems
Author: Jakob von Raumer
2014-01-25: Relation Algebra
Authors: Alasdair Armstrong, Simon Foster, Georg Struth and Tjark Weber
2014-01-23: Kleene Algebra with Tests and Demonic Refinement Algebras
Authors: Alasdair Armstrong, Victor B. F. Gomes and Georg Struth
2014-01-16: Featherweight OCL: A Proposal for a Machine-Checked Formal Semantics for OCL 2.5
Authors: Achim D. Brucker, Frédéric Tuong and Burkhart Wolff
2014-01-11: Sturm's Theorem
Author: Manuel Eberl
2014-01-11: Compositional Properties of Crypto-Based Components
Author: Maria Spichkova

 

2013
2013-12-01: A General Method for the Proof of Theorems on Tail-recursive Functions
Author: Pasquale Noce
2013-11-17: Gödel's Incompleteness Theorems
Author: Lawrence C. Paulson
2013-11-17: The Hereditarily Finite Sets
Author: Lawrence C. Paulson
2013-11-15: A Codatatype of Formal Languages
Author: Dmitriy Traytel
2013-11-14: Stream Processing Components: Isabelle/HOL Formalisation and Case Studies
Author: Maria Spichkova
2013-11-12: Gödel's God in Isabelle/HOL
Authors: Christoph Benzmüller and Bruno Woltzenlogel Paleo
2013-11-01: Decreasing Diagrams
Author: Harald Zankl
2013-10-02: Automatic Data Refinement
Author: Peter Lammich
2013-09-17: Native Word
Author: Andreas Lochbihler
2013-07-27: A Formal Model of IEEE Floating Point Arithmetic
Author: Lei Yu
2013-07-22: Pratt's Primality Certificates
Authors: Simon Wimmer and Lars Noschinski
2013-07-22: Lehmer's Theorem
Authors: Simon Wimmer and Lars Noschinski
2013-07-19: The Königsberg Bridge Problem and the Friendship Theorem
Author: Wenda Li
2013-06-27: Sound and Complete Sort Encodings for First-Order Logic
Authors: Jasmin Christian Blanchette and Andrei Popescu
2013-05-22: An Axiomatic Characterization of the Single-Source Shortest Path Problem
Author: Christine Rizkallah
2013-04-28: Graph Theory
Author: Lars Noschinski
2013-04-15: Light-weight Containers
Author: Andreas Lochbihler
2013-02-21: Nominal 2
Authors: Christian Urban, Stefan Berghofer and Cezary Kaliszyk
2013-01-31: The Correctness of Launchbury's Natural Semantics for Lazy Evaluation
Author: Joachim Breitner
2013-01-19: Ribbon Proofs
Author: John Wickerson
2013-01-16: Rank-Nullity Theorem in Linear Algebra
Authors: Jose Divasón and Jesús Aransay
2013-01-15: Kleene Algebra
Authors: Alasdair Armstrong, Georg Struth and Tjark Weber
2013-01-03: Computing N-th Roots using the Babylonian Method
Author: René Thiemann

 

2012
2012-11-14: A Separation Logic Framework for Imperative HOL
Authors: Peter Lammich and Rene Meis
2012-11-02: Open Induction
Authors: Mizuhito Ogawa and Christian Sternagel
2012-10-30: The independence of Tarski's Euclidean axiom
Author: T. J. M. Makarios
2012-10-27: Bondy's Theorem
Authors: Jeremy Avigad and Stefan Hetzl
2012-09-10: Possibilistic Noninterference
Authors: Andrei Popescu and Johannes Hölzl
2012-08-07: Generating linear orders for datatypes
Author: René Thiemann
2012-08-05: Proving the Impossibility of Trisecting an Angle and Doubling the Cube
Authors: Ralph Romanos and Lawrence C. Paulson
2012-07-27: Verifying Fault-Tolerant Distributed Algorithms in the Heard-Of Model
Authors: Henri Debrat and Stephan Merz
2012-07-01: Logical Relations for PCF
Author: Peter Gammie
2012-06-26: Type Constructor Classes and Monad Transformers
Author: Brian Huffman
2012-05-29: Psi-calculi in Isabelle
Author: Jesper Bengtson
2012-05-29: The pi-calculus in nominal logic
Author: Jesper Bengtson
2012-05-29: CCS in nominal logic
Author: Jesper Bengtson
2012-05-27: Isabelle/Circus
Authors: Abderrahmane Feliachi, Burkhart Wolff and Marie-Claude Gaudel
2012-05-11: Separation Algebra
Authors: Gerwin Klein, Rafal Kolanski and Andrew Boyton
2012-05-07: Stuttering Equivalence
Author: Stephan Merz
2012-05-02: Inductive Study of Confidentiality
Author: Giampaolo Bella
2012-04-26: Ordinary Differential Equations
Authors: Fabian Immler and Johannes Hölzl
2012-04-13: Well-Quasi-Orders
Author: Christian Sternagel
2012-03-01: Abortable Linearizable Modules
Authors: Rachid Guerraoui, Viktor Kuncak and Giuliano Losa
2012-02-29: Executable Transitive Closures
Author: René Thiemann
2012-02-06: A Probabilistic Proof of the Girth-Chromatic Number Theorem
Author: Lars Noschinski
2012-01-30: Refinement for Monadic Programs
Author: Peter Lammich
2012-01-30: Dijkstra's Shortest Path Algorithm
Authors: Benedikt Nordhoff and Peter Lammich
2012-01-03: Markov Models
Authors: Johannes Hölzl and Tobias Nipkow

 

2011
2011-11-19: A Definitional Encoding of TLA* in Isabelle/HOL
Authors: Gudmund Grov and Stephan Merz
2011-11-09: Efficient Mergesort
Author: Christian Sternagel
2011-09-22: Pseudo Hoops
Authors: George Georgescu, Laurentiu Leustean and Viorel Preoteasa
2011-09-22: Algebra of Monotonic Boolean Transformers
Author: Viorel Preoteasa
2011-09-22: Lattice Properties
Author: Viorel Preoteasa
2011-08-26: The Myhill-Nerode Theorem Based on Regular Expressions
Authors: Chunhan Wu, Xingyuan Zhang and Christian Urban
2011-08-19: Gauss-Jordan Elimination for Matrices Represented as Functions
Author: Tobias Nipkow
2011-07-21: Maximum Cardinality Matching
Author: Christine Rizkallah
2011-05-17: Knowledge-based programs
Author: Peter Gammie
2011-04-01: The General Triangle Is Unique
Author: Joachim Breitner
2011-03-14: Executable Transitive Closures of Finite Relations
Authors: Christian Sternagel and René Thiemann
2011-02-23: Interval Temporal Logic on Natural Numbers
Author: David Trachtenherz
2011-02-23: Infinite Lists
Author: David Trachtenherz
2011-02-23: AutoFocus Stream Processing for Single-Clocking and Multi-Clocking Semantics
Author: David Trachtenherz
2011-02-07: Lightweight Java
Authors: Rok Strniša and Matthew Parkinson
2011-01-10: RIPEMD-160
Author: Fabian Immler
2011-01-08: Lower Semicontinuous Functions
Author: Bogdan Grechuk

 

2010
2010-12-17: Hall's Marriage Theorem
Authors: Dongchen Jiang and Tobias Nipkow
2010-11-16: Shivers' Control Flow Analysis
Author: Joachim Breitner
2010-10-28: Finger Trees
Authors: Benedikt Nordhoff, Stefan Körner and Peter Lammich
2010-10-28: Functional Binomial Queues
Author: René Neumann
2010-10-28: Binomial Heaps and Skew Binomial Heaps
Authors: Rene Meis, Finn Nielsen and Peter Lammich
2010-08-29: Strong Normalization of Moggis's Computational Metalanguage
Author: Christian Doczkal
2010-08-10: Executable Multivariate Polynomials
Authors: Christian Sternagel, René Thiemann, Alexander Maletzky, Fabian Immler, Florian Haftmann, Andreas Lochbihler and Alexander Bentkamp
2010-08-08: Formalizing Statecharts using Hierarchical Automata
Authors: Steffen Helke and Florian Kammüller
2010-06-24: Free Groups
Author: Joachim Breitner
2010-06-20: Category Theory
Author: Alexander Katovsky
2010-06-17: Executable Matrix Operations on Matrices of Arbitrary Dimensions
Authors: Christian Sternagel and René Thiemann
2010-06-14: Abstract Rewriting
Authors: Christian Sternagel and René Thiemann
2010-05-28: Verification of the Deutsch-Schorr-Waite Graph Marking Algorithm using Data Refinement
Authors: Viorel Preoteasa and Ralph-Johan Back
2010-05-28: Semantics and Data Refinement of Invariant Based Programs
Authors: Viorel Preoteasa and Ralph-Johan Back
2010-05-22: A Complete Proof of the Robbins Conjecture
Author: Matthew Wampler-Doty
2010-05-12: Regular Sets and Expressions
Authors: Alexander Krauss and Tobias Nipkow
2010-04-30: Locally Nameless Sigma Calculus
Authors: Ludovic Henrio, Florian Kammüller, Bianca Lutz and Henry Sudhof
2010-03-29: Free Boolean Algebra
Author: Brian Huffman
2010-03-23: Inter-Procedural Information Flow Noninterference via Slicing
Author: Daniel Wasserrab
2010-03-23: Information Flow Noninterference via Slicing
Author: Daniel Wasserrab
2010-02-20: List Index
Author: Tobias Nipkow
2010-02-12: Coinductive
Author: Andreas Lochbihler

 

2009
2009-12-09: A Fast SAT Solver for Isabelle in Standard ML
Author: Armin Heller
2009-12-03: Formalizing the Logic-Automaton Connection
Authors: Stefan Berghofer and Markus Reiter
2009-11-25: Tree Automata
Author: Peter Lammich
2009-11-25: Collections Framework
Author: Peter Lammich
2009-11-22: Perfect Number Theorem
Author: Mark Ijbema
2009-11-13: Backing up Slicing: Verifying the Interprocedural Two-Phase Horwitz-Reps-Binkley Slicer
Author: Daniel Wasserrab
2009-10-30: The Worker/Wrapper Transformation
Author: Peter Gammie
2009-09-01: Ordinals and Cardinals
Author: Andrei Popescu
2009-08-28: Invertibility in Sequent Calculi
Author: Peter Chapman
2009-08-04: An Example of a Cofinitary Group in Isabelle/HOL
Author: Bart Kastermans
2009-05-06: Code Generation for Functions as Data
Author: Andreas Lochbihler
2009-04-29: Stream Fusion
Author: Brian Huffman

 

2008
2008-12-12: A Bytecode Logic for JML and Types
Authors: Lennart Beringer and Martin Hofmann
2008-11-10: Secure information flow and program logics
Authors: Lennart Beringer and Martin Hofmann
2008-11-09: Some classical results in Social Choice Theory
Author: Peter Gammie
2008-11-07: Fun With Tilings
Authors: Tobias Nipkow and Lawrence C. Paulson
2008-10-15: The Textbook Proof of Huffman's Algorithm
Author: Jasmin Christian Blanchette
2008-09-16: Towards Certified Slicing
Author: Daniel Wasserrab
2008-09-02: A Correctness Proof for the Volpano/Smith Security Typing System
Authors: Gregor Snelting and Daniel Wasserrab
2008-09-01: Arrow and Gibbard-Satterthwaite
Author: Tobias Nipkow
2008-08-26: Fun With Functions
Author: Tobias Nipkow
2008-07-23: Formal Verification of Modern SAT Solvers
Author: Filip Marić
2008-04-05: Recursion Theory I
Author: Michael Nedzelsky
2008-02-29: A Sequential Imperative Programming Language Syntax, Semantics, Hoare Logics and Verification Environment
Author: Norbert Schirmer
2008-02-29: BDD Normalisation
Authors: Veronika Ortner and Norbert Schirmer
2008-02-18: Normalization by Evaluation
Authors: Klaus Aehlig and Tobias Nipkow
2008-01-11: Quantifier Elimination for Linear Arithmetic
Author: Tobias Nipkow

 

2007
2007-12-14: Formalization of Conflict Analysis of Programs with Procedures, Thread Creation, and Monitors
Authors: Peter Lammich and Markus Müller-Olm
2007-12-03: Jinja with Threads
Author: Andreas Lochbihler
2007-11-06: Much Ado About Two
Author: Sascha Böhme
2007-08-12: Sums of Two and Four Squares
Author: Roelof Oosterhuis
2007-08-12: Fermat's Last Theorem for Exponents 3 and 4 and the Parametrisation of Pythagorean Triples
Author: Roelof Oosterhuis
2007-08-08: Fundamental Properties of Valuation Theory and Hensel's Lemma
Author: Hidetsune Kobayashi
2007-08-02: POPLmark Challenge Via de Bruijn Indices
Author: Stefan Berghofer
2007-08-02: First-Order Logic According to Fitting
Author: Stefan Berghofer

 

2006
2006-09-09: Hotel Key Card System
Author: Tobias Nipkow
2006-08-08: Abstract Hoare Logics
Author: Tobias Nipkow
2006-05-22: Flyspeck I: Tame Graphs
Authors: Gertrud Bauer and Tobias Nipkow
2006-05-15: CoreC++
Author: Daniel Wasserrab
2006-03-31: A Theory of Featherweight Java in Isabelle/HOL
Authors: J. Nathan Foster and Dimitrios Vytiniotis
2006-03-15: Instances of Schneider's generalized protocol of clock synchronization
Author: Damián Barsotti
2006-03-14: Cauchy's Mean Theorem and the Cauchy-Schwarz Inequality
Author: Benjamin Porter

 

2005
2005-11-11: Countable Ordinals
Author: Brian Huffman
2005-10-12: Fast Fourier Transform
Author: Clemens Ballarin
2005-06-24: Formalization of a Generalized Protocol for Clock Synchronization
Author: Alwen Tiu
2005-06-22: Proving the Correctness of Disk Paxos
Authors: Mauro Jaskelioff and Stephan Merz
2005-06-20: Jive Data and Store Model
Authors: Nicole Rauch and Norbert Schirmer
2005-06-01: Jinja is not Java
Authors: Gerwin Klein and Tobias Nipkow
2005-05-02: SHA1, RSA, PSS and more
Authors: Christina Lindenberg and Kai Wirt
2005-04-21: Category Theory to Yoneda's Lemma
Author: Greg O'Keefe

 

2004
2004-12-09: File Refinement
Authors: Karen Zee and Viktor Kuncak
2004-11-19: Integration theory and random variables
Author: Stefan Richter
2004-09-28: A Mechanically Verified, Efficient, Sound and Complete Theorem Prover For First Order Logic
Author: Tom Ridge
2004-09-20: Ramsey's theorem, infinitary version
Author: Tom Ridge
2004-09-20: Completeness theorem
Authors: James Margetson and Tom Ridge
2004-07-09: Compiling Exceptions Correctly
Author: Tobias Nipkow
2004-06-24: Depth First Search
Authors: Toshiaki Nishihara and Yasuhiko Minamide
2004-05-18: Groups, Rings and Modules
Authors: Hidetsune Kobayashi, L. Chen and H. Murao
2004-04-26: Topology
Author: Stefan Friedrich
2004-04-26: Lazy Lists II
Author: Stefan Friedrich
2004-04-05: Binary Search Trees
Author: Viktor Kuncak
2004-03-30: Functional Automata
Author: Tobias Nipkow
2004-03-19: Mini ML
Authors: Wolfgang Naraschewski and Tobias Nipkow
2004-03-19: AVL Trees
Authors: Tobias Nipkow and Cornelia Pusch
\ No newline at end of file diff --git a/web/rss.xml b/web/rss.xml --- a/web/rss.xml +++ b/web/rss.xml @@ -1,562 +1,569 @@ Archive of Formal Proofs https://www.isa-afp.org The Archive of Formal Proofs is a collection of proof libraries, examples, and larger scientific developments, mechanically checked in the theorem prover Isabelle. - 02 Oct 2021 00:00:00 +0000 + 19 Oct 2021 00:00:00 +0000 + + Belief Revision Theory + https://www.isa-afp.org/entries/Belief_Revision.html + https://www.isa-afp.org/entries/Belief_Revision.html + Valentin Fouillard, Safouan Taha, Frédéric Boulanger, Nicolas Sabouret + 19 Oct 2021 00:00:00 +0000 + +The 1985 paper by Carlos Alchourrón, Peter Gärdenfors, and David +Makinson (AGM), “On the Logic of Theory Change: Partial Meet +Contraction and Revision Functions” launches a large and rapidly +growing literature that employs formal models and logics to handle +changing beliefs of a rational agent and to take into account new +piece of information observed by this agent. In 2011, a review book +titled "AGM 25 Years: Twenty-Five Years of Research in Belief +Change" was edited to summarize the first twenty five years of +works based on AGM. This HOL-based AFP entry is a faithful +formalization of the AGM operators (e.g. contraction, revision, +remainder ...) axiomatized in the original paper. It also contains the +proofs of all the theorems stated in the paper that show how these +operators combine. Both proofs of Harper and Levi identities are +established. + + + Algebras for Iteration, Infinite Executions and Correctness of Sequential Computations + https://www.isa-afp.org/entries/Correctness_Algebras.html + https://www.isa-afp.org/entries/Correctness_Algebras.html + Walter Guttmann + 12 Oct 2021 00:00:00 +0000 + +We study models of state-based non-deterministic sequential +computations and describe them using algebras. We propose algebras +that describe iteration for strict and non-strict computations. 
They +unify computation models which differ in the fixpoints used to +represent iteration. We propose algebras that describe the infinite +executions of a computation. They lead to a unified approximation +order and results that connect fixpoints in the approximation and +refinement orders. This unifies the semantics of recursion for a range +of computation models. We propose algebras that describe preconditions +and the effect of while-programs under postconditions. They unify +correctness statements in two dimensions: one statement applies in +various computation models to various correctness claims. + Verified Quadratic Virtual Substitution for Real Arithmetic https://www.isa-afp.org/entries/Virtual_Substitution.html https://www.isa-afp.org/entries/Virtual_Substitution.html Matias Scharager, Katherine Cordwell, Stefan Mitsch, André Platzer 02 Oct 2021 00:00:00 +0000 This paper presents a formally verified quantifier elimination (QE) algorithm for first-order real arithmetic by linear and quadratic virtual substitution (VS) in Isabelle/HOL. The Tarski-Seidenberg theorem established that the first-order logic of real arithmetic is decidable by QE. However, in practice, QE algorithms are highly complicated and often combine multiple methods for performance. VS is a practically successful method for QE that targets formulas with low-degree polynomials. To our knowledge, this is the first work to formalize VS for quadratic real arithmetic including inequalities. The proofs necessitate various contributions to the existing multivariate polynomial libraries in Isabelle/HOL. Our framework is modularized and easily expandable (to facilitate integrating future optimizations), and could serve as a basis for developing practical general-purpose QE algorithms. 
Further, as our formalization is designed with practicality in mind, we export our development to SML and test the resulting code on 378 benchmarks from the literature, comparing to Redlog, Z3, Wolfram Engine, and SMT-RAT. This identified inconsistencies in some tools, underscoring the significance of a verified approach for the intricacies of real arithmetic. Soundness and Completeness of an Axiomatic System for First-Order Logic https://www.isa-afp.org/entries/FOL_Axiomatic.html https://www.isa-afp.org/entries/FOL_Axiomatic.html Asta Halkjær From 24 Sep 2021 00:00:00 +0000 This work is a formalization of the soundness and completeness of an axiomatic system for first-order logic. The proof system is based on System Q1 by Smullyan and the completeness proof follows his textbook "First-Order Logic" (Springer-Verlag 1968). The completeness proof is in the Henkin style where a consistent set is extended to a maximal consistent set using Lindenbaum's construction and Henkin witnesses are added during the construction to ensure saturation as well. The resulting set is a Hintikka set which, by the model existence theorem, is satisfiable in the Herbrand universe. Complex Bounded Operators https://www.isa-afp.org/entries/Complex_Bounded_Operators.html https://www.isa-afp.org/entries/Complex_Bounded_Operators.html Jose Manuel Rodriguez Caballero, Dominique Unruh 18 Sep 2021 00:00:00 +0000 We present a formalization of bounded operators on complex vector spaces. Our formalization contains material on complex vector spaces (normed spaces, Banach spaces, Hilbert spaces) that complements and goes beyond the developments of real vectors spaces in the Isabelle/HOL standard library. We define the type of bounded operators between complex vector spaces (<em>cblinfun</em>) and develop the theory of unitaries, projectors, extension of bounded linear functions (BLT theorem), adjoints, Loewner order, closed subspaces and more. 
For the finite-dimensional case, we provide code generation support by identifying finite-dimensional operators with matrices as formalized in the <a href="Jordan_Normal_Form.html">Jordan_Normal_Form</a> AFP entry. A Formalization of Weighted Path Orders and Recursive Path Orders https://www.isa-afp.org/entries/Weighted_Path_Order.html https://www.isa-afp.org/entries/Weighted_Path_Order.html Christian Sternagel, René Thiemann, Akihisa Yamada 16 Sep 2021 00:00:00 +0000 We define the weighted path order (WPO) and formalize several properties such as strong normalization, the subterm property, and closure properties under substitutions and contexts. Our definition of WPO extends the original definition by also permitting multiset comparisons of arguments instead of just lexicographic extensions. Therefore, our WPO not only subsumes lexicographic path orders (LPO), but also recursive path orders (RPO). We formally prove these subsumptions and therefore all of the mentioned properties of WPO are automatically transferable to LPO and RPO as well. Such a transformation is not required for Knuth&ndash;Bendix orders (KBO), since they have already been formalized. Nevertheless, we still provide a proof that WPO subsumes KBO and thereby underline the generality of WPO. Extension of Types-To-Sets https://www.isa-afp.org/entries/Types_To_Sets_Extension.html https://www.isa-afp.org/entries/Types_To_Sets_Extension.html Mihails Milehins 06 Sep 2021 00:00:00 +0000 In their article titled <i>From Types to Sets by Local Type Definitions in Higher-Order Logic</i> and published in the proceedings of the conference <i>Interactive Theorem Proving</i> in 2016, Ondřej Kunčar and Andrei Popescu propose an extension of the logic Isabelle/HOL and an associated algorithm for the relativization of the <i>type-based theorems</i> to more flexible <i>set-based theorems</i>, collectively referred to as <i>Types-To-Sets</i>. 
One of the aims of their work was to open an opportunity for the development of a software tool for applied relativization in the implementation of the logic Isabelle/HOL of the proof assistant Isabelle. In this article, we provide a prototype of a software framework for the interactive automated relativization of theorems in Isabelle/HOL, developed as an extension of the proof language Isabelle/Isar. The software framework incorporates the implementation of the proposed extension of the logic, and builds upon some of the ideas for further work expressed in the original article on Types-To-Sets by Ondřej Kunčar and Andrei Popescu and the subsequent article <i>Smooth Manifolds and Types to Sets for Linear Algebra in Isabelle/HOL</i> that was written by Fabian Immler and Bohua Zhan and published in the proceedings of the <i>International Conference on Certified Programs and Proofs</i> in 2019. IDE: Introduction, Destruction, Elimination https://www.isa-afp.org/entries/Intro_Dest_Elim.html https://www.isa-afp.org/entries/Intro_Dest_Elim.html Mihails Milehins 06 Sep 2021 00:00:00 +0000 The article provides the command <b>mk_ide</b> for the object logic Isabelle/HOL of the formal proof assistant Isabelle. The command <b>mk_ide</b> enables the automated synthesis of the introduction, destruction and elimination rules from arbitrary definitions of constant predicates stated in Isabelle/HOL. Conditional Transfer Rule https://www.isa-afp.org/entries/Conditional_Transfer_Rule.html https://www.isa-afp.org/entries/Conditional_Transfer_Rule.html Mihails Milehins 06 Sep 2021 00:00:00 +0000 This article provides a collection of experimental utilities for unoverloading of definitions and synthesis of conditional transfer rules for the object logic Isabelle/HOL of the formal proof assistant Isabelle written in Isabelle/ML. 
Conditional Simplification https://www.isa-afp.org/entries/Conditional_Simplification.html https://www.isa-afp.org/entries/Conditional_Simplification.html Mihails Milehins 06 Sep 2021 00:00:00 +0000 The article provides a collection of experimental general-purpose proof methods for the object logic Isabelle/HOL of the formal proof assistant Isabelle. The methods in the collection offer functionality that is similar to certain aspects of the functionality provided by the standard proof methods of Isabelle that combine classical reasoning and rewriting, such as the method <i>auto</i>, but use a different approach for rewriting. More specifically, these methods allow for the side conditions of the rewrite rules to be solved via intro-resolution. Category Theory for ZFC in HOL III: Universal Constructions https://www.isa-afp.org/entries/CZH_Universal_Constructions.html https://www.isa-afp.org/entries/CZH_Universal_Constructions.html Mihails Milehins 06 Sep 2021 00:00:00 +0000 The article provides a formalization of elements of the theory of universal constructions for 1-categories (such as limits, adjoints and Kan extensions) in the object logic ZFC in HOL of the formal proof assistant Isabelle. The article builds upon the foundations established in the AFP entry <i>Category Theory for ZFC in HOL II: Elementary Theory of 1-Categories</i>. Category Theory for ZFC in HOL I: Foundations: Design Patterns, Set Theory, Digraphs, Semicategories https://www.isa-afp.org/entries/CZH_Foundations.html https://www.isa-afp.org/entries/CZH_Foundations.html Mihails Milehins 06 Sep 2021 00:00:00 +0000 This article provides a foundational framework for the formalization of category theory in the object logic ZFC in HOL of the formal proof assistant Isabelle. 
More specifically, this article provides a formalization of canonical set-theoretic constructions internalized in the type <i>V</i> associated with the ZFC in HOL, establishes a design pattern for the formalization of mathematical structures using sequences and locales, and showcases the developed infrastructure by providing formalizations of the elementary theories of digraphs and semicategories. The methodology chosen for the formalization of the theories of digraphs and semicategories (and categories in future articles) rests on the ideas that were originally expressed in the article <i>Set-Theoretical Foundations of Category Theory</i> written by Solomon Feferman and Georg Kreisel. Thus, in the context of this work, each of the aforementioned mathematical structures is represented as a term of the type <i>V</i> embedded into a stage of the von Neumann hierarchy. Category Theory for ZFC in HOL II: Elementary Theory of 1-Categories https://www.isa-afp.org/entries/CZH_Elementary_Categories.html https://www.isa-afp.org/entries/CZH_Elementary_Categories.html Mihails Milehins 06 Sep 2021 00:00:00 +0000 This article provides a formalization of the foundations of the theory of 1-categories in the object logic ZFC in HOL of the formal proof assistant Isabelle. The article builds upon the foundations that were established in the AFP entry <i>Category Theory for ZFC in HOL I: Foundations: Design Patterns, Set Theory, Digraphs, Semicategories</i>. A data flow analysis algorithm for computing dominators https://www.isa-afp.org/entries/Dominance_CHK.html https://www.isa-afp.org/entries/Dominance_CHK.html Nan Jiang 05 Sep 2021 00:00:00 +0000 This entry formalises the fast iterative algorithm for computing dominators due to Cooper, Harvey and Kennedy. It gives a specification of computing dominators on a control flow graph where each node refers to its reverse post order number. 
A semilattice of reversed-ordered list which represents dominators is built and a Kildall-style algorithm on the semilattice is defined for computing dominators. Finally the soundness and completeness of the algorithm are proved w.r.t. the specification. Solving Cubic and Quartic Equations https://www.isa-afp.org/entries/Cubic_Quartic_Equations.html https://www.isa-afp.org/entries/Cubic_Quartic_Equations.html René Thiemann 03 Sep 2021 00:00:00 +0000 <p>We formalize Cardano's formula to solve a cubic equation $$ax^3 + bx^2 + cx + d = 0,$$ as well as Ferrari's formula to solve a quartic equation. We further turn both formulas into executable algorithms based on the algebraic number implementation in the AFP. To this end we also slightly extended this library, namely by making the minimal polynomial of an algebraic number executable, and by defining and implementing $n$-th roots of complex numbers.</p> Logging-independent Message Anonymity in the Relational Method https://www.isa-afp.org/entries/Logging_Independent_Anonymity.html https://www.isa-afp.org/entries/Logging_Independent_Anonymity.html Pasquale Noce 26 Aug 2021 00:00:00 +0000 In the context of formal cryptographic protocol verification, logging-independent message anonymity is the property for a given message to remain anonymous despite the attacker's capability of mapping messages of that sort to agents based on some intrinsic feature of such messages, rather than by logging the messages exchanged by legitimate agents as with logging-dependent message anonymity. This paper illustrates how logging-independent message anonymity can be formalized according to the relational method for formal protocol verification by considering a real-world protocol, namely the Restricted Identification one by the BSI. This sample model is used to verify that the pseudonymous identifiers output by user identification tokens remain anonymous under the expected conditions. 
The Theorem of Three Circles https://www.isa-afp.org/entries/Three_Circles.html https://www.isa-afp.org/entries/Three_Circles.html Fox Thomson, Wenda Li 21 Aug 2021 00:00:00 +0000 The Descartes test based on Bernstein coefficients and Descartes’ rule of signs effectively (over-)approximates the number of real roots of a univariate polynomial over an interval. In this entry we formalise the theorem of three circles, which gives sufficient conditions for when the Descartes test returns 0 or 1. This is the first step towards efficient root isolation. Fresh identifiers https://www.isa-afp.org/entries/Fresh_Identifiers.html https://www.isa-afp.org/entries/Fresh_Identifiers.html Andrei Popescu, Thomas Bauereiss 16 Aug 2021 00:00:00 +0000 This entry defines a type class with an operator returning a fresh identifier, given a set of already used identifiers and a preferred identifier. The entry provides a default instantiation for any infinite type, as well as executable instantiations for natural numbers and strings. CoSMed: A confidentiality-verified social media platform https://www.isa-afp.org/entries/CoSMed.html https://www.isa-afp.org/entries/CoSMed.html Thomas Bauereiss, Andrei Popescu 16 Aug 2021 00:00:00 +0000 This entry contains the confidentiality verification of (the functional kernel of) the CoSMed social media platform. The confidentiality properties are formalized as instances of BD Security [<a href="https://doi.org/10.4230/LIPIcs.ITP.2021.3">1</a>, <a href="https://www.isa-afp.org/entries/Bounded_Deducibility_Security.html">2</a>]. An innovation in the deployment of BD Security compared to previous work is the use of dynamic declassification triggers, incorporated as part of inductive bounds, for providing stronger guarantees that account for the repeated opening and closing of access windows. 
To further strengthen the confidentiality guarantees, we also prove "traceback" properties about the accessibility decisions affecting the information managed by the system. CoSMeDis: A confidentiality-verified distributed social media platform https://www.isa-afp.org/entries/CoSMeDis.html https://www.isa-afp.org/entries/CoSMeDis.html Thomas Bauereiss, Andrei Popescu 16 Aug 2021 00:00:00 +0000 This entry contains the confidentiality verification of (the functional kernel of) the CoSMeDis distributed social media platform presented in [<a href="https://doi.org/10.1109/SP.2017.24">1</a>]. CoSMeDis is a multi-node extension of the CoSMed prototype social media platform [<a href="https://doi.org/10.1007/978-3-319-43144-4_6">2</a>, <a href="https://doi.org/10.1007/s10817-017-9443-3">3</a>, <a href="https://www.isa-afp.org/entries/CoSMed.html">4</a>]. The confidentiality properties are formalized as instances of BD Security [<a href="https://doi.org/10.4230/LIPIcs.ITP.2021.3">5</a>, <a href="https://www.isa-afp.org/entries/Bounded_Deducibility_Security.html">6</a>]. The lifting of confidentiality properties from single nodes to the entire CoSMeDis network is performed using compositionality and transport theorems for BD Security, which are described in [<a href="https://doi.org/10.1109/SP.2017.24">1</a>] and formalized in a separate <a href="https://www.isa-afp.org/entries/BD_Security_Compositional.html">AFP entry</a>. CoCon: A Confidentiality-Verified Conference Management System https://www.isa-afp.org/entries/CoCon.html https://www.isa-afp.org/entries/CoCon.html Andrei Popescu, Peter Lammich, Thomas Bauereiss 16 Aug 2021 00:00:00 +0000 This entry contains the confidentiality verification of (the functional kernel of) the CoCon conference management system [<a href="https://doi.org/10.1007/978-3-319-08867-9_11">1</a>, <a href="https://doi.org/10.1007/s10817-020-09566-9">2</a>]. 
The confidentiality properties refer to the documents managed by the system, namely papers, reviews, discussion logs and acceptance/rejection decisions, and also to the assignment of reviewers to papers. They have all been formulated as instances of BD Security [<a href="https://doi.org/10.4230/LIPIcs.ITP.2021.3">3</a>, <a href="https://www.isa-afp.org/entries/Bounded_Deducibility_Security.html">4</a>] and verified using the BD Security unwinding technique. Compositional BD Security https://www.isa-afp.org/entries/BD_Security_Compositional.html https://www.isa-afp.org/entries/BD_Security_Compositional.html Thomas Bauereiss, Andrei Popescu 16 Aug 2021 00:00:00 +0000 Building on a previous <a href="https://www.isa-afp.org/entries/Bounded_Deducibility_Security.html">AFP entry</a> that formalizes the Bounded-Deducibility Security (BD Security) framework <a href="https://doi.org/10.4230/LIPIcs.ITP.2021.3">[1]</a>, we formalize compositionality and transport theorems for information flow security. These results allow lifting BD Security properties from individual components specified as transition systems to a composition of systems specified as communicating products of transition systems. The underlying ideas of these results are presented in the papers <a href="https://doi.org/10.4230/LIPIcs.ITP.2021.3">[1]</a> and <a href="https://doi.org/10.1109/SP.2017.24">[2]</a>. The latter paper also describes a major case study where these results have been used: the verification of the CoSMeDis distributed social media platform (itself formalized as an <a href="https://www.isa-afp.org/entries/CoSMeDis.html">AFP entry</a> that builds on this entry). Combinatorial Design Theory https://www.isa-afp.org/entries/Design_Theory.html https://www.isa-afp.org/entries/Design_Theory.html Chelsea Edmonds, Lawrence Paulson 13 Aug 2021 00:00:00 +0000 Combinatorial design theory studies incidence set systems with certain balance and symmetry properties. It is closely related to hypergraph theory. 
This formalisation presents a general library for formal reasoning on incidence set systems, designs and their applications, including formal definitions and proofs for many key properties, operations, and theorems on the construction and existence of designs. Notably, this includes formalising t-designs, balanced incomplete block designs (BIBD), group divisible designs (GDD), pairwise balanced designs (PBD), design isomorphisms, and the relationship between graphs and designs. A locale-centric approach has been used to manage the relationships between the many different types of designs. Theorems of particular interest include the necessary conditions for existence of a BIBD, Wilson's construction on GDDs, and Bose's inequality on resolvable designs. Parts of this formalisation are explored in the paper "A Modular First Formalisation of Combinatorial Design Theory", presented at CICM 2021. Relational Forests https://www.isa-afp.org/entries/Relational_Forests.html https://www.isa-afp.org/entries/Relational_Forests.html Walter Guttmann 03 Aug 2021 00:00:00 +0000 We study second-order formalisations of graph properties expressed as first-order formulas in relation algebras extended with a Kleene star. The formulas quantify over relations while still avoiding quantification over elements of the base set. We formalise the property of undirected graphs being acyclic this way. This involves a study of various kinds of orientation of graphs. We also verify basic algorithms to constructively prove several second-order properties. 
Schutz' Independent Axioms for Minkowski Spacetime https://www.isa-afp.org/entries/Schutz_Spacetime.html https://www.isa-afp.org/entries/Schutz_Spacetime.html Richard Schmoetten, Jake Palmer, Jacques Fleuriot 27 Jul 2021 00:00:00 +0000 This is a formalisation of Schutz' system of axioms for Minkowski spacetime published under the name "Independent axioms for Minkowski space-time" in 1997, as well as most of the results in the third chapter ("Temporal Order on a Path") of that monograph. Many results are proven here that cannot be found in Schutz, either preceding the theorem they are needed for, or within their own thematic section. Finitely Generated Abelian Groups https://www.isa-afp.org/entries/Finitely_Generated_Abelian_Groups.html https://www.isa-afp.org/entries/Finitely_Generated_Abelian_Groups.html Joseph Thommes, Manuel Eberl 07 Jul 2021 00:00:00 +0000 This article deals with the formalisation of some group-theoretic results, including the fundamental theorem of finitely generated abelian groups, which characterises the structure of these groups as a uniquely determined product of cyclic groups. Both the invariant factor decomposition and the primary decomposition are covered. Additional work includes results about the direct product, the internal direct product and more group-theoretic lemmas. SpecCheck - Specification-Based Testing for Isabelle/ML https://www.isa-afp.org/entries/SpecCheck.html https://www.isa-afp.org/entries/SpecCheck.html Kevin Kappelmann, Lukas Bulwahn, Sebastian Willenbrink 01 Jul 2021 00:00:00 +0000 SpecCheck is a <a href="https://en.wikipedia.org/wiki/QuickCheck">QuickCheck</a>-like testing framework for Isabelle/ML. You can use it to write specifications for ML functions. SpecCheck then checks whether your specification holds by testing your function against a given number of generated inputs. It helps you to identify bugs by printing counterexamples on failure and provides you with timing information. 
SpecCheck is customisable and allows you to specify your own input generators, test output formats, as well as pretty printers and shrinking functions for counterexamples among other things. Van der Waerden's Theorem https://www.isa-afp.org/entries/Van_der_Waerden.html https://www.isa-afp.org/entries/Van_der_Waerden.html Katharina Kreuzer, Manuel Eberl 22 Jun 2021 00:00:00 +0000 This article formalises the proof of Van der Waerden's Theorem from Ramsey theory. Van der Waerden's Theorem states that for integers $k$ and $l$ there exists a number $N$ which guarantees that if an integer interval of length at least $N$ is coloured with $k$ colours, there will always be an arithmetic progression of length $l$ of the same colour in said interval. The proof goes along the lines of \cite{Swan}. The smallest number $N_{k,l}$ fulfilling Van der Waerden's Theorem is then called the Van der Waerden Number. Finding the Van der Waerden Number is still an open problem for most values of $k$ and $l$. MiniSail - A kernel language for the ISA specification language SAIL https://www.isa-afp.org/entries/MiniSail.html https://www.isa-afp.org/entries/MiniSail.html Mark Wassell 18 Jun 2021 00:00:00 +0000 MiniSail is a kernel language for Sail, an instruction set architecture (ISA) specification language. Sail is an imperative language with a light-weight dependent type system similar to refinement type systems. From an ISA specification, the Sail compiler can generate theorem prover code and C (or OCaml) to give an executable emulator for an architecture. The idea behind MiniSail is to capture the key and novel features of Sail in terms of their syntax, typing rules and operational semantics, and to confirm that they work together by proving progress and preservation lemmas. We use the Nominal2 library to handle binding. 
Public Announcement Logic https://www.isa-afp.org/entries/Public_Announcement_Logic.html https://www.isa-afp.org/entries/Public_Announcement_Logic.html Asta Halkjær From 17 Jun 2021 00:00:00 +0000 This work is a formalization of public announcement logic with countably many agents. It includes proofs of soundness and completeness for a variant of the axiom system PA + DIST! + NEC!. The completeness proof builds on the Epistemic Logic theory. - - A Shorter Compiler Correctness Proof for Language IMP - https://www.isa-afp.org/entries/IMP_Compiler.html - https://www.isa-afp.org/entries/IMP_Compiler.html - Pasquale Noce - 04 Jun 2021 00:00:00 +0000 - -This paper presents a compiler correctness proof for the didactic -imperative programming language IMP, introduced in Nipkow and -Klein's book on formal programming language semantics (version of -March 2021), whose size is just two thirds of the book's proof in -the number of formal text lines. As such, it promises to constitute a -further enhanced reference for the formal verification of compilers -meant for larger, real-world programming languages. The presented -proof does not depend on language determinism, so that the proposed -approach can be applied to non-deterministic languages as well. As a -confirmation, this paper extends IMP with an additional -non-deterministic choice command, and proves compiler correctness, -viz. the simulation of compiled code execution by source code, for -such extended language. - - - Lyndon words - https://www.isa-afp.org/entries/Combinatorics_Words_Lyndon.html - https://www.isa-afp.org/entries/Combinatorics_Words_Lyndon.html - Štěpán Holub, Štěpán Starosta - 24 May 2021 00:00:00 +0000 - -Lyndon words are words lexicographically minimal in their conjugacy -class. We formalize their basic properties and characterizations, in -particular the concepts of the longest Lyndon suffix and the Lyndon -factorization. Most of the work assumes a fixed lexicographical order. 
-Nevertheless we also define the smallest relation guaranteeing -lexicographical minimality of a given word (in its conjugacy class). - diff --git a/web/statistics.html b/web/statistics.html --- a/web/statistics.html +++ b/web/statistics.html @@ -1,307 +1,307 @@ Archive of Formal Proofs

 

 

 

 

 

 

Statistics

 

Statistics

- - - - + + + +
Number of Articles:631
Number of Authors:400
Number of lemmas:~183,600
Lines of Code:~3,206,700
Number of Articles:633
Number of Authors:402
Number of lemmas:~186,300
Lines of Code:~3,225,900

Most used AFP articles:

NameUsed by ? articles
1. List-Index 19
2. Show 14
3. Coinductive 12
Collections 12
Regular-Sets 12
4. Jordan_Normal_Form 11
Landau_Symbols 11
Polynomial_Factorization 11
5. Abstract-Rewriting 10
6. Automatic_Refinement 9
Deriving 9
Native_Word 9

Growth in number of articles:

Growth in lines of code:

Growth in number of authors:

Size of articles:

\ No newline at end of file diff --git a/web/topics.html b/web/topics.html --- a/web/topics.html +++ b/web/topics.html @@ -1,995 +1,997 @@ Archive of Formal Proofs

 

 

 

 

 

 

Index by Topic

 

Computer science

Artificial intelligence

Automata and formal languages

Algorithms

Knuth_Morris_Pratt   Probabilistic_While   Comparison_Sort_Lower_Bound   Quick_Sort_Cost   TortoiseHare   Selection_Heap_Sort   VerifyThis2018   CYK   Boolean_Expression_Checkers   Efficient-Mergesort   SATSolverVerification   MuchAdoAboutTwo   First_Order_Terms   Monad_Memo_DP   Hidden_Markov_Models   Imperative_Insertion_Sort   Formal_SSA   ROBDD   Median_Of_Medians_Selection   Fisher_Yates   Optimal_BST   IMP2   Auto2_Imperative_HOL   List_Inversions   IMP2_Binary_Heap   MFOTL_Monitor   Adaptive_State_Counting   Generic_Join   VerifyThis2019   Generalized_Counting_Sort   MFODL_Monitor_Optimized   Sliding_Window_Algorithm   PAC_Checker   Regression_Test_Selection   Graph: DFS_Framework   Prpu_Maxflow   Floyd_Warshall   Roy_Floyd_Warshall   Dijkstra_Shortest_Path   EdmondsKarp_Maxflow   Depth-First-Search   GraphMarkingIBP   Transitive-Closure   Transitive-Closure-II   Gabow_SCC   Kruskal   Prim_Dijkstra_Simple   Relational_Minimum_Spanning_Trees   Distributed: DiskPaxos   GenClock   ClockSynchInst   Heard_Of   Consensus_Refined   Abortable_Linearizable_Modules   IMAP-CRDT   CRDT   Chandy_Lamport   OpSets   Stellar_Quorums   WOOT_Strong_Eventual_Consistency   Progress_Tracking   Concurrent: ConcurrentGC   Online: List_Update   Geometry: Closest_Pair_Points   Approximation: Approximation_Algorithms   Mathematical: FFT   Gauss-Jordan-Elim-Fun   UpDown_Scheme   Polynomials   Gauss_Jordan   Echelon_Form   QR_Decomposition   Hermite   Groebner_Bases   Diophantine_Eqns_Lin_Hom   Taylor_Models   LLL_Basis_Reduction   Signature_Groebner   BenOr_Kozen_Reif   Smith_Normal_Form   Safe_Distance   Modular_arithmetic_LLL_and_HNF_algorithms   Virtual_Substitution   Optimization: Simplex   Quantum computing: Isabelle_Marries_Dirac   Projective_Measurements  

Concurrency

Data structures

Functional programming

Hardware

SPARCv8  

Machine learning

Networks

Programming languages

Clean   Decl_Sem_Fun_PL   Language definitions: CakeML   WebAssembly   pGCL   GPU_Kernel_PL   LightweightJava   CoreC++   FeatherweightJava   Jinja   JinjaThreads   Locally-Nameless-Sigma   AutoFocus-Stream   FocusStreamsCaseStudies   Isabelle_Meta_Model   Simpl   Complx   Safe_OCL   Isabelle_C   JinjaDCI   Lambda calculi: Higher_Order_Terms   Launchbury   PCF   POPLmark-deBruijn   Lam-ml-Normalization   LambdaMu   Binding_Syntax_Theory   LambdaAuth   Type systems: Name_Carrying_Type_Inference   MiniML   Possibilistic_Noninterference   SIFUM_Type_Systems   Dependent_SIFUM_Type_Systems   Strong_Security   WHATandWHERE_Security   VolpanoSmith   Physical_Quantities   MiniSail   Logics: ConcurrentIMP   Refine_Monadic   Automatic_Refinement   MonoBoolTranAlgebra   Simpl   Separation_Algebra   Separation_Logic_Imperative_HOL   Relational-Incorrectness-Logic   Abstract-Hoare-Logics   Kleene_Algebra   KAT_and_DRA   KAD   BytecodeLogicJmlTypes   DataRefinementIBP   RefinementReactive   SIFPL   TLA   Ribbon_Proofs   Separata   Complx   Differential_Dynamic_Logic   Hoare_Time   IMP2   UTP   QHLProver   Differential_Game_Logic   + Correctness_Algebras   Compiling: CakeML_Codegen   Compiling-Exceptions-Correctly   NormByEval   Density_Compiler   VeriComp   IMP_Compiler   Static analysis: RIPEMD-160-SPARK   Program-Conflict-Analysis   Shivers-CFA   Slicing   HRB-Slicing   InfPathElimination   Abs_Int_ITP2012   Dominance_CHK   Transformations: Call_Arity   Refine_Imperative_HOL   WorkerWrapper   Monad_Memo_DP   Formal_SSA   Minimal_SSA   Misc: JiveDataStoreModel   Pop_Refinement   Case_Labeling   Interpreter_Optimizations  

Security

Semantics

System description languages

Logic

Philosophical aspects

General logic

Computability

Set theory

Proof theory

Rewriting

Mathematics

Order

Algebra

Analysis

Probability theory

Number theory

Games and economics

Geometry

Topology

Graph theory

Combinatorics

Category theory

Physics

Misc

Tools

\ No newline at end of file