diff --git a/metadata/metadata b/metadata/metadata --- a/metadata/metadata +++ b/metadata/metadata @@ -1,9190 +1,9378 @@ [Arith_Prog_Rel_Primes] title = Arithmetic progressions and relative primes author = José Manuel Rodríguez Caballero topic = Mathematics/Number theory date = 2020-02-01 notify = jose.manuel.rodriguez.caballero@ut.ee abstract = This article provides a formalization of the solution obtained by the author of the Problem “ARITHMETIC PROGRESSIONS” from the Putnam exam problems of 2002. The statement of the problem is as follows: For which integers n > 1 does the set of positive integers less than and relatively prime to n constitute an arithmetic progression? +[Banach_Steinhaus] +title = Banach-Steinhaus Theorem +author = Dominique Unruh , Jose Manuel Rodriguez Caballero +topic = Mathematics/Analysis +date = 2020-05-02 +notify = jose.manuel.rodriguez.caballero@ut.ee, unruh@ut.ee +abstract = + We formalize in Isabelle/HOL a result + due to S. Banach and H. Steinhaus known as + the Banach-Steinhaus theorem or Uniform boundedness principle: a + pointwise-bounded family of continuous linear operators from a Banach + space to a normed space is uniformly bounded. Our approach is an + adaptation to Isabelle/HOL of a proof due to A. Sokal. + [Complex_Geometry] title = Complex Geometry author = Filip Marić , Danijela Simić topic = Mathematics/Geometry date = 2019-12-16 notify = danijela@matf.bg.ac.rs, filip@matf.bg.ac.rs, boutry@unistra.fr abstract = A formalization of geometry of complex numbers is presented. Fundamental objects that are investigated are the complex plane extended by a single infinite point, its objects (points, lines and circles), and groups of transformations that act on them (e.g., inversions and Möbius transformations). Most objects are defined algebraically, but correspondence with classical geometric definitions is shown. [Poincare_Disc] title = Poincaré Disc Model author = Danijela Simić , Filip Marić , Pierre Boutry topic = Mathematics/Geometry date = 2019-12-16 notify = danijela@matf.bg.ac.rs, filip@matf.bg.ac.rs, boutry@unistra.fr abstract = We describe formalization of the Poincaré disc model of hyperbolic geometry within the Isabelle/HOL proof assistant. The model is defined within the extended complex plane (one dimensional complex projectives space ℂP1), formalized in the AFP entry “Complex Geometry”. Points, lines, congruence of pairs of points, betweenness of triples of points, circles, and isometries are defined within the model. It is shown that the model satisfies all Tarski's axioms except the Euclid's axiom. It is shown that it satisfies its negation and the limiting parallels axiom (which proves it to be a model of hyperbolic geometry). [Fourier] title = Fourier Series author = Lawrence C Paulson topic = Mathematics/Analysis date = 2019-09-06 notify = lp15@cam.ac.uk abstract = This development formalises the square integrable functions over the reals and the basics of Fourier series. It culminates with a proof that every well-behaved periodic function can be approximated by a Fourier series. The material is ported from HOL Light: https://github.com/jrh13/hol-light/blob/master/100/fourier.ml [Generic_Deriving] title = Deriving generic class instances for datatypes author = Jonas Rädle , Lars Hupel topic = Computer science/Data structures date = 2018-11-06 notify = jonas.raedle@gmail.com abstract =

We provide a framework for automatically deriving instances for generic type classes. Our approach is inspired by Haskell's generic-deriving package and Scala's shapeless library. In addition to generating the code for type class functions, we also attempt to automatically prove type class laws for these instances. As of now, however, some manual proofs are still required for recursive datatypes.

Note: There are already articles in the AFP that provide automatic instantiation for a number of classes. Concretely, Deriving allows the automatic instantiation of comparators, linear orders, equality, and hashing. Show instantiates a Haskell-style show class.

Our approach works for arbitrary classes (with some Isabelle/HOL overhead for each class), but for a smaller set of datatypes.
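For readers unfamiliar with the representation-based style that inspired this entry, the following Haskell sketch (using GHC.Generics) shows the idea: a function is written once over the generic representation types, and every datatype then obtains its instance from an empty instance declaration. The class names Size/GSize and the Tree datatype are invented for this illustration and are not part of the entry.

```haskell
{-# LANGUAGE DeriveGeneric, DefaultSignatures, TypeOperators, FlexibleContexts #-}
import GHC.Generics

-- A generic "size" measure, defined once over the representation
-- types U1, K1, :*:, :+: and M1.
class GSize f where
  gsize :: f a -> Int

instance GSize U1 where
  gsize _ = 1                    -- a nullary constructor counts as 1
instance Size c => GSize (K1 i c) where
  gsize (K1 x) = size x          -- a field is measured via its own Size instance
instance (GSize f, GSize g) => GSize (f :*: g) where
  gsize (a :*: b) = gsize a + gsize b
instance (GSize f, GSize g) => GSize (f :+: g) where
  gsize (L1 x) = gsize x
  gsize (R1 x) = gsize x
instance GSize f => GSize (M1 i t f) where
  gsize (M1 x) = gsize x         -- metadata wrappers are transparent

-- User-facing class: instances come "for free" via the generic default.
class Size a where
  size :: a -> Int
  default size :: (Generic a, GSize (Rep a)) => a -> Int
  size = gsize . from

instance Size Int where
  size _ = 1

data Tree = Leaf | Node Tree Int Tree
  deriving Generic
instance Size Tree               -- empty instance: the default method does the work

main :: IO ()
main = print (size (Node Leaf 3 (Node Leaf 4 Leaf)))  -- prints 5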

[Partial_Order_Reduction] title = Partial Order Reduction author = Julian Brunner topic = Computer science/Automata and formal languages date = 2018-06-05 notify = brunnerj@in.tum.de abstract = This entry provides a formalization of the abstract theory of ample set partial order reduction. The formalization includes transition systems with actions, trace theory, as well as basics on finite, infinite, and lazy sequences. We also provide a basic framework for static analysis on concurrent systems with respect to the ample set condition. [CakeML] title = CakeML author = Lars Hupel , Yu Zhang <> contributors = Johannes Åman Pohjola <> topic = Computer science/Programming languages/Language definitions date = 2018-03-12 notify = hupel@in.tum.de abstract = CakeML is a functional programming language with a proven-correct compiler and runtime system. This entry contains an unofficial version of the CakeML semantics that has been exported from the Lem specifications to Isabelle. Additionally, there are some hand-written theory files that adapt the exported code to Isabelle and port proofs from the HOL4 formalization, e.g. termination and equivalence proofs. [CakeML_Codegen] title = A Verified Code Generator from Isabelle/HOL to CakeML author = Lars Hupel topic = Computer science/Programming languages/Compiling, Logic/Rewriting date = 2019-07-08 notify = lars@hupel.info abstract = This entry contains the formalization that accompanies my PhD thesis (see https://lars.hupel.info/research/codegen/). I develop a verified compilation toolchain from executable specifications in Isabelle/HOL to CakeML abstract syntax trees. This improves over the state-of-the-art in Isabelle by providing a trustworthy procedure for code generation. [DiscretePricing] title = Pricing in discrete financial models author = Mnacho Echenim topic = Mathematics/Probability theory, Mathematics/Games and economics date = 2018-07-16 notify = mnacho.echenim@univ-grenoble-alpes.fr abstract = We have formalized the computation of fair prices for derivative products in discrete financial models. As an application, we derive a way to compute fair prices of derivative products in the Cox-Ross-Rubinstein model of a financial market, thus completing the work that was presented in this paper. extra-history = Change history: [2019-05-12]: Renamed discr_mkt predicate to stk_strict_subs and got rid of predicate A for a more natural definition of the type discrete_market; renamed basic quantity processes for coherent notation; renamed value_process into val_process and closing_value_process to cls_val_process; relaxed hypothesis of lemma CRR_market_fair_price. Added functions to price some basic options. (revision 0b813a1a833f)
[Pell] title = Pell's Equation author = Manuel Eberl topic = Mathematics/Number theory date = 2018-06-23 notify = eberlm@in.tum.de abstract =

This article gives the basic theory of Pell's equation x² = 1 + Dy², where D ∈ ℕ is a parameter and x, y are integer variables.

The main result that is proven is the following: If D is not a perfect square, then there exists a fundamental solution (x₀, y₀) that is not the trivial solution (1, 0) and which generates all other solutions (x, y) in the sense that there exists some n ∈ ℕ such that |x| + |y| √D = (x₀ + y₀ √D)ⁿ. This also implies that the set of solutions is infinite, and it gives us an explicit and executable characterisation of all the solutions.

Based on this, simple executable algorithms for computing the fundamental solution and the infinite sequence of all non-negative solutions are also provided.
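As a concrete illustration of how solutions are generated (standard number theory, not a statement of the entry's exact lemmas): multiplying out (x₀ + y₀ √D)ⁿ step by step gives simple recurrences,

\[ x_{n+1} = x_0 x_n + D\, y_0 y_n, \qquad y_{n+1} = x_0 y_n + y_0 x_n, \]

so an enumeration of all non-negative solutions needs nothing beyond integer arithmetic.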

[WebAssembly] title = WebAssembly author = Conrad Watt topic = Computer science/Programming languages/Language definitions date = 2018-04-29 notify = caw77@cam.ac.uk abstract = This is a mechanised specification of the WebAssembly language, drawn mainly from the previously published paper formalisation of Haas et al. Also included is a full proof of soundness of the type system, together with a verified type checker and interpreter. We include only a partial procedure for the extraction of the type checker and interpreter here. For more details, please see our paper in CPP 2018. [Knuth_Morris_Pratt] title = The string search algorithm by Knuth, Morris and Pratt author = Fabian Hellauer , Peter Lammich topic = Computer science/Algorithms date = 2017-12-18 notify = hellauer@in.tum.de, lammich@in.tum.de abstract = The Knuth-Morris-Pratt algorithm is often used to show that the problem of finding a string s in a text t can be solved deterministically in O(|s| + |t|) time. We use the Isabelle Refinement Framework to formulate and verify the algorithm. Via refinement, we apply some optimisations and finally use the Sepref tool to obtain executable code in Imperative/HOL. [Minkowskis_Theorem] title = Minkowski's Theorem author = Manuel Eberl topic = Mathematics/Geometry, Mathematics/Number theory date = 2017-07-13 notify = eberlm@in.tum.de abstract =

Minkowski's theorem relates a subset of ℝⁿ, the Lebesgue measure, and the integer lattice ℤⁿ: It states that any convex subset of ℝⁿ that is symmetric about the origin and has volume greater than 2ⁿ contains at least one lattice point from ℤⁿ∖{0}, i. e. a non-zero point with integer coefficients.

A related theorem which directly implies this is Blichfeldt's theorem, which states that any subset of ℝⁿ with a volume greater than 1 contains two different points whose difference vector has integer components.

The entry contains a proof of both theorems.
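To sketch the standard argument connecting the two results (a worked reasoning step, not a quote of the entry's proof): if S is convex, symmetric about the origin, and has volume greater than 2ⁿ, then the scaled set ½S has volume greater than 1, so Blichfeldt's theorem yields two distinct points x, y ∈ ½S with x - y ∈ ℤⁿ; by symmetry -2y ∈ S, and by convexity x - y = ½(2x) + ½(-2y) ∈ S, which is the desired non-zero lattice point in S.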

[Name_Carrying_Type_Inference] title = Verified Metatheory and Type Inference for a Name-Carrying Simply-Typed Lambda Calculus author = Michael Rawson topic = Computer science/Programming languages/Type systems date = 2017-07-09 notify = mr644@cam.ac.uk, michaelrawson76@gmail.com abstract = I formalise a Church-style simply-typed \(\lambda\)-calculus, extended with pairs, a unit value, and projection functions, and show some metatheory of the calculus, such as the subject reduction property. Particular attention is paid to the treatment of names in the calculus. A nominal style of binding is used, but I use a manual approach over Nominal Isabelle in order to extract an executable type inference algorithm. More information can be found in my undergraduate dissertation. [Propositional_Proof_Systems] title = Propositional Proof Systems author = Julius Michaelis , Tobias Nipkow topic = Logic/Proof theory date = 2017-06-21 notify = maintainafpppt@liftm.de abstract = We formalize a range of proof systems for classical propositional logic (sequent calculus, natural deduction, Hilbert systems, resolution) and prove the most important meta-theoretic results about semantics and proofs: compactness, soundness, completeness, translations between proof systems, cut-elimination, interpolation and model existence. [Optics] title = Optics author = Simon Foster , Frank Zeyda topic = Computer science/Functional programming, Mathematics/Algebra date = 2017-05-25 notify = simon.foster@york.ac.uk abstract = Lenses provide an abstract interface for manipulating data types through spatially-separated views. They are defined abstractly in terms of two functions, get, the return a value from the source type, and put that updates the value. We mechanise the underlying theory of lenses, in terms of an algebraic hierarchy of lenses, including well-behaved and very well-behaved lenses, each lens class being characterised by a set of lens laws. We also mechanise a lens algebra in Isabelle that enables their composition and comparison, so as to allow construction of complex lenses. This is accompanied by a large library of algebraic laws. Moreover we also show how the lens classes can be applied by instantiating them with a number of Isabelle data types. extra-history = Change history: [2020-03-02]: Added partial bijective and symmetric lenses. Improved alphabet command generating additional lenses and results. Several additional lens relations, including observational equivalence. Additional theorems throughout. Adaptations for Isabelle 2020. (revision 44e2e5c) [Game_Based_Crypto] title = Game-based cryptography in HOL author = Andreas Lochbihler , S. Reza Sefidgar <>, Bhargav Bhatt topic = Computer science/Security/Cryptography date = 2017-05-05 notify = mail@andreas-lochbihler.de abstract =

In this AFP entry, we show how to specify game-based cryptographic security notions and formally prove secure several cryptographic constructions from the literature using the CryptHOL framework. Among others, we formalise the notions of a random oracle, a pseudo-random function, an unpredictable function, and of encryption schemes that are indistinguishable under chosen plaintext and/or ciphertext attacks. We prove the random-permutation/random-function switching lemma, security of the Elgamal and hashed Elgamal public-key encryption schemes, and correctness and security of several constructions with pseudo-random functions.

Our proofs follow the game-hopping style advocated by Shoup and Bellare and Rogaway, from which most of the examples have been taken. We generalise some of their results such that they can be reused in other proofs. Thanks to CryptHOL's integration with Isabelle's parametricity infrastructure, many simple hops are easily justified using the theory of representation independence.

extra-history = Change history: [2018-09-28]: added the CryptHOL tutorial for game-based cryptography (revision 489a395764ae) [Multi_Party_Computation] title = Multi-Party Computation author = David Aspinall , David Butler topic = Computer science/Security date = 2019-05-09 notify = dbutler@turing.ac.uk abstract = We use CryptHOL to consider Multi-Party Computation (MPC) protocols. MPC was first considered by Yao in 1983 and recent advances in efficiency and an increased demand mean it is now deployed in the real world. Security is considered using the real/ideal world paradigm. We first define security in the semi-honest security setting where parties are assumed not to deviate from the protocol transcript. In this setting we prove multiple Oblivious Transfer (OT) protocols secure and then show security for the gates of the GMW protocol. We then define malicious security, this is a stronger notion of security where parties are assumed to be fully corrupted by an adversary. In this setting we again consider OT, as it is a fundamental building block of almost all MPC protocols. [Sigma_Commit_Crypto] title = Sigma Protocols and Commitment Schemes author = David Butler , Andreas Lochbihler topic = Computer science/Security/Cryptography date = 2019-10-07 notify = dbutler@turing.ac.uk abstract = We use CryptHOL to formalise commitment schemes and Sigma-protocols. Both are widely used fundamental two party cryptographic primitives. Security for commitment schemes is considered using game-based definitions whereas the security of Sigma-protocols is considered using both the game-based and simulation-based security paradigms. In this work, we first define security for both primitives and then prove secure multiple case studies: the Schnorr, Chaum-Pedersen and Okamoto Sigma-protocols as well as a construction that allows for compound (AND and OR statements) Sigma-protocols and the Pedersen and Rivest commitment schemes. We also prove that commitment schemes can be constructed from Sigma-protocols. We formalise this proof at an abstract level, only assuming the existence of a Sigma-protocol; consequently, the instantiations of this result for the concrete Sigma-protocols we consider come for free. [CryptHOL] title = CryptHOL author = Andreas Lochbihler topic = Computer science/Security/Cryptography, Computer science/Functional programming, Mathematics/Probability theory date = 2017-05-05 notify = mail@andreas-lochbihler.de abstract =

CryptHOL provides a framework for formalising cryptographic arguments in Isabelle/HOL. It shallowly embeds a probabilistic functional programming language in higher order logic. The language features monadic sequencing, recursion, random sampling, failures and failure handling, and black-box access to oracles. Oracles are probabilistic functions which maintain hidden state between different invocations. All operators are defined in the new semantic domain of generative probabilistic values, a codatatype. We derive proof rules for the operators and establish a connection with the theory of relational parametricity. Thus, the resulting proofs are trustworthy and comprehensible, and the framework is extensible and widely applicable.

The framework is used in the accompanying AFP entry "Game-based Cryptography in HOL". There, we showcase our framework by formalizing different game-based proofs from the literature. This formalisation continues the work described in the author's ESOP 2016 paper.

[Constructive_Cryptography] title = Constructive Cryptography in HOL author = Andreas Lochbihler , S. Reza Sefidgar<> topic = Computer science/Security/Cryptography, Mathematics/Probability theory date = 2018-12-17 notify = mail@andreas-lochbihler.de, reza.sefidgar@inf.ethz.ch abstract = Inspired by Abstract Cryptography, we extend CryptHOL, a framework for formalizing game-based proofs, with an abstract model of Random Systems and provide proof rules about their composition and equality. This foundation facilitates the formalization of Constructive Cryptography proofs, where the security of a cryptographic scheme is realized as a special form of construction in which a complex random system is built from simpler ones. This is a first step towards a fully-featured compositional framework, similar to Universal Composability framework, that supports formalization of simulation-based proofs. [Probabilistic_While] title = Probabilistic while loop author = Andreas Lochbihler topic = Computer science/Functional programming, Mathematics/Probability theory, Computer science/Algorithms date = 2017-05-05 notify = mail@andreas-lochbihler.de abstract = This AFP entry defines a probabilistic while operator based on sub-probability mass functions and formalises zero-one laws and variant rules for probabilistic loop termination. As applications, we implement probabilistic algorithms for the Bernoulli, geometric and arbitrary uniform distributions that only use fair coin flips, and prove them correct and terminating with probability 1. extra-history = Change history: [2018-02-02]: Added a proof that probabilistic conditioning can be implemented by repeated sampling. (revision 305867c4e911)
[Monad_Normalisation] title = Monad normalisation author = Joshua Schneider <>, Manuel Eberl , Andreas Lochbihler topic = Tools, Computer science/Functional programming, Logic/Rewriting date = 2017-05-05 notify = mail@andreas-lochbihler.de abstract = The usual monad laws can directly be used as rewrite rules for Isabelle’s simplifier to normalise monadic HOL terms and decide equivalences. In a commutative monad, however, the commutativity law is a higher-order permutative rewrite rule that makes the simplifier loop. This AFP entry implements a simproc that normalises monadic expressions in commutative monads using ordered rewriting. The simproc can also permute computations across control operators like if and case. [Monomorphic_Monad] title = Effect polymorphism in higher-order logic author = Andreas Lochbihler topic = Computer science/Functional programming date = 2017-05-05 notify = mail@andreas-lochbihler.de abstract = The notion of a monad cannot be expressed within higher-order logic (HOL) due to type system restrictions. We show that if a monad is used with values of only one type, this notion can be formalised in HOL. Based on this idea, we develop a library of effect specifications and implementations of monads and monad transformers. Hence, we can abstract over the concrete monad in HOL definitions and thus use the same definition for different (combinations of) effects. We illustrate the usefulness of effect polymorphism with a monadic interpreter for a simple language. extra-history = Change history: [2018-02-15]: added further specifications and implementations of non-determinism; more examples (revision bc5399eea78e)
[Constructor_Funs] title = Constructor Functions author = Lars Hupel topic = Tools date = 2017-04-19 notify = hupel@in.tum.de abstract = Isabelle's code generator performs various adaptations for target languages. Among others, constructor applications have to be fully saturated. That means that for constructor calls occuring as arguments to higher-order functions, synthetic lambdas have to be inserted. This entry provides tooling to avoid this construction altogether by introducing constructor functions. [Lazy_Case] title = Lazifying case constants author = Lars Hupel topic = Tools date = 2017-04-18 notify = hupel@in.tum.de abstract = Isabelle's code generator performs various adaptations for target languages. Among others, case statements are printed as match expressions. Internally, this is a sophisticated procedure, because in HOL, case statements are represented as nested calls to the case combinators as generated by the datatype package. Furthermore, the procedure relies on laziness of match expressions in the target language, i.e., that branches guarded by patterns that fail to match are not evaluated. Similarly, if-then-else is printed to the corresponding construct in the target language. This entry provides tooling to replace these special cases in the code generator by ignoring these target language features, instead printing case expressions and if-then-else as functions. [Dict_Construction] title = Dictionary Construction author = Lars Hupel topic = Tools date = 2017-05-24 notify = hupel@in.tum.de abstract = Isabelle's code generator natively supports type classes. For targets that do not have language support for classes and instances, it performs the well-known dictionary translation, as described by Haftmann and Nipkow. This translation happens outside the logic, i.e., there is no guarantee that it is correct, besides the pen-and-paper proof. This work implements a certified dictionary translation that produces new class-free constants and derives equality theorems. [Higher_Order_Terms] title = An Algebra for Higher-Order Terms author = Lars Hupel contributors = Yu Zhang <> topic = Computer science/Programming languages/Lambda calculi date = 2019-01-15 notify = lars@hupel.info abstract = In this formalization, I introduce a higher-order term algebra, generalizing the notions of free variables, matching, and substitution. The need arose from the work on a verified compiler from Isabelle to CakeML. Terms can be thought of as consisting of a generic (free variables, constants, application) and a specific part. As example applications, this entry provides instantiations for de-Bruijn terms, terms with named variables, and Blanchette’s λ-free higher-order terms. Furthermore, I implement translation functions between de-Bruijn terms and named terms and prove their correctness. [Subresultants] title = Subresultants author = Sebastiaan Joosten , René Thiemann , Akihisa Yamada topic = Mathematics/Algebra date = 2017-04-06 notify = rene.thiemann@uibk.ac.at abstract = We formalize the theory of subresultants and the subresultant polynomial remainder sequence as described by Brown and Traub. As a result, we obtain efficient certified algorithms for computing the resultant and the greatest common divisor of polynomials. [Comparison_Sort_Lower_Bound] title = Lower bound on comparison-based sorting algorithms author = Manuel Eberl topic = Computer science/Algorithms date = 2017-03-15 notify = eberlm@in.tum.de abstract =

This article contains a formal proof of the well-known fact that the number of comparisons that a comparison-based sorting algorithm needs to perform to sort a list of length n is at least log₂(n!) in the worst case, i. e. Ω(n log n).

For this purpose, a shallow embedding for comparison-based sorting algorithms is defined: a sorting algorithm is a recursive datatype containing either a HOL function or a query of a comparison oracle with a continuation containing the remaining computation. This makes it possible to force the algorithm to use only comparisons and to track the number of comparisons made.
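The counting argument behind the bound can be summarised in one line (standard reasoning, recalled here only as orientation): an algorithm that always terminates within k comparisons can exhibit at most 2ᵏ distinct comparison outcomes, yet it must behave differently on the n! possible orderings of the input, hence

\[ 2^k \ge n! \quad\Longrightarrow\quad k \ge \log_2(n!) = n \log_2 n - n \log_2 e + O(\log n) = \Omega(n \log n). \]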

[Quick_Sort_Cost] title = The number of comparisons in QuickSort author = Manuel Eberl topic = Computer science/Algorithms date = 2017-03-15 notify = eberlm@in.tum.de abstract =

We give a formal proof of the well-known results about the number of comparisons performed by two variants of QuickSort: first, the expected number of comparisons of randomised QuickSort (i. e. QuickSort with random pivot choice) is 2(n+1)Hₙ - 4n, which is asymptotically equivalent to 2n ln n; second, the number of comparisons performed by the classic non-randomised QuickSort has the same distribution in the average case as the randomised one.
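For reference, the closed form and its asymptotics (a standard computation using only the stated result and Hₙ = ln n + O(1)):

\[ \mathbb{E}[C_n] = 2(n+1)H_n - 4n = 2n \ln n + O(n) \approx 1.39\, n \log_2 n. \]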

[Random_BSTs] title = Expected Shape of Random Binary Search Trees author = Manuel Eberl topic = Computer science/Data structures date = 2017-04-04 notify = eberlm@in.tum.de abstract =

This entry contains proofs for the textbook results about the distributions of the height and internal path length of random binary search trees (BSTs), i. e. BSTs that are formed by taking an empty BST and inserting elements from a fixed set in random order.

In particular, we prove a logarithmic upper bound on the expected height and the Θ(n log n) closed-form solution for the expected internal path length in terms of the harmonic numbers. We also show how the internal path length relates to the average-case cost of a lookup in a BST.

[Randomised_BSTs] title = Randomised Binary Search Trees author = Manuel Eberl topic = Computer science/Data structures date = 2018-10-19 notify = eberlm@in.tum.de abstract =

This work is a formalisation of the Randomised Binary Search Trees introduced by Martínez and Roura, including definitions and correctness proofs.

Like randomised treaps, they are a probabilistic data structure that behaves exactly as if elements were inserted into a non-balancing BST in random order. However, unlike treaps, they only use discrete probability distributions, but their use of randomness is more complicated.

[E_Transcendental] title = The Transcendence of e author = Manuel Eberl topic = Mathematics/Analysis, Mathematics/Number theory date = 2017-01-12 notify = eberlm@in.tum.de abstract =

This work contains a proof that Euler's number e is transcendental. The proof follows the standard approach of assuming that e is algebraic and then using a specific integer polynomial to derive two inconsistent bounds, leading to a contradiction.

This kind of approach can be found in many different sources; this formalisation mostly follows a PlanetMath article by Roger Lipsett.

[Pi_Transcendental] title = The Transcendence of π author = Manuel Eberl topic = Mathematics/Number theory date = 2018-09-28 notify = eberlm@in.tum.de abstract =

This entry shows the transcendence of π based on the classic proof using the fundamental theorem of symmetric polynomials first given by von Lindemann in 1882, but the formalisation mostly follows the version by Niven. The proof reuses much of the machinery developed in the AFP entry on the transcendence of e.

[DFS_Framework] title = A Framework for Verifying Depth-First Search Algorithms author = Peter Lammich , René Neumann notify = lammich@in.tum.de date = 2016-07-05 topic = Computer science/Algorithms/Graph abstract =

This entry presents a framework for the modular verification of DFS-based algorithms, which is described in our [CPP-2015] paper. It provides a generic DFS algorithm framework that can be parameterized with user-defined actions on certain events (e.g. discovery of a new node). It comes with an extensible library of invariants, which can be used to derive invariants of a specific parameterization. Using refinement techniques, efficient implementations of the algorithms can easily be derived. Here, the framework comes with templates for a recursive and a tail-recursive implementation, and also with several templates for implementing the data structures required by the DFS algorithm. Finally, this entry contains a set of re-usable DFS-based algorithms, which illustrate the application of the framework.

[CPP-2015] Peter Lammich, René Neumann: A Framework for Verifying Depth-First Search Algorithms. CPP 2015: 137-146

[Flow_Networks] title = Flow Networks and the Min-Cut-Max-Flow Theorem author = Peter Lammich , S. Reza Sefidgar <> topic = Mathematics/Graph theory date = 2017-06-01 notify = lammich@in.tum.de abstract = We present a formalization of flow networks and the Min-Cut-Max-Flow theorem. Our formal proof closely follows a standard textbook proof, and is accessible even without being an expert in Isabelle/HOL, the interactive theorem prover used for the formalization. [Prpu_Maxflow] title = Formalizing Push-Relabel Algorithms author = Peter Lammich , S. Reza Sefidgar <> topic = Computer science/Algorithms/Graph, Mathematics/Graph theory date = 2017-06-01 notify = lammich@in.tum.de abstract = We present a formalization of push-relabel algorithms for computing the maximum flow in a network. We start with Goldberg's et al.~generic push-relabel algorithm, for which we show correctness and the time complexity bound of O(V^2E). We then derive the relabel-to-front and FIFO implementation. Using stepwise refinement techniques, we derive an efficient verified implementation. Our formal proof of the abstract algorithms closely follows a standard textbook proof. It is accessible even without being an expert in Isabelle/HOL, the interactive theorem prover used for the formalization. [Buildings] title = Chamber Complexes, Coxeter Systems, and Buildings author = Jeremy Sylvestre notify = jeremy.sylvestre@ualberta.ca date = 2016-07-01 topic = Mathematics/Algebra, Mathematics/Geometry abstract = We provide a basic formal framework for the theory of chamber complexes and Coxeter systems, and for buildings as thick chamber complexes endowed with a system of apartments. Along the way, we develop some of the general theory of abstract simplicial complexes and of groups (relying on the group_add class for the basics), including free groups and group presentations, and their universal properties. The main results verified are that the deletion condition is both necessary and sufficient for a group with a set of generators of order two to be a Coxeter system, and that the apartments in a (thick) building are all uniformly Coxeter. [Algebraic_VCs] title = Program Construction and Verification Components Based on Kleene Algebra author = Victor B. F. Gomes , Georg Struth notify = victor.gomes@cl.cam.ac.uk, g.struth@sheffield.ac.uk date = 2016-06-18 topic = Mathematics/Algebra abstract = Variants of Kleene algebra support program construction and verification by algebraic reasoning. This entry provides a verification component for Hoare logic based on Kleene algebra with tests, verification components for weakest preconditions and strongest postconditions based on Kleene algebra with domain and a component for step-wise refinement based on refinement Kleene algebra with tests. In addition to these components for the partial correctness of while programs, a verification component for total correctness based on divergence Kleene algebras and one for (partial correctness) of recursive programs based on domain quantales are provided. Finally we have integrated memory models for programs with pointers and a program trace semantics into the weakest precondition component. 
[C2KA_DistributedSystems] title = Communicating Concurrent Kleene Algebra for Distributed Systems Specification author = Maxime Buyse , Jason Jaskolka topic = Computer science/Automata and formal languages, Mathematics/Algebra date = 2019-08-06 notify = maxime.buyse@polytechnique.edu, jason.jaskolka@carleton.ca abstract = Communicating Concurrent Kleene Algebra (C²KA) is a mathematical framework for capturing the communicating and concurrent behaviour of agents in distributed systems. It extends Hoare et al.'s Concurrent Kleene Algebra (CKA) with communication actions through the notions of stimuli and shared environments. C²KA has applications in studying system-level properties of distributed systems such as safety, security, and reliability. In this work, we formalize results about C²KA and its application for distributed systems specification. We first formalize the stimulus structure and behaviour structure (CKA). Next, we combine them to formalize C²KA and its properties. Then, we formalize notions and properties related to the topology of distributed systems and the potential for communication via stimuli and via shared environments of agents, all within the algebraic setting of C²KA. [Card_Equiv_Relations] title = Cardinality of Equivalence Relations author = Lukas Bulwahn notify = lukas.bulwahn@gmail.com date = 2016-05-24 topic = Mathematics/Combinatorics abstract = This entry provides formulae for counting the number of equivalence relations and partial equivalence relations over a finite carrier set with given cardinality. To count the number of equivalence relations, we provide bijections between equivalence relations and set partitions, and then transfer the main results of the two AFP entries, Cardinality of Set Partitions and Spivey's Generalized Recurrence for Bell Numbers, to theorems on equivalence relations. To count the number of partial equivalence relations, we observe that counting partial equivalence relations over a set A is equivalent to counting all equivalence relations over all subsets of the set A. From this observation and the results on equivalence relations, we show that the cardinality of partial equivalence relations over a finite set of cardinality n is equal to the n+1-th Bell number. [Twelvefold_Way] title = The Twelvefold Way author = Lukas Bulwahn topic = Mathematics/Combinatorics date = 2016-12-29 notify = lukas.bulwahn@gmail.com abstract = This entry provides all cardinality theorems of the Twelvefold Way. The Twelvefold Way systematically classifies twelve related combinatorial problems concerning two finite sets, which include counting permutations, combinations, multisets, set partitions and number partitions. This development builds upon the existing formal developments with cardinality theorems for those structures. It provides twelve bijections from the various structures to different equivalence classes on finite functions, and hence, proves cardinality formulae for these equivalence classes on finite functions. [Chord_Segments] title = Intersecting Chords Theorem author = Lukas Bulwahn notify = lukas.bulwahn@gmail.com date = 2016-10-11 topic = Mathematics/Geometry abstract = This entry provides a geometric proof of the intersecting chords theorem. The theorem states that when two chords intersect each other inside a circle, the products of their segments are equal. 
After a short review of existing proofs in the literature, I decided to use a proof approach that employs reasoning about lengths of line segments, the orthogonality of two lines and the Pythagoras Law. Hence, one can understand the formalized proof easily with the knowledge of a few general geometric facts that are commonly taught in high-school. This theorem is the 55th theorem of the Top 100 Theorems list. [Category3] title = Category Theory with Adjunctions and Limits author = Eugene W. Stark notify = stark@cs.stonybrook.edu date = 2016-06-26 topic = Mathematics/Category theory abstract = This article attempts to develop a usable framework for doing category theory in Isabelle/HOL. Our point of view, which to some extent differs from that of the previous AFP articles on the subject, is to try to explore how category theory can be done efficaciously within HOL, rather than trying to match exactly the way things are done using a traditional approach. To this end, we define the notion of category in an "object-free" style, in which a category is represented by a single partial composition operation on arrows. This way of defining categories provides some advantages in the context of HOL, including the ability to avoid the use of records and the possibility of defining functors and natural transformations simply as certain functions on arrows, rather than as composite objects. We define various constructions associated with the basic notions, including: dual category, product category, functor category, discrete category, free category, functor composition, and horizontal and vertical composite of natural transformations. A "set category" locale is defined that axiomatizes the notion "category of all sets at a type and all functions between them," and a fairly extensive set of properties of set categories is derived from the locale assumptions. The notion of a set category is used to prove the Yoneda Lemma in a general setting of a category equipped with a "hom embedding," which maps arrows of the category to the "universe" of the set category. We also give a treatment of adjunctions, defining adjunctions via left and right adjoint functors, natural bijections between hom-sets, and unit and counit natural transformations, and showing the equivalence of these definitions. We also develop the theory of limits, including representations of functors, diagrams and cones, and diagonal functors. We show that right adjoint functors preserve limits, and that limits can be constructed via products and equalizers. We characterize the conditions under which limits exist in a set category. We also examine the case of limits in a functor category, ultimately culminating in a proof that the Yoneda embedding preserves limits. extra-history = Change history: [2018-05-29]: Revised axioms for the category locale. Introduced notation for composition and "in hom". (revision 8318366d4575)
[2020-02-15]: Move ConcreteCategory.thy from Bicategory to Category3 and use it systematically. Make other minor improvements throughout. (revision a51840d36867)
[MonoidalCategory] title = Monoidal Categories author = Eugene W. Stark topic = Mathematics/Category theory date = 2017-05-04 notify = stark@cs.stonybrook.edu abstract = Building on the formalization of basic category theory set out in the author's previous AFP article, the present article formalizes some basic aspects of the theory of monoidal categories. Among the notions defined here are monoidal category, monoidal functor, and equivalence of monoidal categories. The main theorems formalized are MacLane's coherence theorem and the constructions of the free monoidal category and free strict monoidal category generated by a given category. The coherence theorem is proved syntactically, using a structurally recursive approach to reduction of terms that might have some novel aspects. We also give proofs of some results given by Etingof et al, which may prove useful in a formal setting. In particular, we show that the left and right unitors need not be taken as given data in the definition of monoidal category, nor does the definition of monoidal functor need to take as given a specific isomorphism expressing the preservation of the unit object. Our definitions of monoidal category and monoidal functor are stated so as to take advantage of the economy afforded by these facts. extra-history = Change history: [2017-05-18]: Integrated material from MonoidalCategory/Category3Adapter into Category3/ and deleted adapter. (revision 015543cdd069)
[2018-05-29]: Modifications required due to 'Category3' changes. Introduced notation for "in hom". (revision 8318366d4575)
[2020-02-15]: Cosmetic improvements. (revision a51840d36867)
[Card_Multisets] title = Cardinality of Multisets author = Lukas Bulwahn notify = lukas.bulwahn@gmail.com date = 2016-06-26 topic = Mathematics/Combinatorics abstract =

This entry provides three lemmas to count the number of multisets of a given size and finite carrier set. The first lemma provides a cardinality formula assuming that the multiset's elements are chosen from the given carrier set. The latter two lemmas provide formulas assuming that the multiset's elements also cover the given carrier set, i.e., each element of the carrier set occurs in the multiset at least once.

The proof of the first lemma uses the argument of the recurrence relation for counting multisets. The proof of the second lemma is straightforward, and the proof of the third lemma is easily obtained using the first cardinality lemma. A challenge for the formalization is the derivation of the required induction rule, which is a special combination of the induction rules for finite sets and natural numbers. The induction rule is derived by defining a suitable inductive predicate and transforming the predicate's induction rule.
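For orientation, the familiar stars-and-bars counts matching the informal description above (the precise Isabelle statements may be phrased differently): the number of multisets of size k whose elements are drawn from a carrier of size n is

\[ \binom{n+k-1}{k}, \]

and the number of such multisets that additionally cover the whole carrier, i.e. contain every carrier element at least once, is \(\binom{k-1}{n-1}\).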

[Posix-Lexing] title = POSIX Lexing with Derivatives of Regular Expressions author = Fahad Ausaf , Roy Dyckhoff , Christian Urban notify = christian.urban@kcl.ac.uk date = 2016-05-24 topic = Computer science/Automata and formal languages abstract = Brzozowski introduced the notion of derivatives for regular expressions. They can be used for a very simple regular expression matching algorithm. Sulzmann and Lu cleverly extended this algorithm in order to deal with POSIX matching, which is the underlying disambiguation strategy for regular expressions needed in lexers. In this entry we give our inductive definition of what a POSIX value is and show (i) that such a value is unique (for given regular expression and string being matched) and (ii) that Sulzmann and Lu's algorithm always generates such a value (provided that the regular expression matches the string). We also prove the correctness of an optimised version of the POSIX matching algorithm. [LocalLexing] title = Local Lexing author = Steven Obua topic = Computer science/Automata and formal languages date = 2017-04-28 notify = steven@recursivemind.com abstract = This formalisation accompanies the paper Local Lexing which introduces a novel parsing concept of the same name. The paper also gives a high-level algorithm for local lexing as an extension of Earley's algorithm. This formalisation proves the algorithm to be correct with respect to its local lexing semantics. As a special case, this formalisation thus also contains a proof of the correctness of Earley's algorithm. The paper contains a short outline of how this formalisation is organised. [MFMC_Countable] title = A Formal Proof of the Max-Flow Min-Cut Theorem for Countable Networks author = Andreas Lochbihler date = 2016-05-09 topic = Mathematics/Graph theory abstract = This article formalises a proof of the maximum-flow minimal-cut theorem for networks with countably many edges. A network is a directed graph with non-negative real-valued edge labels and two dedicated vertices, the source and the sink. A flow in a network assigns non-negative real numbers to the edges such that for all vertices except for the source and the sink, the sum of values on incoming edges equals the sum of values on outgoing edges. A cut is a subset of the vertices which contains the source, but not the sink. Our theorem states that in every network, there is a flow and a cut such that the flow saturates all the edges going out of the cut and is zero on all the incoming edges. The proof is based on the paper The Max-Flow Min-Cut theorem for countable networks by Aharoni et al. Additionally, we prove a characterisation of the lifting operation for relations on discrete probability distributions, which leads to a concise proof of its distributivity over relation composition. notify = mail@andreas-lochbihler.de extra-history = Change history: [2017-09-06]: derive characterisation for the lifting operations on discrete distributions from finite version of the max-flow min-cut theorem (revision a7a198f5bab0)
[Liouville_Numbers] title = Liouville numbers author = Manuel Eberl date = 2015-12-28 topic = Mathematics/Analysis, Mathematics/Number theory abstract =

Liouville numbers are a class of transcendental numbers that can be approximated particularly well with rational numbers. Historically, they were the first numbers whose transcendence was proven.

In this entry, we define the concept of Liouville numbers as well as the standard construction to obtain Liouville numbers (including Liouville's constant) and we prove their most important properties: irrationality and transcendence.

The proof is very elementary and requires only standard arithmetic, the Mean Value Theorem for polynomials, and the boundedness of polynomials on compact intervals.
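For reference, the definition and the classic example (standard material summarising the construction mentioned above): a real number x is a Liouville number if for every n ∈ ℕ there are integers p, q with q > 1 and

\[ 0 < \left| x - \frac{p}{q} \right| < \frac{1}{q^n}, \]

and Liouville's constant ∑_{k≥1} 10^(-k!) = 0.110001000… is the best-known example.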

notify = eberlm@in.tum.de [Triangle] title = Basic Geometric Properties of Triangles author = Manuel Eberl date = 2015-12-28 topic = Mathematics/Geometry abstract =

This entry contains a definition of angles between vectors and between three points. Building on this, we prove basic geometric properties of triangles, such as the Isosceles Triangle Theorem, the Law of Sines and the Law of Cosines, that the sum of the angles of a triangle is π, and the congruence theorems for triangles.

The definitions and proofs were developed following those by John Harrison in HOL Light. However, due to Isabelle's type class system, all definitions and theorems in the Isabelle formalisation hold for all real inner product spaces.

notify = eberlm@in.tum.de [Prime_Harmonic_Series] title = The Divergence of the Prime Harmonic Series author = Manuel Eberl date = 2015-12-28 topic = Mathematics/Number theory abstract =

In this work, we prove the lower bound ln(Hₙ) - ln(5/3) for the partial sum of the Prime Harmonic Series and, based on this, the divergence of the Prime Harmonic Series ∑[p prime] 1/p.

The proof relies on the unique squarefree decomposition of natural numbers. This is similar to Euler's original proof (which was highly informal and morally questionable). Its advantage over proofs by contradiction, like the famous one by Paul Erdős, is that it provides a relatively good lower bound for the partial sums.
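Spelling out why the stated bound yields divergence (a one-line consequence, using the standard estimate Hₙ ≥ ln(n+1)):

\[ \sum_{p \le n,\ p \text{ prime}} \frac{1}{p} \;\ge\; \ln(H_n) - \ln\tfrac{5}{3} \;\ge\; \ln\ln(n+1) - \ln\tfrac{5}{3} \;\longrightarrow\; \infty \quad (n \to \infty). \]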

notify = eberlm@in.tum.de [Descartes_Sign_Rule] title = Descartes' Rule of Signs author = Manuel Eberl date = 2015-12-28 topic = Mathematics/Analysis abstract =

Descartes' Rule of Signs relates the number of positive real roots of a polynomial with the number of sign changes in its coefficient sequence.

Our proof follows the simple inductive proof given by Rob Arthan, which was also used by John Harrison in his HOL Light formalisation. We proved most of the lemmas for arbitrary linearly-ordered integrity domains (e.g. integers, rationals, reals); the main result, however, requires the intermediate value theorem and was therefore only proven for real polynomials.
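A small worked example of the rule (not taken from the entry): the polynomial x³ - 4x² + x + 6 = (x+1)(x-2)(x-3) has coefficient signs +, -, +, +, i.e. two sign changes, and indeed exactly two positive roots (2 and 3); in general, the number of positive roots equals the number of sign changes minus a non-negative even number.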

notify = eberlm@in.tum.de [Euler_MacLaurin] title = The Euler–MacLaurin Formula author = Manuel Eberl topic = Mathematics/Analysis date = 2017-03-10 notify = eberlm@in.tum.de abstract =

The Euler-MacLaurin formula relates the value of a discrete sum to that of the corresponding integral in terms of the derivatives at the borders of the summation and a remainder term. Since the remainder term is often very small as the summation bounds grow, this can be used to compute asymptotic expansions for sums.

This entry contains a proof of this formula for functions from the reals to an arbitrary Banach space. Two variants of the formula are given: the standard textbook version and a variant outlined in Concrete Mathematics that is more useful for deriving asymptotic estimates.

As example applications, we use that formula to derive the full asymptotic expansion of the harmonic numbers and the sum of inverse squares.
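For orientation, one common textbook form of the formula (conventions for the endpoints and the remainder vary between sources, and the entry's Banach-space version is stated more generally):

\[ \sum_{k=a}^{b} f(k) = \int_a^b f(x)\,dx + \frac{f(a)+f(b)}{2} + \sum_{i=1}^{m} \frac{B_{2i}}{(2i)!}\left(f^{(2i-1)}(b) - f^{(2i-1)}(a)\right) + R_m, \]

where the B₂ᵢ are Bernoulli numbers and Rₘ is the remainder term, which is small for well-behaved f.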

[Card_Partitions] title = Cardinality of Set Partitions author = Lukas Bulwahn date = 2015-12-12 topic = Mathematics/Combinatorics abstract = The theory's main theorem states that the cardinality of set partitions of size k on a carrier set of size n is expressed by Stirling numbers of the second kind. In Isabelle, Stirling numbers of the second kind are defined in the AFP entry `Discrete Summation` through their well-known recurrence relation. The main theorem relates them to the alternative definition as cardinality of set partitions. The proof follows the simple and short explanation in Richard P. Stanley's `Enumerative Combinatorics: Volume 1` and Wikipedia, and unravels the full details and implicit reasoning steps of these explanations. notify = lukas.bulwahn@gmail.com [Card_Number_Partitions] title = Cardinality of Number Partitions author = Lukas Bulwahn date = 2016-01-14 topic = Mathematics/Combinatorics abstract = This entry provides a basic library for number partitions, defines the two-argument partition function through its recurrence relation and relates this partition function to the cardinality of number partitions. The main proof shows that the recursively-defined partition function with arguments n and k equals the cardinality of number partitions of n with exactly k parts. The combinatorial proof follows the proof sketch of Theorem 2.4.1 in Mazur's textbook `Combinatorics: A Guided Tour`. This entry can serve as starting point for various more intrinsic properties about number partitions, the partition function and related recurrence relations. notify = lukas.bulwahn@gmail.com [Multirelations] title = Binary Multirelations author = Hitoshi Furusawa , Georg Struth date = 2015-06-11 topic = Mathematics/Algebra abstract = Binary multirelations associate elements of a set with its subsets; hence they are binary relations from a set to its power set. Applications include alternating automata, models and logics for games, program semantics with dual demonic and angelic nondeterministic choices and concurrent dynamic logics. This proof document supports an arXiv article that formalises the basic algebra of multirelations and proposes axiom systems for them, ranging from weak bi-monoids to weak bi-quantales. notify = [Noninterference_Generic_Unwinding] title = The Generic Unwinding Theorem for CSP Noninterference Security author = Pasquale Noce date = 2015-06-11 topic = Computer science/Security, Computer science/Concurrency/Process calculi abstract =

The classical definition of noninterference security for a deterministic state machine with outputs requires one to consider the outputs produced by machine actions after any trace, i.e. any indefinitely long sequence of actions, of the machine. In order to render the verification of the security of such a machine more straightforward, there is a need for a sufficient condition for security such that just individual actions, rather than unbounded sequences of actions, have to be considered.

By extending previous results applying to transitive noninterference policies, Rushby has proven an unwinding theorem that provides a sufficient condition of this kind in the general case of a possibly intransitive policy. This condition has to be satisfied by a generic function mapping security domains into equivalence relations over machine states.

An analogous problem arises for CSP noninterference security, whose definition requires one to consider any possible future, i.e. any indefinitely long sequence of subsequent events and any indefinitely large set of refused events associated with that sequence, for each process trace.

This paper provides a sufficient condition for CSP noninterference security, which indeed requires one to consider just individual accepted and refused events and applies to the general case of a possibly intransitive policy. This condition follows Rushby's condition for classical noninterference security, and has to be satisfied by a generic function mapping security domains into equivalence relations over process traces; hence its name, Generic Unwinding Theorem. Variants of this theorem applying to deterministic processes and trace set processes are also proven. Finally, the sufficient condition for security expressed by the theorem is shown not to be a necessary condition as well, viz. there exists a secure process such that no domain-relation map satisfying the condition exists.

notify = [Noninterference_Ipurge_Unwinding] title = The Ipurge Unwinding Theorem for CSP Noninterference Security author = Pasquale Noce date = 2015-06-11 topic = Computer science/Security abstract =

The definition of noninterference security for Communicating Sequential Processes requires one to consider any possible future, i.e. any indefinitely long sequence of subsequent events and any indefinitely large set of refused events associated with that sequence, for each process trace. In order to render the verification of the security of a process more straightforward, there is a need for a sufficient condition for security such that just individual accepted and refused events, rather than unbounded sequences and sets of events, have to be considered.

Of course, if such a sufficient condition were necessary as well, it would be even more valuable, since it would make it possible to prove not only that a process is secure by verifying that the condition holds, but also that a process is not secure by verifying that the condition fails to hold.

This paper provides a necessary and sufficient condition for CSP noninterference security, which indeed requires one to consider just individual accepted and refused events and applies to the general case of a possibly intransitive policy. This condition follows Rushby's output consistency for deterministic state machines with outputs, and has to be satisfied by a specific function mapping security domains into equivalence relations over process traces. The definition of this function makes use of an intransitive purge function following Rushby's; hence the name given to the condition, Ipurge Unwinding Theorem.

Furthermore, in accordance with Hoare's formal definition of deterministic processes, it is shown that a process is deterministic just in case it is a trace set process, i.e. it may be identified by means of a trace set alone, matching the set of its traces, in place of a failures-divergences pair. Then, variants of the Ipurge Unwinding Theorem are proven for deterministic processes and trace set processes.

notify = [List_Interleaving] title = Reasoning about Lists via List Interleaving author = Pasquale Noce date = 2015-06-11 topic = Computer science/Data structures abstract =

Among the various mathematical tools introduced in his outstanding work on Communicating Sequential Processes, Hoare has defined "interleaves" as the predicate satisfied by any three lists such that the first list may be split into sublists alternately extracted from the other two, whatever criterion is used for extracting an item from either one list or the other in each step.

This paper enriches Hoare's definition by identifying such a criterion with the truth value of a predicate taking as inputs the head and the tail of the first list. This enhanced "interleaves" predicate turns out to permit the proof of equalities between lists without the need for an induction. Some rules that allow one to infer "interleaves" statements without induction, particularly applying to the addition or removal of a prefix to the input lists, are also proven. Finally, a stronger version of the predicate, named "Interleaves", is shown to fulfil further rules applying to the addition or removal of a suffix to the input lists.

notify = [Residuated_Lattices] title = Residuated Lattices author = Victor B. F. Gomes , Georg Struth date = 2015-04-15 topic = Mathematics/Algebra abstract = The theory of residuated lattices, first proposed by Ward and Dilworth, is formalised in Isabelle/HOL. This includes concepts of residuated functions; their adjoints and conjugates. It also contains necessary and sufficient conditions for the existence of these operations in an arbitrary lattice. The mathematical components for residuated lattices are linked to the AFP entry for relation algebra. In particular, we prove Jonsson and Tsinakis conditions for a residuated boolean algebra to form a relation algebra. notify = g.struth@sheffield.ac.uk [ConcurrentGC] title = Relaxing Safely: Verified On-the-Fly Garbage Collection for x86-TSO author = Peter Gammie , Tony Hosking , Kai Engelhardt <> date = 2015-04-13 topic = Computer science/Algorithms/Concurrent abstract =

We use ConcurrentIMP to model Schism, a state-of-the-art real-time garbage collection scheme for weak memory, and show that it is safe on x86-TSO.

This development accompanies the PLDI 2015 paper of the same name.

notify = peteg42@gmail.com [List_Update] title = Analysis of List Update Algorithms author = Maximilian P.L. Haslbeck , Tobias Nipkow date = 2016-02-17 topic = Computer science/Algorithms/Online abstract =

These theories formalize the quantitative analysis of a number of classical algorithms for the list update problem: 2-competitiveness of move-to-front, the lower bound of 2 for the competitiveness of deterministic list update algorithms and 1.6-competitiveness of the randomized COMB algorithm, the best randomized list update algorithm known to date. The material is based on the first two chapters of Online Computation and Competitive Analysis by Borodin and El-Yaniv.

For an informal description see the FSTTCS 2016 publication Verified Analysis of List Update Algorithms by Haslbeck and Nipkow.
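As a small, purely illustrative sketch of the move-to-front rule analysed here (the cost model below, the position of the requested item counted from 1 with a free move to the front, is the standard one, but the names are made up and do not come from the entry):

  import Data.List (delete)

  -- Cost of accessing x in list xs: its position, counting from 1.
  accessCost :: Eq a => a -> [a] -> Int
  accessCost x xs = 1 + length (takeWhile (/= x) xs)

  -- Serve one request: pay the access cost, then move the item to the front.
  mtfStep :: Eq a => [a] -> a -> (Int, [a])
  mtfStep xs x = (accessCost x xs, x : delete x xs)

  -- Total cost of move-to-front on a request sequence.
  mtfCost :: Eq a => [a] -> [a] -> Int
  mtfCost initial = fst . foldl step (0, initial)
    where step (c, xs) x = let (c', xs') = mtfStep xs x in (c + c', xs')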

notify = nipkow@in.tum.de [ConcurrentIMP] title = Concurrent IMP author = Peter Gammie date = 2015-04-13 topic = Computer science/Programming languages/Logics abstract = ConcurrentIMP extends the small imperative language IMP with control non-determinism and constructs for synchronous message passing. notify = peteg42@gmail.com [TortoiseHare] title = The Tortoise and Hare Algorithm author = Peter Gammie date = 2015-11-18 topic = Computer science/Algorithms abstract = We formalize the Tortoise and Hare cycle-finding algorithm ascribed to Floyd by Knuth, and an improved version due to Brent. notify = peteg42@gmail.com [UPF] title = The Unified Policy Framework (UPF) author = Achim D. Brucker , Lukas Brügger , Burkhart Wolff date = 2014-11-28 topic = Computer science/Security abstract = We present the Unified Policy Framework (UPF), a generic framework for modelling security (access-control) policies. UPF emphasizes the view that a policy is a policy decision function that grants or denies access to resources, permissions, etc. In other words, instead of modelling the relations of permitted or prohibited requests directly, we model the concrete function that implements the policy decision point in a system. In more detail, UPF is based on the following four principles: 1) Functional representation of policies, 2) No conflicts are possible, 3) Three-valued decision type (allow, deny, undefined), 4) An output type that contains more than just the decision. notify = adbrucker@0x5f.org, wolff@lri.fr, lukas.a.bruegger@gmail.com [UPF_Firewall] title = Formal Network Models and Their Application to Firewall Policies author = Achim D. Brucker , Lukas Brügger<>, Burkhart Wolff topic = Computer science/Security, Computer science/Networks date = 2017-01-08 notify = adbrucker@0x5f.org abstract = We present a formal model of network protocols and their application to modeling firewall policies. The formalization is based on the Unified Policy Framework (UPF). The formalization was originally developed for generating test cases for testing the security configuration of actual firewalls and routers (middle-boxes) using HOL-TestGen. Our work focuses on modeling application-level protocols on top of TCP/IP. [AODV] title = Loop freedom of the (untimed) AODV routing protocol author = Timothy Bourke , Peter Höfner date = 2014-10-23 topic = Computer science/Concurrency/Process calculi abstract =

The Ad hoc On-demand Distance Vector (AODV) routing protocol allows the nodes in a Mobile Ad hoc Network (MANET) or a Wireless Mesh Network (WMN) to know where to forward data packets. Such a protocol is ‘loop free’ if it never leads to routing decisions that forward packets in circles.

This development mechanises an existing pen-and-paper proof of loop freedom of AODV. The protocol is modelled in the Algebra of Wireless Networks (AWN), which is the subject of an earlier paper and AFP mechanization. The proof relies on a novel compositional approach for lifting invariants to networks of nodes.

We exploit the mechanization to analyse several variants of AODV and show that Isabelle/HOL can re-establish most proof obligations automatically and identify exactly the steps that are no longer valid.

notify = tim@tbrk.org [Show] title = Haskell's Show Class in Isabelle/HOL author = Christian Sternagel , René Thiemann date = 2014-07-29 topic = Computer science/Functional programming license = LGPL abstract = We implemented a type class for "to-string" functions, similar to Haskell's Show class. Moreover, we provide instantiations for Isabelle/HOL's standard types like bool, prod, sum, nats, ints, and rats. It is further possible to automatically derive show functions for arbitrary user-defined datatypes, similar to Haskell's "deriving Show". extra-history = Change history: [2015-03-11]: Adapted development to new-style (BNF-based) datatypes.
[2015-04-10]: Moved development for old-style datatypes into subdirectory "Old_Datatype".
notify = christian.sternagel@uibk.ac.at, rene.thiemann@uibk.ac.at [Certification_Monads] title = Certification Monads author = Christian Sternagel , René Thiemann date = 2014-10-03 topic = Computer science/Functional programming abstract = This entry provides several monads intended for the development of stand-alone certifiers via code generation from Isabelle/HOL. More specifically, there are three flavors of error monads (the sum type, for the case where all monadic functions are total; an instance of the former, the so-called check monad, yielding either success without any further information or an error message; as well as a variant of the sum type that accommodates partial functions by providing an explicit bottom element) and a parser monad built on top. All of these monads are heavily used in the IsaFoR/CeTA project, which thus provides many examples of their usage. notify = c.sternagel@gmail.com, rene.thiemann@uibk.ac.at [CISC-Kernel] title = Formal Specification of a Generic Separation Kernel author = Freek Verbeek , Sergey Tverdyshev , Oto Havle , Holger Blasum , Bruno Langenstein , Werner Stephan , Yakoub Nemouchi , Abderrahmane Feliachi , Burkhart Wolff , Julien Schmaltz date = 2014-07-18 topic = Computer science/Security abstract =

Intransitive noninterference has been a widely studied topic in the last few decades. Several well-established methodologies apply interactive theorem proving to formulate a noninterference theorem over abstract academic models. In joint work with several industrial and academic partners throughout Europe, we are helping in the certification process of PikeOS, an industrial separation kernel developed at SYSGO. In this process, established theories could not be applied. We present a new generic model of separation kernels and a new theory of intransitive noninterference. The model is rich in detail, making it suitable for formal verification of realistic and industrial systems such as PikeOS. Using a refinement-based theorem proving approach, we ensure that proofs remain manageable.

This document corresponds to the deliverable D31.1 of the EURO-MILS Project http://www.euromils.eu.

notify = [pGCL] title = pGCL for Isabelle author = David Cock date = 2014-07-13 topic = Computer science/Programming languages/Language definitions abstract =

pGCL is both a programming language and a specification language that incorporates both probabilistic and nondeterministic choice, in a unified manner. Program verification is by refinement or annotation (or both), using either Hoare triples, or weakest-precondition entailment, in the style of GCL.

This package provides both a shallow embedding of the language primitives, and an annotation and refinement framework. The generated document includes a brief tutorial.

notify = [Noninterference_CSP] title = Noninterference Security in Communicating Sequential Processes author = Pasquale Noce date = 2014-05-23 topic = Computer science/Security abstract =

An extension of classical noninterference security for deterministic state machines, as introduced by Goguen and Meseguer and elegantly formalized by Rushby, to nondeterministic systems should satisfy two fundamental requirements: it should be based on a mathematically precise theory of nondeterminism, and should be equivalent to (or at least not weaker than) the classical notion in the degenerate deterministic case.

This paper proposes a definition of noninterference security applying to Hoare's Communicating Sequential Processes (CSP) in the general case of a possibly intransitive noninterference policy, and proves the equivalence of this security property to classical noninterference security for processes representing deterministic state machines.

Furthermore, McCullough's generalized noninterference security is shown to be weaker than both the proposed notion of CSP noninterference security for a generic process, and classical noninterference security for processes representing deterministic state machines. This renders CSP noninterference security preferable as an extension of classical noninterference security to nondeterministic systems.

notify = pasquale.noce.lavoro@gmail.com [Floyd_Warshall] title = The Floyd-Warshall Algorithm for Shortest Paths author = Simon Wimmer , Peter Lammich topic = Computer science/Algorithms/Graph date = 2017-05-08 notify = wimmers@in.tum.de abstract = The Floyd-Warshall algorithm [Flo62, Roy59, War62] is a classic dynamic programming algorithm to compute the length of all shortest paths between any two vertices in a graph (i.e. to solve the all-pairs shortest path problem, or APSP for short). Given a representation of the graph as a matrix of weights M, it computes another matrix M' which represents a graph with the same path lengths and contains the length of the shortest path between any two vertices i and j. This is only possible if the graph does not contain any negative cycles. However, in this case the Floyd-Warshall algorithm will detect the situation by calculating a negative diagonal entry. This entry includes a formalization of the algorithm and of these key properties. The algorithm is refined to an efficient imperative version using the Imperative Refinement Framework. [Roy_Floyd_Warshall] title = Transitive closure according to Roy-Floyd-Warshall author = Makarius Wenzel <> date = 2014-05-23 topic = Computer science/Algorithms/Graph abstract = This formulation of the Roy-Floyd-Warshall algorithm for the transitive closure bypasses matrices and arrays, but uses a more direct mathematical model with adjacency functions for immediate predecessors and successors. This can be implemented efficiently in functional programming languages and is particularly adequate for sparse relations. notify = [GPU_Kernel_PL] title = Syntax and semantics of a GPU kernel programming language author = John Wickerson date = 2014-04-03 topic = Computer science/Programming languages/Language definitions abstract = This document accompanies the article "The Design and Implementation of a Verification Technique for GPU Kernels" by Adam Betts, Nathan Chong, Alastair F. Donaldson, Jeroen Ketema, Shaz Qadeer, Paul Thomson and John Wickerson. It formalises all of the definitions provided in Sections 3 and 4 of the article. notify = [AWN] title = Mechanization of the Algebra for Wireless Networks (AWN) author = Timothy Bourke date = 2014-03-08 topic = Computer science/Concurrency/Process calculi abstract =

AWN is a process algebra developed for modelling and analysing protocols for Mobile Ad hoc Networks (MANETs) and Wireless Mesh Networks (WMNs). AWN models comprise five distinct layers: sequential processes, local parallel compositions, nodes, partial networks, and complete networks.

This development mechanises the original operational semantics of AWN and introduces a variant 'open' operational semantics that enables the compositional statement and proof of invariants across distinct network nodes. It supports labels (for weakening invariants) and (abstract) data state manipulations. A framework for compositional invariant proofs is developed, including a tactic (inv_cterms) for inductive invariant proofs of sequential processes, lifting rules for the open versions of the higher layers, and a rule for transferring lifted properties back to the standard semantics. A notion of 'control terms' reduces proof obligations to the subset of subterms that act directly (in contrast to operators for combining terms and joining processes).

notify = tim@tbrk.org [Selection_Heap_Sort] title = Verification of Selection and Heap Sort Using Locales author = Danijela Petrovic date = 2014-02-11 topic = Computer science/Algorithms abstract = Stepwise program refinement techniques can be used to simplify program verification. Programs are better understood since their main properties are clearly stated, and verification of rather complex algorithms is reduced to proving simple statements connecting successive program specifications. Additionally, it is easy to analyze similar algorithms and to compare their properties within a single formalization. Usually, formal analysis is not done in educational setting due to complexity of verification and a lack of tools and procedures to make comparison easy. Verification of an algorithm should not only give correctness proof, but also better understanding of an algorithm. If the verification is based on small step program refinement, it can become simple enough to be demonstrated within the university-level computer science curriculum. In this paper we demonstrate this and give a formal analysis of two well known algorithms (Selection Sort and Heap Sort) using proof assistant Isabelle/HOL and program refinement techniques. notify = [Real_Impl] title = Implementing field extensions of the form Q[sqrt(b)] author = René Thiemann date = 2014-02-06 license = LGPL topic = Mathematics/Analysis abstract = We apply data refinement to implement the real numbers, where we support all numbers in the field extension Q[sqrt(b)], i.e., all numbers of the form p + q * sqrt(b) for rational numbers p and q and some fixed natural number b. To this end, we also developed algorithms to precisely compute roots of a rational number, and to perform a factorization of natural numbers which eliminates duplicate prime factors.
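The representation can be pictured by a small Haskell sketch (illustrative only; the entry realizes this as a data refinement of the Isabelle/HOL real numbers, and the names below are made up):

  -- A number p + q * sqrt b, with p, q rational and b a fixed natural number.
  data SqrtNum = SqrtNum { base :: Integer, ratPart :: Rational, sqrtPart :: Rational }
    deriving (Eq, Show)

  -- Both arguments are assumed to share the same base b.
  add :: SqrtNum -> SqrtNum -> SqrtNum
  add (SqrtNum b p1 q1) (SqrtNum _ p2 q2) = SqrtNum b (p1 + p2) (q1 + q2)

  mul :: SqrtNum -> SqrtNum -> SqrtNum
  mul (SqrtNum b p1 q1) (SqrtNum _ p2 q2) =
    SqrtNum b (p1 * p2 + q1 * q2 * fromInteger b) (p1 * q2 + q1 * p2)

  -- Multiplying by the conjugate p - q * sqrt b gives p^2 - q^2 * b, so the
  -- inverse stays in the same representation (assuming the number is nonzero).
  inverse :: SqrtNum -> SqrtNum
  inverse (SqrtNum b p q) = SqrtNum b (p / d) (negate q / d)
    where d = p * p - q * q * fromInteger b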

Our results have been used to certify termination proofs which involve polynomial interpretations over the reals. extra-history = Change history: [2014-07-11]: Moved NthRoot_Impl to Sqrt-Babylonian. notify = rene.thiemann@uibk.ac.at [ShortestPath] title = An Axiomatic Characterization of the Single-Source Shortest Path Problem author = Christine Rizkallah date = 2013-05-22 topic = Mathematics/Graph theory abstract = This theory is split into two sections. In the first section, we give a formal proof that a well-known axiomatic characterization of the single-source shortest path problem is correct. Namely, we prove that in a directed graph with a non-negative cost function on the edges the single-source shortest path function is the only function that satisfies a set of four axioms. In the second section, we give a formal proof of the correctness of an axiomatic characterization of the single-source shortest path problem for directed graphs with general cost functions. The axioms here are more involved because we have to account for potential negative cycles in the graph. The axioms are summarized in three Isabelle locales. notify = [Launchbury] title = The Correctness of Launchbury's Natural Semantics for Lazy Evaluation author = Joachim Breitner date = 2013-01-31 topic = Computer science/Programming languages/Lambda calculi, Computer science/Semantics abstract = In his seminal paper "Natural Semantics for Lazy Evaluation", John Launchbury proves his semantics correct with respect to a denotational semantics, and outlines an adequacy proof. We have formalized both semantics and machine-checked the correctness proof, clarifying some details. Furthermore, we provide a new and more direct adequacy proof that does not require intermediate operational semantics. extra-history = Change history: [2014-05-24]: Added the proof of adequacy, as well as simplified and improved the existing proofs. Adjusted abstract accordingly. [2015-03-16]: Booleans and if-then-else added to syntax and semantics, making this entry suitable to be used by the entry "Call_Arity". notify = [Call_Arity] title = The Safety of Call Arity author = Joachim Breitner date = 2015-02-20 topic = Computer science/Programming languages/Transformations abstract = We formalize the Call Arity analysis, as implemented in GHC, and prove both functional correctness and, more interestingly, safety (i.e. the transformation does not increase allocation).

We use syntax and the denotational semantics from the entry "Launchbury", where we formalized Launchbury's natural semantics for lazy evaluation.

The functional correctness of Call Arity is proved with regard to that denotational semantics. The operational properties are shown with regard to a small-step semantics akin to Sestoft's mark 1 machine, which we prove to be equivalent to Launchbury's semantics.

We use Christian Urban's Nominal2 package to define our terms and make use of Brian Huffman's HOLCF package for the domain-theoretical aspects of the development. extra-history = Change history: [2015-03-16]: This entry now builds on top of the Launchbury entry, and the equivalency proof of the natural and the small-step semantics was added. notify = [CCS] title = CCS in nominal logic author = Jesper Bengtson date = 2012-05-29 topic = Computer science/Concurrency/Process calculi abstract = We formalise a large portion of CCS as described in Milner's book 'Communication and Concurrency' using the nominal datatype package in Isabelle. Our results include many of the standard theorems of bisimulation equivalence and congruence, for both weak and strong versions. One main goal of this formalisation is to keep the machine-checked proofs as close to their pen-and-paper counterpart as possible.

This entry is described in detail in Bengtson's thesis. notify = [Pi_Calculus] title = The pi-calculus in nominal logic author = Jesper Bengtson date = 2012-05-29 topic = Computer science/Concurrency/Process calculi abstract = We formalise the pi-calculus using the nominal datatype package, based on ideas from the nominal logic by Pitts et al., and demonstrate an implementation in Isabelle/HOL. The purpose is to derive powerful induction rules for the semantics in order to conduct machine checkable proofs, closely following the intuitive arguments found in manual proofs. In this way we have covered many of the standard theorems of bisimulation equivalence and congruence, both late and early, and both strong and weak in a uniform manner. We thus provide one of the most extensive formalisations of the pi-calculus ever done inside a theorem prover.

A significant gain in our formulation is that agents are identified up to alpha-equivalence, thereby greatly reducing the arguments about bound names. This is a normal strategy for manual proofs about the pi-calculus, but that kind of hand waving has previously been difficult to incorporate smoothly in an interactive theorem prover. We show how the nominal logic formalism and its support in Isabelle accomplishes this and thus significantly reduces the tedium of conducting completely formal proofs. This improves on previous work using weak higher order abstract syntax since we do not need extra assumptions to filter out exotic terms and can keep all arguments within a familiar first-order logic.

This entry is described in detail in Bengtson's thesis. notify = [Psi_Calculi] title = Psi-calculi in Isabelle author = Jesper Bengtson date = 2012-05-29 topic = Computer science/Concurrency/Process calculi abstract = Psi-calculi are extensions of the pi-calculus, accommodating arbitrary nominal datatypes to represent not only data but also communication channels, assertions and conditions, giving it an expressive power beyond the applied pi-calculus and the concurrent constraint pi-calculus.

We have formalised psi-calculi in the interactive theorem prover Isabelle using its nominal datatype package. One distinctive feature is that the framework needs to treat binding sequences, as opposed to single binders, in an efficient way. While different methods for formalising single binder calculi have been proposed over the last decades, representations for such binding sequences are not very well explored.

The main effort in the formalisation is to keep the machine checked proofs as close to their pen-and-paper counterparts as possible. This includes treating all binding sequences as atomic elements, and creating custom induction and inversion rules that remove the bulk of manual alpha-conversions.

This entry is described in detail in Bengtson's thesis. notify = [Encodability_Process_Calculi] title = Analysing and Comparing Encodability Criteria for Process Calculi author = Kirstin Peters , Rob van Glabbeek date = 2015-08-10 topic = Computer science/Concurrency/Process calculi abstract = Encodings or the proof of their absence are the main way to compare process calculi. To analyse the quality of encodings and to rule out trivial or meaningless encodings, they are augmented with quality criteria. There exists a bunch of different criteria and different variants of criteria in order to reason in different settings. This leads to incomparable results. Moreover it is not always clear whether the criteria used to obtain a result in a particular setting do indeed fit to this setting. We show how to formally reason about and compare encodability criteria by mapping them on requirements on a relation between source and target terms that is induced by the encoding function. In particular we analyse the common criteria full abstraction, operational correspondence, divergence reflection, success sensitiveness, and respect of barbs; e.g. we analyse the exact nature of the simulation relation (coupled simulation versus bisimulation) that is induced by different variants of operational correspondence. This way we reduce the problem of analysing or comparing encodability criteria to the better understood problem of comparing relations on processes. notify = kirstin.peters@tu-berlin.de [Circus] title = Isabelle/Circus author = Abderrahmane Feliachi , Burkhart Wolff , Marie-Claude Gaudel contributors = Makarius Wenzel date = 2012-05-27 topic = Computer science/Concurrency/Process calculi, Computer science/System description languages abstract = The Circus specification language combines elements for complex data and behavior specifications, using an integration of Z and CSP with a refinement calculus. Its semantics is based on Hoare and He's Unifying Theories of Programming (UTP). Isabelle/Circus is a formalization of the UTP and the Circus language in Isabelle/HOL. It contains proof rules and tactic support that allows for proofs of refinement for Circus processes (involving both data and behavioral aspects).

The Isabelle/Circus environment supports a syntax for the semantic definitions which is close to textbook presentations of Circus. This article contains an extended version of corresponding VSTTE Paper together with the complete formal development of its underlying commented theories. extra-history = Change history: [2014-06-05]: More polishing, shorter proofs, added Circus syntax, added Makarius Wenzel as contributor. notify = [Dijkstra_Shortest_Path] title = Dijkstra's Shortest Path Algorithm author = Benedikt Nordhoff , Peter Lammich topic = Computer science/Algorithms/Graph date = 2012-01-30 abstract = We implement and prove correct Dijkstra's algorithm for the single source shortest path problem, conceived in 1956 by E. Dijkstra. The algorithm is implemented using the data refinement framework for monadic, nondeterministic programs. An efficient implementation is derived using data structures from the Isabelle Collection Framework. notify = lammich@in.tum.de [Refine_Monadic] title = Refinement for Monadic Programs author = Peter Lammich topic = Computer science/Programming languages/Logics date = 2012-01-30 abstract = We provide a framework for program and data refinement in Isabelle/HOL. The framework is based on a nondeterminism-monad with assertions, i.e., the monad carries a set of results or an assertion failure. Recursion is expressed by fixed points. For convenience, we also provide while and foreach combinators.

The framework provides tools to automate canonical tasks, such as verification condition generation, finding appropriate data refinement relations, and refining an executable program to a form that is accepted by the Isabelle/HOL code generator.
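The flavour of such a monad can be conveyed by a toy Haskell analogue, with lists standing in for sets and Fail playing the role of an assertion failure (this is only a sketch of the idea, not the entry's definitions):

  data Nres a = Fail | Res [a]
    deriving Show

  ret :: a -> Nres a
  ret x = Res [x]

  bind :: Nres a -> (a -> Nres b) -> Nres b
  bind Fail     _ = Fail
  bind (Res xs) f = foldr combine (Res []) (map f xs)
    where combine Fail     _        = Fail
          combine _        Fail     = Fail
          combine (Res ys) (Res zs) = Res (ys ++ zs)

  assert :: Bool -> Nres ()
  assert True  = Res [()]
  assert False = Fail

  -- Refinement: every possible result of the concrete program is allowed
  -- by the abstract one; Fail (the top element) allows everything.
  refines :: Eq a => Nres a -> Nres a -> Bool
  refines _        Fail     = True
  refines Fail     _        = False
  refines (Res xs) (Res ys) = all (`elem` ys) xs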

This submission comes with a collection of examples and a user-guide, illustrating the usage of the framework. extra-history = Change history: [2012-04-23] Introduced ordered FOREACH loops
[2012-06] New features: REC_rule_arb and RECT_rule_arb allow for generalizing over variables. prepare_code_thms - command extracts code equations for recursion combinators.
[2012-07] New example: Nested DFS for emptiness check of Buchi-automata with witness.
New feature: fo_rule method to apply resolution using first-order matching. Useful for arg_conf, fun_cong.
[2012-08] Adaptation to ICF v2.
[2012-10-05] Adaptations to include support for Automatic Refinement Framework.
[2013-09] This entry now depends on Automatic Refinement
[2014-06] New feature: vc_solve method to solve verification conditions. Maintenance changes: VCG-rules for nfoldli, improved setup for FOREACH-loops.
[2014-07] Now defining recursion via flat domain. Dropped many single-valued prerequisites. Changed notion of data refinement. In single-valued case, this matches the old notion. In non-single valued case, the new notion allows for more convenient rules. In particular, the new definitions allow for projecting away ghost variables as a refinement step.
[2014-11] New features: le-or-fail relation (leof), modular reasoning about loop invariants. notify = lammich@in.tum.de [Refine_Imperative_HOL] title = The Imperative Refinement Framework author = Peter Lammich notify = lammich@in.tum.de date = 2016-08-08 topic = Computer science/Programming languages/Transformations,Computer science/Data structures abstract = We present the Imperative Refinement Framework (IRF), a tool that supports a stepwise refinement-based approach to imperative programs. This entry is based on the material we presented in [ITP-2015, CPP-2016]. It uses the Monadic Refinement Framework as a frontend for the specification of the abstract programs, and Imperative/HOL as a backend to generate executable imperative programs. The IRF comes with tool support to synthesize imperative programs from more abstract, functional ones, using efficient imperative implementations for the abstract data structures. This entry also includes the Imperative Isabelle Collection Framework (IICF), which provides a library of re-usable imperative collection data structures. Moreover, this entry contains a quickstart guide and a reference manual, which provide an introduction to using the IRF for Isabelle/HOL experts. It also provides a collection of (partly commented) practical examples, some highlights being Dijkstra's Algorithm, Nested-DFS, and a generic worklist algorithm with subsumption. Finally, this entry contains benchmark scripts that compare the runtime of some examples against reference implementations of the algorithms in Java and C++. [ITP-2015] Peter Lammich: Refinement to Imperative/HOL. ITP 2015: 253--269 [CPP-2016] Peter Lammich: Refinement based verification of imperative data structures. CPP 2016: 27--36 [Automatic_Refinement] title = Automatic Data Refinement author = Peter Lammich topic = Computer science/Programming languages/Logics date = 2013-10-02 abstract = We present the Autoref tool for Isabelle/HOL, which automatically refines algorithms specified over abstract concepts like maps and sets to algorithms over concrete implementations like red-black-trees, and produces a refinement theorem. It is based on ideas borrowed from relational parametricity due to Reynolds and Wadler. The tool allows for rapid prototyping of verified, executable algorithms. Moreover, it can be configured to fine-tune the result to the user's needs. Our tool is able to automatically instantiate generic algorithms, which greatly simplifies the implementation of executable data structures.

This AFP-entry provides the basic tool, which is then used by the Refinement and Collection Framework to provide automatic data refinement for the nondeterminism monad and various collection datastructures. notify = lammich@in.tum.de [EdmondsKarp_Maxflow] title = Formalizing the Edmonds-Karp Algorithm author = Peter Lammich , S. Reza Sefidgar<> notify = lammich@in.tum.de date = 2016-08-12 topic = Computer science/Algorithms/Graph abstract = We present a formalization of the Ford-Fulkerson method for computing the maximum flow in a network. Our formal proof closely follows a standard textbook proof, and is accessible even without being an expert in Isabelle/HOL--- the interactive theorem prover used for the formalization. We then use stepwise refinement to obtain the Edmonds-Karp algorithm, and formally prove a bound on its complexity. Further refinement yields a verified implementation, whose execution time compares well to an unverified reference implementation in Java. This entry is based on our ITP-2016 paper with the same title. [VerifyThis2018] title = VerifyThis 2018 - Polished Isabelle Solutions author = Peter Lammich , Simon Wimmer topic = Computer science/Algorithms date = 2018-04-27 notify = lammich@in.tum.de abstract = VerifyThis 2018 was a program verification competition associated with ETAPS 2018. It was the 7th event in the VerifyThis competition series. In this entry, we present polished and completed versions of our solutions that we created during the competition. [PseudoHoops] title = Pseudo Hoops author = George Georgescu <>, Laurentiu Leustean <>, Viorel Preoteasa topic = Mathematics/Algebra date = 2011-09-22 abstract = Pseudo-hoops are algebraic structures introduced by B. Bosbach under the name of complementary semigroups. In this formalization we prove some properties of pseudo-hoops and we define the basic concepts of filter and normal filter. The lattice of normal filters is isomorphic with the lattice of congruences of a pseudo-hoop. We also study some important classes of pseudo-hoops. Bounded Wajsberg pseudo-hoops are equivalent to pseudo-Wajsberg algebras and bounded basic pseudo-hoops are equivalent to pseudo-BL algebras. Some examples of pseudo-hoops are given in the last section of the formalization. notify = viorel.preoteasa@aalto.fi [MonoBoolTranAlgebra] title = Algebra of Monotonic Boolean Transformers author = Viorel Preoteasa topic = Computer science/Programming languages/Logics date = 2011-09-22 abstract = Algebras of imperative programming languages have been successful in reasoning about programs. In general an algebra of programs is an algebraic structure with programs as elements and with program compositions (sequential composition, choice, skip) as algebra operations. Various versions of these algebras were introduced to model partial correctness, total correctness, refinement, demonic choice, and other aspects. We formalize here an algebra which can be used to model total correctness, refinement, demonic and angelic choice. The basic model of this algebra are monotonic Boolean transformers (monotonic functions from a Boolean algebra to itself). notify = viorel.preoteasa@aalto.fi [LatticeProperties] title = Lattice Properties author = Viorel Preoteasa topic = Mathematics/Order date = 2011-09-22 abstract = This formalization introduces and collects some algebraic structures based on lattices and complete lattices for use in other developments. The structures introduced are modular, and lattice ordered groups. 
In addition to the results proved for the new lattices, this formalization also introduces theorems about latices and complete lattices in general. extra-history = Change history: [2012-01-05]: Removed the theory about distributive complete lattices which is in the standard library now. Added a theory about well founded and transitive relations and a result about fixpoints in complete lattices and well founded relations. Moved the results about conjunctive and disjunctive functions to a new theory. Removed the syntactic classes for inf and sup which are in the standard library now. notify = viorel.preoteasa@aalto.fi [Impossible_Geometry] title = Proving the Impossibility of Trisecting an Angle and Doubling the Cube author = Ralph Romanos , Lawrence C. Paulson topic = Mathematics/Algebra, Mathematics/Geometry date = 2012-08-05 abstract = Squaring the circle, doubling the cube and trisecting an angle, using a compass and straightedge alone, are classic unsolved problems first posed by the ancient Greeks. All three problems were proved to be impossible in the 19th century. The following document presents the proof of the impossibility of solving the latter two problems using Isabelle/HOL, following a proof by Carrega. The proof uses elementary methods: no Galois theory or field extensions. The set of points constructible using a compass and straightedge is defined inductively. Radical expressions, which involve only square roots and arithmetic of rational numbers, are defined, and we find that all constructive points have radical coordinates. Finally, doubling the cube and trisecting certain angles requires solving certain cubic equations that can be proved to have no rational roots. The Isabelle proofs require a great many detailed calculations. notify = ralph.romanos@student.ecp.fr, lp15@cam.ac.uk [IP_Addresses] title = IP Addresses author = Cornelius Diekmann , Julius Michaelis , Lars Hupel notify = diekmann@net.in.tum.de date = 2016-06-28 topic = Computer science/Networks abstract = This entry contains a definition of IP addresses and a library to work with them. Generic IP addresses are modeled as machine words of arbitrary length. Derived from this generic definition, IPv4 addresses are 32bit machine words, IPv6 addresses are 128bit words. Additionally, IPv4 addresses can be represented in dot-decimal notation and IPv6 addresses in (compressed) colon-separated notation. We support toString functions and parsers for both notations. Sets of IP addresses can be represented with a netmask (e.g. 192.168.0.0/255.255.0.0) or in CIDR notation (e.g. 192.168.0.0/16). To provide executable code for set operations on IP address ranges, the library includes a datatype to work on arbitrary intervals of machine words. [Simple_Firewall] title = Simple Firewall author = Cornelius Diekmann , Julius Michaelis , Maximilian Haslbeck notify = diekmann@net.in.tum.de, max.haslbeck@gmx.de date = 2016-08-24 topic = Computer science/Networks abstract = We present a simple model of a firewall. The firewall can accept or drop a packet and can match on interfaces, IP addresses, protocol, and ports. It was designed to feature nice mathematical properties: The type of match expressions was carefully crafted such that the conjunction of two match expressions is only one match expression. This model is too simplistic to mirror all aspects of the real world. In the upcoming entry "Iptables Semantics", we will translate the Linux firewall iptables to this model. For a fixed service (e.g. 
ssh, http), we provide an algorithm to compute an overview of the firewall's filtering behavior. The algorithm computes minimal service matrices, i.e. graphs which partition the complete IPv4 and IPv6 address space and visualize the allowed accesses between partitions. For a detailed description, see Verified iptables Firewall Analysis, IFIP Networking 2016. [Iptables_Semantics] title = Iptables Semantics author = Cornelius Diekmann , Lars Hupel notify = diekmann@net.in.tum.de, hupel@in.tum.de date = 2016-09-09 topic = Computer science/Networks abstract = We present a big step semantics of the filtering behavior of the Linux/netfilter iptables firewall. We provide algorithms to simplify complex iptables rulests to a simple firewall model (c.f. AFP entry Simple_Firewall) and to verify spoofing protection of a ruleset. Internally, we embed our semantics into ternary logic, ultimately supporting every iptables match condition by abstracting over unknowns. Using this AFP entry and all entries it depends on, we created an easy-to-use, stand-alone haskell tool called fffuu. The tool does not require any input —except for the iptables-save dump of the analyzed firewall— and presents interesting results about the user's ruleset. Real-Word firewall errors have been uncovered, and the correctness of rulesets has been proved, with the help of our tool. [Routing] title = Routing author = Julius Michaelis , Cornelius Diekmann notify = afp@liftm.de date = 2016-08-31 topic = Computer science/Networks abstract = This entry contains definitions for routing with routing tables/longest prefix matching. A routing table entry is modelled as a record of a prefix match, a metric, an output port, and an optional next hop. A routing table is a list of entries, sorted by prefix length and metric. Additionally, a parser and serializer for the output of the ip-route command, a function to create a relation from output port to corresponding destination IP space, and a model of a Linux-style router are included. [KBPs] title = Knowledge-based programs author = Peter Gammie topic = Computer science/Automata and formal languages date = 2011-05-17 abstract = Knowledge-based programs (KBPs) are a formalism for directly relating agents' knowledge and behaviour. Here we present a general scheme for compiling KBPs to executable automata with a proof of correctness in Isabelle/HOL. We develop the algorithm top-down, using Isabelle's locale mechanism to structure these proofs, and show that two classic examples can be synthesised using Isabelle's code generator. extra-history = Change history: [2012-03-06]: Add some more views and revive the code generation. notify = kleing@cse.unsw.edu.au [Tarskis_Geometry] title = The independence of Tarski's Euclidean axiom author = T. J. M. Makarios topic = Mathematics/Geometry date = 2012-10-30 abstract = Tarski's axioms of plane geometry are formalized and, using the standard real Cartesian model, shown to be consistent. A substantial theory of the projective plane is developed. Building on this theory, the Klein-Beltrami model of the hyperbolic plane is defined and shown to satisfy all of Tarski's axioms except his Euclidean axiom; thus Tarski's Euclidean axiom is shown to be independent of his other axioms of plane geometry.

An earlier version of this work was the subject of the author's MSc thesis, which contains natural-language explanations of some of the more interesting proofs. notify = tjm1983@gmail.com [General-Triangle] title = The General Triangle Is Unique author = Joachim Breitner topic = Mathematics/Geometry date = 2011-04-01 abstract = Some acute-angled triangles are special, e.g. right-angled or isoscele triangles. Some are not of this kind, but, without measuring angles, look as if they were. In that sense, there is exactly one general triangle. This well-known fact is proven here formally. notify = mail@joachim-breitner.de [LightweightJava] title = Lightweight Java author = Rok Strniša , Matthew Parkinson topic = Computer science/Programming languages/Language definitions date = 2011-02-07 abstract = A fully-formalized and extensible minimal imperative fragment of Java. notify = rok@strnisa.com [Lower_Semicontinuous] title = Lower Semicontinuous Functions author = Bogdan Grechuk topic = Mathematics/Analysis date = 2011-01-08 abstract = We define the notions of lower and upper semicontinuity for functions from a metric space to the extended real line. We prove that a function is both lower and upper semicontinuous if and only if it is continuous. We also give several equivalent characterizations of lower semicontinuity. In particular, we prove that a function is lower semicontinuous if and only if its epigraph is a closed set. Also, we introduce the notion of the lower semicontinuous hull of an arbitrary function and prove its basic properties. notify = hoelzl@in.tum.de [RIPEMD-160-SPARK] title = RIPEMD-160 author = Fabian Immler topic = Computer science/Programming languages/Static analysis date = 2011-01-10 abstract = This work presents a verification of an implementation in SPARK/ADA of the cryptographic hash-function RIPEMD-160. A functional specification of RIPEMD-160 is given in Isabelle/HOL. Proofs for the verification conditions generated by the static-analysis toolset of SPARK certify the functional correctness of the implementation. extra-history = Change history: [2015-11-09]: Entry is now obsolete, moved to Isabelle distribution. notify = immler@in.tum.de [Regular-Sets] title = Regular Sets and Expressions author = Alexander Krauss , Tobias Nipkow contributors = Manuel Eberl topic = Computer science/Automata and formal languages date = 2010-05-12 abstract = This is a library of constructions on regular expressions and languages. It provides the operations of concatenation, Kleene star and derivative on languages. Regular expressions and their meaning are defined. An executable equivalence checker for regular expressions is verified; it does not need automata but works directly on regular expressions. By mapping regular expressions to binary relations, an automatic and complete proof method for (in)equalities of binary relations over union, concatenation and (reflexive) transitive closure is obtained.
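The automaton-free, derivative-based view can be illustrated by a short Haskell sketch of Brzozowski derivatives (an illustration of the underlying idea only; the entry develops these notions for languages and regular expressions in Isabelle/HOL):

  data Rexp c = Zero | One | Atom c
              | Plus (Rexp c) (Rexp c) | Times (Rexp c) (Rexp c) | Star (Rexp c)

  -- Does the language of r contain the empty word?
  nullable :: Rexp c -> Bool
  nullable Zero        = False
  nullable One         = True
  nullable (Atom _)    = False
  nullable (Plus r s)  = nullable r || nullable s
  nullable (Times r s) = nullable r && nullable s
  nullable (Star _)    = True

  -- Derivative: 'deriv c r' denotes { w | c:w is in the language of r }.
  deriv :: Eq c => c -> Rexp c -> Rexp c
  deriv _ Zero        = Zero
  deriv _ One         = Zero
  deriv c (Atom d)    = if c == d then One else Zero
  deriv c (Plus r s)  = Plus (deriv c r) (deriv c s)
  deriv c (Times r s)
    | nullable r      = Plus (Times (deriv c r) s) (deriv c s)
    | otherwise       = Times (deriv c r) s
  deriv c (Star r)    = Times (deriv c r) (Star r)

  -- Word membership by repeated differentiation.
  matches :: Eq c => Rexp c -> [c] -> Bool
  matches r w = nullable (foldl (flip deriv) r w)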

Extended regular expressions with complement and intersection are also defined and an equivalence checker is provided. extra-history = Change history: [2011-08-26]: Christian Urban added a theory about derivatives and partial derivatives of regular expressions
[2012-05-10]: Tobias Nipkow added extended regular expressions
[2012-05-10]: Tobias Nipkow added equivalence checking with partial derivatives notify = nipkow@in.tum.de, krauss@in.tum.de, christian.urban@kcl.ac.uk [Regex_Equivalence] title = Unified Decision Procedures for Regular Expression Equivalence author = Tobias Nipkow , Dmitriy Traytel topic = Computer science/Automata and formal languages date = 2014-01-30 abstract = We formalize a unified framework for verified decision procedures for regular expression equivalence. Five recently published formalizations of such decision procedures (three based on derivatives, two on marked regular expressions) can be obtained as instances of the framework. We discover that the two approaches based on marked regular expressions, which were previously thought to be the same, are different, and one seems to produce uniformly smaller automata. The common framework makes it possible to compare the performance of the different decision procedures in a meaningful way. The formalization is described in a paper of the same name presented at Interactive Theorem Proving 2014. notify = nipkow@in.tum.de, traytel@in.tum.de [MSO_Regex_Equivalence] title = Decision Procedures for MSO on Words Based on Derivatives of Regular Expressions author = Dmitriy Traytel , Tobias Nipkow topic = Computer science/Automata and formal languages, Logic/General logic/Decidability of theories date = 2014-06-12 abstract = Monadic second-order logic on finite words (MSO) is a decidable yet expressive logic into which many decision problems can be encoded. Since MSO formulas correspond to regular languages, equivalence of MSO formulas can be reduced to the equivalence of some regular structures (e.g. automata). We verify an executable decision procedure for MSO formulas that is not based on automata but on regular expressions.

Decision procedures for regular expression equivalence have been formalized before, usually based on Brzozowski derivatives. Yet, for a straightforward embedding of MSO formulas into regular expressions an extension of regular expressions with a projection operation is required. We prove total correctness and completeness of an equivalence checker for regular expressions extended in that way. We also define a language-preserving translation of formulas into regular expressions with respect to two different semantics of MSO.

The formalization is described in this ICFP 2013 functional pearl. notify = traytel@in.tum.de, nipkow@in.tum.de [Formula_Derivatives] title = Derivatives of Logical Formulas author = Dmitriy Traytel topic = Computer science/Automata and formal languages, Logic/General logic/Decidability of theories date = 2015-05-28 abstract = We formalize new decision procedures for WS1S, M2L(Str), and Presburger Arithmetics. Formulas of these logics denote regular languages. Unlike traditional decision procedures, we do not translate formulas into automata (nor into regular expressions), at least not explicitly. Instead we devise notions of derivatives (inspired by Brzozowski derivatives for regular expressions) that operate on formulas directly and compute a syntactic bisimulation using these derivatives. The treatment of Boolean connectives and quantifiers is uniform for all mentioned logics and is abstracted into a locale. This locale is then instantiated by different atomic formulas and their derivatives (which may differ even for the same logic under different encodings of interpretations as formal words).

The WS1S instance is described in the draft paper A Coalgebraic Decision Procedure for WS1S by the author. notify = traytel@in.tum.de [Myhill-Nerode] title = The Myhill-Nerode Theorem Based on Regular Expressions author = Chunhan Wu <>, Xingyuan Zhang <>, Christian Urban contributors = Manuel Eberl topic = Computer science/Automata and formal languages date = 2011-08-26 abstract = There are many proofs of the Myhill-Nerode theorem using automata. In this library we give a proof entirely based on regular expressions, since regularity of languages can be conveniently defined using regular expressions (it is more painful in HOL to define regularity in terms of automata). We prove the first direction of the Myhill-Nerode theorem by solving equational systems that involve regular expressions. For the second direction we give two proofs: one using tagging-functions and another using partial derivatives. We also establish various closure properties of regular languages. Most details of the theories are described in our ITP 2011 paper. notify = christian.urban@kcl.ac.uk [Universal_Turing_Machine] title = Universal Turing Machine author = Jian Xu<>, Xingyuan Zhang<>, Christian Urban , Sebastiaan J. C. Joosten topic = Logic/Computability, Computer science/Automata and formal languages date = 2019-02-08 notify = sjcjoosten@gmail.com, christian.urban@kcl.ac.uk abstract = We formalise results from computability theory: recursive functions, undecidability of the halting problem, and the existence of a universal Turing machine. This formalisation is the AFP entry corresponding to the paper Mechanising Turing Machines and Computability Theory in Isabelle/HOL, ITP 2013. [CYK] title = A formalisation of the Cocke-Younger-Kasami algorithm author = Maksym Bortin date = 2016-04-27 topic = Computer science/Algorithms, Computer science/Automata and formal languages abstract = The theory provides a formalisation of the Cocke-Younger-Kasami algorithm (CYK for short), an approach to solving the word problem for context-free languages. CYK decides if a word is in the languages generated by a context-free grammar in Chomsky normal form. The formalized algorithm is executable. notify = maksym.bortin@nicta.com.au [Boolean_Expression_Checkers] title = Boolean Expression Checkers author = Tobias Nipkow date = 2014-06-08 topic = Computer science/Algorithms, Logic/General logic/Mechanization of proofs abstract = This entry provides executable checkers for the following properties of boolean expressions: satisfiability, tautology and equivalence. Internally, the checkers operate on binary decision trees and are reasonably efficient (for purely functional algorithms). extra-history = Change history: [2015-09-23]: Salomon Sickert added an interface that does not require the usage of the Boolean formula datatype. Furthermore the general Mapping type is used instead of an association list. notify = nipkow@in.tum.de [Presburger-Automata] title = Formalizing the Logic-Automaton Connection author = Stefan Berghofer , Markus Reiter <> date = 2009-12-03 topic = Computer science/Automata and formal languages, Logic/General logic/Decidability of theories abstract = This work presents a formalization of a library for automata on bit strings. It forms the basis of a reflection-based decision procedure for Presburger arithmetic, which is efficiently executable thanks to Isabelle's code generator. With this work, we therefore provide a mechanized proof of a well-known connection between logic and automata theory. 
The formalization is also described in a publication [TPHOLs 2009]. notify = berghofe@in.tum.de [Functional-Automata] title = Functional Automata author = Tobias Nipkow date = 2004-03-30 topic = Computer science/Automata and formal languages abstract = This theory defines deterministic and nondeterministic automata in a functional representation: the transition function/relation and the finality predicate are just functions. Hence the state space may be infinite. It is shown how to convert regular expressions into such automata. A scanner (generator) is implemented with the help of functional automata: the scanner chops the input up into longest recognized substrings. Finally we also show how to convert a certain subclass of functional automata (essentially the finite deterministic ones) into regular sets. notify = nipkow@in.tum.de [Statecharts] title = Formalizing Statecharts using Hierarchical Automata author = Steffen Helke , Florian Kammüller topic = Computer science/Automata and formal languages date = 2010-08-08 abstract = We formalize in Isabelle/HOL the abtract syntax and a synchronous step semantics for the specification language Statecharts. The formalization is based on Hierarchical Automata which allow a structural decomposition of Statecharts into Sequential Automata. To support the composition of Statecharts, we introduce calculating operators to construct a Hierarchical Automaton in a stepwise manner. Furthermore, we present a complete semantics of Statecharts including a theory of data spaces, which enables the modelling of racing effects. We also adapt CTL for Statecharts to build a bridge for future combinations with model checking. However the main motivation of this work is to provide a sound and complete basis for reasoning on Statecharts. As a central meta theorem we prove that the well-formedness of a Statechart is preserved by the semantics. notify = nipkow@in.tum.de [Stuttering_Equivalence] title = Stuttering Equivalence author = Stephan Merz topic = Computer science/Automata and formal languages date = 2012-05-07 abstract =

Two omega-sequences are stuttering equivalent if they differ only by finite repetitions of elements. Stuttering equivalence is a fundamental concept in the theory of concurrent and distributed systems. Notably, Lamport argues that refinement notions for such systems should be insensitive to finite stuttering. Peled and Wilke showed that all PLTL (propositional linear-time temporal logic) properties that are insensitive to stuttering equivalence can be expressed without the next-time operator. Stuttering equivalence is also important for certain verification techniques such as partial-order reduction for model checking.

We formalize stuttering equivalence in Isabelle/HOL. Our development relies on the notion of stuttering sampling functions that may skip blocks of identical sequence elements. We also encode PLTL and prove the theorem due to Peled and Wilke.
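For intuition, the finite-list analogue of the equivalence takes only a few lines of Haskell (the entry itself works with omega-sequences and stuttering sampling functions, so this is merely an illustration):

  import Data.List (group)

  -- Collapse maximal blocks of repeated elements.
  destutter :: Eq a => [a] -> [a]
  destutter = map head . group

  stutterEquiv :: Eq a => [a] -> [a] -> Bool
  stutterEquiv xs ys = destutter xs == destutter ys

  -- Example: "aabcc" and "abbbc" both collapse to "abc".
  demo :: Bool
  demo = stutterEquiv "aabcc" "abbbc"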

extra-history = Change history: [2013-01-31]: Added encoding of PLTL and proved Peled and Wilke's theorem. Adjusted abstract accordingly. notify = Stephan.Merz@loria.fr [Coinductive_Languages] title = A Codatatype of Formal Languages author = Dmitriy Traytel topic = Computer science/Automata and formal languages date = 2013-11-15 abstract =

We define formal languages as a codatatype of infinite trees branching over the alphabet. Each node in such a tree indicates whether the path to this node constitutes a word inside or outside of the language. This codatatype is isomorphic to the set-of-lists representation of languages, but caters for definitions by corecursion and proofs by coinduction.

Regular operations on languages are then defined by primitive corecursion. A difficulty arises here, since the standard definitions of concatenation and iteration from the coalgebraic literature are not primitively corecursive: they require guardedness up-to union/concatenation. Without support for up-to corecursion, these operations must be defined as a composition of primitive ones (and proved equal to the standard definitions). As an exercise in coinduction we also prove the axioms of Kleene algebra for the defined regular operations.

Furthermore, a language for context-free grammars given by productions in Greibach normal form and an initial nonterminal is constructed by primitive corecursion, yielding an executable decision procedure for the word problem without further ado.
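In Haskell, where laziness plays the role of corecursion, the codatatype and the composition trick described above might be rendered roughly as follows (an informal sketch, not the entry's Isabelle/HOL definitions):

  -- A language is an infinite tree: 'eps' says whether the word read so far
  -- is in the language, 'delta' continues reading one letter.
  data Lang c = Lang { eps :: Bool, delta :: c -> Lang c }

  member :: Lang c -> [c] -> Bool
  member l []      = eps l
  member l (c : w) = member (delta l c) w

  union :: Lang c -> Lang c -> Lang c
  union l1 l2 = Lang (eps l1 || eps l2) (\c -> union (delta l1 c) (delta l2 c))

  -- Concatenation is not guarded on its own; composing it with 'union'
  -- keeps the definition productive, mirroring the remark above.
  conc :: Lang c -> Lang c -> Lang c
  conc l1 l2 = Lang (eps l1 && eps l2) step
    where step c | eps l1    = union (conc (delta l1 c) l2) (delta l2 c)
                 | otherwise = conc (delta l1 c) l2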

notify = traytel@in.tum.de [Tree-Automata] title = Tree Automata author = Peter Lammich date = 2009-11-25 topic = Computer science/Automata and formal languages abstract = This work presents a machine-checked tree automata library for Standard-ML, OCaml and Haskell. The algorithms are efficient by using appropriate data structures like RB-trees. The available algorithms for non-deterministic automata include membership query, reduction, intersection, union, and emptiness check with computation of a witness for non-emptiness. The executable algorithms are derived from less-concrete, non-executable algorithms using data-refinement techniques. The concrete data structures are from the Isabelle Collections Framework. Moreover, this work contains a formalization of the class of tree-regular languages and its closure properties under set operations. notify = peter.lammich@uni-muenster.de, nipkow@in.tum.de [Depth-First-Search] title = Depth First Search author = Toshiaki Nishihara <>, Yasuhiko Minamide <> date = 2004-06-24 topic = Computer science/Algorithms/Graph abstract = Depth-first search of a graph is formalized with recdef. It is shown that it visits all of the reachable nodes from a given list of nodes. Executable ML code of depth-first search is obtained using the code generation feature of Isabelle/HOL. notify = lp15@cam.ac.uk, krauss@in.tum.de [FFT] title = Fast Fourier Transform author = Clemens Ballarin date = 2005-10-12 topic = Computer science/Algorithms/Mathematical abstract = We formalise a functional implementation of the FFT algorithm over the complex numbers, and its inverse. Both are shown equivalent to the usual definitions of these operations through Vandermonde matrices. They are also shown to be inverse to each other, more precisely, that composition of the inverse and the transformation yield the identity up to a scalar. notify = ballarin@in.tum.de [Gauss-Jordan-Elim-Fun] title = Gauss-Jordan Elimination for Matrices Represented as Functions author = Tobias Nipkow date = 2011-08-19 topic = Computer science/Algorithms/Mathematical, Mathematics/Algebra abstract = This theory provides a compact formulation of Gauss-Jordan elimination for matrices represented as functions. Its distinctive feature is succinctness. It is not meant for large computations. notify = nipkow@in.tum.de [UpDown_Scheme] title = Verification of the UpDown Scheme author = Johannes Hölzl date = 2015-01-28 topic = Computer science/Algorithms/Mathematical abstract = The UpDown scheme is a recursive scheme used to compute the stiffness matrix on a special form of sparse grids. Usually, when discretizing a Euclidean space of dimension d we need O(n^d) points, for n points along each dimension. Sparse grids are a hierarchical representation where the number of points is reduced to O(n * log(n)^d). One disadvantage of such sparse grids is that the algorithm now operate recursively in the dimensions and levels of the sparse grid.

The UpDown scheme allows us to compute the stiffness matrix on such a sparse grid. The stiffness matrix represents the influence of each representation function on the L^2 scalar product. For a detailed description see Dirk Pflüger's PhD thesis. This formalization was developed as an interdisciplinary project (IDP) at the Technische Universität München. notify = hoelzl@in.tum.de [GraphMarkingIBP] title = Verification of the Deutsch-Schorr-Waite Graph Marking Algorithm using Data Refinement author = Viorel Preoteasa , Ralph-Johan Back date = 2010-05-28 topic = Computer science/Algorithms/Graph abstract = The verification of the Deutsch-Schorr-Waite graph marking algorithm is used as a benchmark in many formalizations of pointer programs. The main purpose of this mechanization is to show how data refinement of invariant based programs can be used in verifying practical algorithms. The verification starts with an abstract algorithm working on a graph given by a relation next on nodes. Gradually the abstract program is refined into Deutsch-Schorr-Waite graph marking algorithm where only one bit per graph node of additional memory is used for marking. extra-history = Change history: [2012-01-05]: Updated for the new definition of data refinement and the new syntax for demonic and angelic update statements notify = viorel.preoteasa@aalto.fi [Efficient-Mergesort] title = Efficient Mergesort topic = Computer science/Algorithms date = 2011-11-09 author = Christian Sternagel abstract = We provide a formalization of the mergesort algorithm as used in GHC's Data.List module, proving correctness and stability. Furthermore, experimental data suggests that generated (Haskell-)code for this algorithm is much faster than for previous algorithms available in the Isabelle distribution. extra-history = Change history: [2012-10-24]: Added reference to journal article.
[2018-09-17]: Added theory Efficient_Mergesort that works exclusively with the mutual induction schemas generated by the function package.
[2018-09-19]: Added theory Mergesort_Complexity that proves an upper bound on the number of comparisons that are required by mergesort.
[2018-09-19]: Theory Efficient_Mergesort replaces theory Efficient_Sort while keeping the old name Efficient_Sort. notify = c.sternagel@gmail.com [SATSolverVerification] title = Formal Verification of Modern SAT Solvers author = Filip Marić date = 2008-07-23 topic = Computer science/Algorithms abstract = This document contains formal correctness proofs of modern SAT solvers. Following (Krstic et al., 2007) and (Nieuwenhuis et al., 2006), solvers are described using state-transition systems. Several different SAT solver descriptions are given and their partial correctness and termination are proved. These include:

  • a solver based on the classical DPLL procedure (using only a backtrack-search with unit propagation; a sketch of this procedure follows the list),
  • a very general solver with backjumping and learning (similar to the description given in (Nieuwenhuis et al., 2006)), and
  • a solver with a specific conflict analysis algorithm (similar to the description given in (Krstic et al., 2007)).
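
As a companion to the first item in the list above, the following is a minimal Haskell sketch of a classical DPLL backtrack search with unit propagation (purely illustrative; it does not mirror the entry's state-transition descriptions):

    type Lit = Int                       -- positive/negative integers as literals
    type Clause = [Lit]
    type CNF = [Clause]

    -- assign literal l the value True: drop satisfied clauses, remove the negated literal elsewhere
    assign :: Lit -> CNF -> CNF
    assign l = map (filter (/= negate l)) . filter (l `notElem`)

    -- dpll cnf [] returns a satisfying assignment as a list of literals, or Nothing
    dpll :: CNF -> [Lit] -> Maybe [Lit]
    dpll [] model = Just model                               -- all clauses satisfied
    dpll cnf model
      | [] `elem` cnf = Nothing                              -- conflict: an empty clause
      | (u:_) <- [l | [l] <- cnf] = dpll (assign u cnf) (u : model)   -- unit propagation
      | otherwise =
          let l = head (head cnf)                            -- branch on some literal
          in case dpll (assign l cnf) (l : model) of
               Just m  -> Just m
               Nothing -> dpll (assign (negate l) cnf) (negate l : model)
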
Within the SAT solver correctness proofs, a large number of lemmas about propositional logic and CNF formulae are proved. This theory is self-contained and could be used for further exploration of properties of CNF-based SAT algorithms. notify = [Transitive-Closure] title = Executable Transitive Closures of Finite Relations topic = Computer science/Algorithms/Graph date = 2011-03-14 author = Christian Sternagel , René Thiemann license = LGPL abstract = We provide a generic work-list algorithm to compute the transitive closure of finite relations where only successors of newly detected states are generated. This algorithm is then instantiated for lists over arbitrary carriers and red-black trees (which are faster but require a linear order on the carrier), respectively. Our formalization was performed as part of the IsaFoR/CeTA project where reflexive transitive closures of large tree automata have to be computed. extra-history = Change history: [2014-09-04] added example simprocs in Finite_Transitive_Closure_Simprocs notify = c.sternagel@gmail.com, rene.thiemann@uibk.ac.at [Transitive-Closure-II] title = Executable Transitive Closures topic = Computer science/Algorithms/Graph date = 2012-02-29 author = René Thiemann license = LGPL abstract =

We provide a generic work-list algorithm to compute the (reflexive-)transitive closure of relations where only successors of newly detected states are generated. In contrast to our previous work, the relations do not have to be finite, but each element must only have finitely many (indirect) successors. Moreover, a subsumption relation can be used instead of pure equality. An executable variant of the algorithm is available where the generic operations are instantiated with list operations.
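
A minimal Haskell sketch of such a work-list exploration, for the special case of plain reachability and without the subsumption relation mentioned above (illustrative only, not the entry's code):

    import qualified Data.Set as Set

    -- states reachable from the start states; successors are generated only for newly found states
    reachable :: Ord a => (a -> [a]) -> [a] -> Set.Set a
    reachable succs start = go (Set.fromList start) start
      where
        go visited []       = visited
        go visited (x:work) =
          let new = [y | y <- succs x, not (y `Set.member` visited)]
          in go (foldr Set.insert visited new) (new ++ work)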

This formalization was performed as part of the IsaFoR/CeTA project, and it has been used to certify size-change termination proofs where large transitive closures have to be computed.

notify = rene.thiemann@uibk.ac.at [MuchAdoAboutTwo] title = Much Ado About Two author = Sascha Böhme date = 2007-11-06 topic = Computer science/Algorithms abstract = This article is an Isabelle formalisation of a paper with the same title. In a similar way as Knuth's 0-1-principle for sorting algorithms, that paper develops a 0-1-2-principle for parallel prefix computations. notify = boehmes@in.tum.de [DiskPaxos] title = Proving the Correctness of Disk Paxos date = 2005-06-22 author = Mauro Jaskelioff , Stephan Merz topic = Computer science/Algorithms/Distributed abstract = Disk Paxos is an algorithm for building arbitrary fault-tolerant distributed systems. The specification of Disk Paxos has been proved correct informally and tested using the TLC model checker, but up to now, it has never been fully formally verified. In this work we have formally verified its correctness using the Isabelle theorem prover and the HOL logic system, showing that Isabelle is a practical tool for verifying properties of TLA+ specifications. notify = kleing@cse.unsw.edu.au [GenClock] title = Formalization of a Generalized Protocol for Clock Synchronization author = Alwen Tiu date = 2005-06-24 topic = Computer science/Algorithms/Distributed abstract = We formalize the generalized Byzantine fault-tolerant clock synchronization protocol of Schneider. This protocol abstracts from particular algorithms or implementations for clock synchronization. This abstraction includes several assumptions on the behaviors of physical clocks and on general properties of concrete algorithms/implementations. Based on these assumptions the correctness of the protocol is proved by Schneider. His proof was later verified by Shankar using the theorem prover EHDM (precursor to PVS). Our formalization in Isabelle/HOL is based on Shankar's formalization. notify = kleing@cse.unsw.edu.au [ClockSynchInst] title = Instances of Schneider's generalized protocol of clock synchronization author = Damián Barsotti date = 2006-03-15 topic = Computer science/Algorithms/Distributed abstract = F. B. Schneider ("Understanding protocols for Byzantine clock synchronization") generalizes a number of protocols for Byzantine fault-tolerant clock synchronization and presents a uniform proof for their correctness. In Schneider's schema, each processor maintains a local clock by periodically adjusting each value to one computed by a convergence function applied to the readings of all the clocks. Then, correctness of an algorithm, i.e. that the readings of two clocks at any time are within a fixed bound of each other, is based upon some conditions on the convergence function. To prove that a particular clock synchronization algorithm is correct it suffices to show that the convergence function used by the algorithm meets Schneider's conditions. Using the theorem prover Isabelle, we formalize the proofs that the convergence functions of two algorithms, namely, the Interactive Convergence Algorithm (ICA) of Lamport and Melliar-Smith and the Fault-tolerant Midpoint algorithm of Lundelius-Lynch, meet Schneider's conditions. Furthermore, we experiment on handling some parts of the proofs with fully automatic tools like ICS and CVC-lite. These theories are part of a joint work with Alwen Tiu and Leonor P. Nieto "Verification of Clock Synchronization Algorithms: Experiments on a combination of deductive tools" in proceedings of AVOCS 2005. In this work the correctness of Schneider schema was also verified using Isabelle (entry GenClock in AFP). 
notify = kleing@cse.unsw.edu.au [Heard_Of] title = Verifying Fault-Tolerant Distributed Algorithms in the Heard-Of Model date = 2012-07-27 author = Henri Debrat , Stephan Merz topic = Computer science/Algorithms/Distributed abstract = Distributed computing is inherently based on replication, promising increased tolerance to failures of individual computing nodes or communication channels. Realizing this promise, however, involves quite subtle algorithmic mechanisms, and requires precise statements about the kinds and numbers of faults that an algorithm tolerates (such as process crashes, communication faults or corrupted values). The landmark theorem due to Fischer, Lynch, and Paterson shows that it is impossible to achieve Consensus among N asynchronously communicating nodes in the presence of even a single permanent failure. Existing solutions must rely on assumptions of "partial synchrony".

Indeed, there have been numerous misunderstandings about what exactly a given algorithm is supposed to realize in what kinds of environments. Moreover, the abundance of subtly different computational models complicates comparisons between different algorithms. Charron-Bost and Schiper introduced the Heard-Of model for representing algorithms and failure assumptions in a uniform framework, simplifying comparisons between algorithms.

In this contribution, we represent the Heard-Of model in Isabelle/HOL. We define two semantics of runs of algorithms with different unit of atomicity and relate these through a reduction theorem that allows us to verify algorithms in the coarse-grained semantics (where proofs are easier) and infer their correctness for the fine-grained one (which corresponds to actual executions). We instantiate the framework by verifying six Consensus algorithms that differ in the underlying algorithmic mechanisms and the kinds of faults they tolerate. notify = Stephan.Merz@loria.fr [Consensus_Refined] title = Consensus Refined date = 2015-03-18 author = Ognjen Maric <>, Christoph Sprenger topic = Computer science/Algorithms/Distributed abstract = Algorithms for solving the consensus problem are fundamental to distributed computing. Despite their brevity, their ability to operate in concurrent, asynchronous and failure-prone environments comes at the cost of complex and subtle behaviors. Accordingly, understanding how they work and proving their correctness is a non-trivial endeavor where abstraction is immensely helpful. Moreover, research on consensus has yielded a large number of algorithms, many of which appear to share common algorithmic ideas. A natural question is whether and how these similarities can be distilled and described in a precise, unified way. In this work, we combine stepwise refinement and lockstep models to provide an abstract and unified view of a sizeable family of consensus algorithms. Our models provide insights into the design choices underlying the different algorithms, and classify them based on those choices. notify = sprenger@inf.ethz.ch [Key_Agreement_Strong_Adversaries] title = Refining Authenticated Key Agreement with Strong Adversaries author = Joseph Lallemand , Christoph Sprenger topic = Computer science/Security license = LGPL date = 2017-01-31 notify = joseph.lallemand@loria.fr, sprenger@inf.ethz.ch abstract = We develop a family of key agreement protocols that are correct by construction. Our work substantially extends prior work on developing security protocols by refinement. First, we strengthen the adversary by allowing him to compromise different resources of protocol participants, such as their long-term keys or their session keys. This enables the systematic development of protocols that ensure strong properties such as perfect forward secrecy. Second, we broaden the class of protocols supported to include those with non-atomic keys and equationally defined cryptographic operators. We use these extensions to develop key agreement protocols including signed Diffie-Hellman and the core of IKEv1 and SKEME. [Security_Protocol_Refinement] title = Developing Security Protocols by Refinement author = Christoph Sprenger , Ivano Somaini<> topic = Computer science/Security license = LGPL date = 2017-05-24 notify = sprenger@inf.ethz.ch abstract = We propose a development method for security protocols based on stepwise refinement. Our refinement strategy transforms abstract security goals into protocols that are secure when operating over an insecure channel controlled by a Dolev-Yao-style intruder. As intermediate levels of abstraction, we employ messageless guard protocols and channel protocols communicating over channels with security properties. These abstractions provide insights on why protocols are secure and foster the development of families of protocols sharing common structure and properties. 
We have implemented our method in Isabelle/HOL and used it to develop different entity authentication and key establishment protocols, including realistic features such as key confirmation, replay caches, and encrypted tickets. Our development highlights that guard protocols and channel protocols provide fundamental abstractions for bridging the gap between security properties and standard protocol descriptions based on cryptographic messages. It also shows that our refinement approach scales to protocols of nontrivial size and complexity. [Abortable_Linearizable_Modules] title = Abortable Linearizable Modules author = Rachid Guerraoui , Viktor Kuncak , Giuliano Losa date = 2012-03-01 topic = Computer science/Algorithms/Distributed abstract = We define the Abortable Linearizable Module automaton (ALM for short) and prove its key composition property using the IOA theory of HOLCF. The ALM is at the heart of the Speculative Linearizability framework. This framework simplifies devising correct speculative algorithms by enabling their decomposition into independent modules that can be analyzed and proved correct in isolation. It is particularly useful when working in a distributed environment, where the need to tolerate faults and asynchrony has made current monolithic protocols so intricate that it is no longer tractable to check their correctness. Our theory contains a typical example of a refinement proof in the I/O-automata framework of Lynch and Tuttle. notify = giuliano@losa.fr, nipkow@in.tum.de [Amortized_Complexity] title = Amortized Complexity Verified author = Tobias Nipkow date = 2014-07-07 topic = Computer science/Data structures abstract = A framework for the analysis of the amortized complexity of functional data structures is formalized in Isabelle/HOL and applied to a number of standard examples and to the following non-trivial ones: skew heaps, splay trees, splay heaps and pairing heaps.
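
As general background on such analyses (standard material rather than anything specific to this entry): given a potential function Phi from data-structure states to non-negative numbers, the amortized cost of an operation with actual cost t that takes state s to state s' is a = t + Phi(s') - Phi(s). Summed over a sequence of operations the potentials telescope, so the total actual cost is bounded by the total amortized cost whenever Phi is zero initially and never negative.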

A preliminary version of this work (without pairing heaps) is described in a paper published in the proceedings of the conference on Interactive Theorem Proving ITP 2015. An extended version of this publication is available here. extra-history = Change history: [2015-03-17]: Added pairing heaps by Hauke Brinkop.
[2016-07-12]: Moved splay heaps from here to Splay_Tree
[2016-07-14]: Moved pairing heaps from here to the new Pairing_Heap notify = nipkow@in.tum.de [Dynamic_Tables] title = Parameterized Dynamic Tables author = Tobias Nipkow date = 2015-06-07 topic = Computer science/Data structures abstract = This article formalizes the amortized analysis of dynamic tables parameterized with their minimal and maximal load factors and the expansion and contraction factors.

A full description is found in a companion paper. notify = nipkow@in.tum.de [AVL-Trees] title = AVL Trees author = Tobias Nipkow , Cornelia Pusch <> date = 2004-03-19 topic = Computer science/Data structures abstract = Two formalizations of AVL trees with room for extensions. The first formalization is monolithic and shorter, the second one in two stages, longer and a bit simpler. The final implementation is the same. If you are interested in developing this further, please contact gerwin.klein@nicta.com.au. extra-history = Change history: [2011-04-11]: Ondrej Kuncar added delete function notify = kleing@cse.unsw.edu.au [BDD] title = BDD Normalisation author = Veronika Ortner <>, Norbert Schirmer <> date = 2008-02-29 topic = Computer science/Data structures abstract = We present the verification of the normalisation of a binary decision diagram (BDD). The normalisation follows the original algorithm presented by Bryant in 1986 and transforms an ordered BDD into a reduced, ordered and shared BDD. The verification is based on Hoare logics. notify = kleing@cse.unsw.edu.au, norbert.schirmer@web.de [BinarySearchTree] title = Binary Search Trees author = Viktor Kuncak date = 2004-04-05 topic = Computer science/Data structures abstract = The correctness of binary search tree operations (lookup, insert and remove) implementing a set is shown. Two versions are given, for both structured and linear (tactic-style) proofs. An implementation of integer-indexed maps is also verified. notify = lp15@cam.ac.uk [Splay_Tree] title = Splay Tree author = Tobias Nipkow notify = nipkow@in.tum.de date = 2014-08-12 topic = Computer science/Data structures abstract = Splay trees are self-adjusting binary search trees which were invented by Sleator and Tarjan [JACM 1985]. This entry provides executable and verified functional splay trees as well as the related splay heaps (due to Okasaki).
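
For readers unfamiliar with splay heaps, the following Haskell fragment sketches Okasaki-style splay-heap insertion, which restructures the heap by partitioning it around the new element (an illustration of the general idea, not the entry's Isabelle definitions):

    data Heap a = Empty | Node (Heap a) a (Heap a)

    -- split a heap into the elements <= pivot and the elements > pivot,
    -- restructuring along the search path in the splay-tree fashion
    partition :: Ord a => a -> Heap a -> (Heap a, Heap a)
    partition _ Empty = (Empty, Empty)
    partition pivot t@(Node l x r)
      | x <= pivot = case r of
          Empty -> (t, Empty)
          Node rl y rr
            | y <= pivot -> let (small, big) = partition pivot rr
                            in (Node (Node l x rl) y small, big)
            | otherwise  -> let (small, big) = partition pivot rl
                            in (Node l x small, Node big y rr)
      | otherwise = case l of
          Empty -> (Empty, t)
          Node ll y lr
            | y <= pivot -> let (small, big) = partition pivot lr
                            in (Node ll y small, Node big x r)
            | otherwise  -> let (small, big) = partition pivot ll
                            in (small, Node big y (Node lr x r))

    insert :: Ord a => a -> Heap a -> Heap a
    insert x h = let (small, big) = partition x h in Node small x big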

The amortized complexity of splay trees and heaps is analyzed in the AFP entry Amortized Complexity. extra-history = Change history: [2016-07-12]: Moved splay heaps here from Amortized_Complexity [Root_Balanced_Tree] title = Root-Balanced Tree author = Tobias Nipkow notify = nipkow@in.tum.de date = 2017-08-20 topic = Computer science/Data structures abstract =

Andersson introduced general balanced trees, search trees based on the design principle of partial rebuilding: perform update operations naively until the tree becomes too unbalanced, at which point a whole subtree is rebalanced. This article defines and analyzes a functional version of general balanced trees, which we call root-balanced trees. Using a lightweight model of execution time, amortized logarithmic complexity is verified in the theorem prover Isabelle.
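
The following Haskell fragment is a deliberately simplified sketch of the partial-rebuilding principle: it rebuilds the whole tree once it becomes too tall relative to its size, whereas the entry's root-balanced trees rebuild only an offending subtree, and the constant c below is an arbitrary illustrative choice.

    data Tree a = Leaf | Node (Tree a) a (Tree a)

    size :: Tree a -> Int
    size Leaf = 0
    size (Node l _ r) = size l + 1 + size r

    height :: Tree a -> Int
    height Leaf = 0
    height (Node l _ r) = 1 + max (height l) (height r)

    inorder :: Tree a -> [a]
    inorder Leaf = []
    inorder (Node l x r) = inorder l ++ [x] ++ inorder r

    -- rebuild a perfectly balanced tree from a sorted list
    balance :: [a] -> Tree a
    balance [] = Leaf
    balance xs = Node (balance ls) x (balance rs)
      where (ls, x:rs) = splitAt (length xs `div` 2) xs

    insertNaive :: Ord a => a -> Tree a -> Tree a
    insertNaive x Leaf = Node Leaf x Leaf
    insertNaive x t@(Node l y r)
      | x < y     = Node (insertNaive x l) y r
      | x > y     = Node l y (insertNaive x r)
      | otherwise = t

    -- naive insertion, followed by a global rebuild when the tree is "too unbalanced"
    insert :: Ord a => a -> Tree a -> Tree a
    insert x t
      | height t' > ceiling (c * logBase 2 (fromIntegral (size t' + 1))) = balance (inorder t')
      | otherwise = t'
      where
        t' = insertNaive x t
        c  = 1.5 :: Double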

This is the Isabelle formalization of the material described in the APLAS 2017 article Verified Root-Balanced Trees by the same author, which also presents experimental results showing that root-balanced trees are competitive with AVL and red-black trees.

[Skew_Heap] title = Skew Heap author = Tobias Nipkow date = 2014-08-13 topic = Computer science/Data structures abstract = Skew heaps are an amazingly simple and lightweight implementation of priority queues. They were invented by Sleator and Tarjan [SIAM 1986] and have logarithmic amortized complexity. This entry provides executable and verified functional skew heaps.
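
To illustrate just how lightweight skew heaps are, here is a Haskell sketch (not the entry's Isabelle code): every operation reduces to a merge that walks down the right spines and unconditionally swaps children.

    data SkewHeap a = Empty | Node (SkewHeap a) a (SkewHeap a)

    -- merge along the right spines, always swapping the children of the chosen root
    merge :: Ord a => SkewHeap a -> SkewHeap a -> SkewHeap a
    merge Empty h = h
    merge h Empty = h
    merge h1@(Node l1 x1 r1) h2@(Node l2 x2 r2)
      | x1 <= x2  = Node (merge h2 r1) x1 l1
      | otherwise = Node (merge h1 r2) x2 l2

    insert :: Ord a => a -> SkewHeap a -> SkewHeap a
    insert x = merge (Node Empty x Empty)

    deleteMin :: Ord a => SkewHeap a -> Maybe (a, SkewHeap a)
    deleteMin Empty        = Nothing
    deleteMin (Node l x r) = Just (x, merge l r)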

The amortized complexity of skew heaps is analyzed in the AFP entry Amortized Complexity. notify = nipkow@in.tum.de [Pairing_Heap] title = Pairing Heap author = Hauke Brinkop , Tobias Nipkow date = 2016-07-14 topic = Computer science/Data structures abstract = This library defines three different versions of pairing heaps: a functional version of the original design based on binary trees [Fredman et al. 1986], the version by Okasaki [1998] and a modified version of the latter that is free of structural invariants.
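
For orientation, a Haskell sketch of the Okasaki-style variant (illustrative only; the entry itself formalizes all three variants in Isabelle):

    data PHeap a = Empty | PHeap a [PHeap a]

    merge :: Ord a => PHeap a -> PHeap a -> PHeap a
    merge Empty h = h
    merge h Empty = h
    merge h1@(PHeap x hs1) h2@(PHeap y hs2)
      | x <= y    = PHeap x (h2 : hs1)
      | otherwise = PHeap y (h1 : hs2)

    insert :: Ord a => a -> PHeap a -> PHeap a
    insert x = merge (PHeap x [])

    -- delete the minimum by merging the children pairwise, then right to left
    deleteMin :: Ord a => PHeap a -> Maybe (a, PHeap a)
    deleteMin Empty        = Nothing
    deleteMin (PHeap x hs) = Just (x, mergePairs hs)
      where
        mergePairs []           = Empty
        mergePairs [h]          = h
        mergePairs (h1:h2:rest) = merge (merge h1 h2) (mergePairs rest)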

The amortized complexity of pairing heaps is analyzed in the AFP article Amortized Complexity. extra-0 = Origin: This library was extracted from Amortized Complexity and extended. notify = nipkow@in.tum.de [Priority_Queue_Braun] title = Priority Queues Based on Braun Trees author = Tobias Nipkow date = 2014-09-04 topic = Computer science/Data structures abstract = This entry verifies priority queues based on Braun trees. Insertion and deletion take logarithmic time and preserve the balanced nature of Braun trees. Two implementations of deletion are provided. notify = nipkow@in.tum.de extra-history = Change history: [2019-12-16]: Added theory Priority_Queue_Braun2 with second version of del_min [Binomial-Queues] title = Functional Binomial Queues author = René Neumann date = 2010-10-28 topic = Computer science/Data structures abstract = Priority queues are an important data structure and efficient implementations of them are crucial. We implement a functional variant of binomial queues in Isabelle/HOL and show its functional correctness. A verification against an abstract reference specification of priority queues has also been attempted, but could not be achieved to the full extent. notify = florian.haftmann@informatik.tu-muenchen.de [Binomial-Heaps] title = Binomial Heaps and Skew Binomial Heaps author = Rene Meis , Finn Nielsen , Peter Lammich date = 2010-10-28 topic = Computer science/Data structures abstract = We implement and prove correct binomial heaps and skew binomial heaps. Both are data-structures for priority queues. While binomial heaps have logarithmic findMin, deleteMin, insert, and meld operations, skew binomial heaps have constant time findMin, insert, and meld operations, and only the deleteMin-operation is logarithmic. This is achieved by using skew links to avoid cascading linking on insert-operations, and data-structural bootstrapping to get constant-time findMin and meld operations. Our implementation follows the paper by Brodal and Okasaki. notify = peter.lammich@uni-muenster.de [Finger-Trees] title = Finger Trees author = Benedikt Nordhoff , Stefan Körner , Peter Lammich date = 2010-10-28 topic = Computer science/Data structures abstract = We implement and prove correct 2-3 finger trees. Finger trees are a general purpose data structure, that can be used to efficiently implement other data structures, such as priority queues. Intuitively, a finger tree is an annotated sequence, where the annotations are elements of a monoid. Apart from operations to access the ends of the sequence, the main operation is to split the sequence at the point where a monotone predicate over the sum of the left part of the sequence becomes true for the first time. The implementation follows the paper of Hinze and Paterson. The code generator can be used to get efficient, verified code. notify = peter.lammich@uni-muenster.de [Trie] title = Trie author = Andreas Lochbihler , Tobias Nipkow date = 2015-03-30 topic = Computer science/Data structures abstract = This article formalizes the ``trie'' data structure invented by Fredkin [CACM 1960]. It also provides a specialization where the entries in the trie are lists. extra-0 = Origin: This article was extracted from existing articles by the authors. notify = nipkow@in.tum.de [FinFun] title = Code Generation for Functions as Data author = Andreas Lochbihler date = 2009-05-06 topic = Computer science/Data structures abstract = FinFuns are total functions that are constant except for a finite set of points, i.e. a generalisation of finite maps. 
They are formalised as a new type in Isabelle/HOL such that the code generator can handle equality tests and quantification on FinFuns. On the code output level, FinFuns are explicitly represented by constant functions and pointwise updates, similarly to associative lists. Inside the logic, they behave like ordinary functions with extensionality. Via the update/constant pattern, a recursion combinator and an induction rule for FinFuns allow for defining and reasoning about operators on FinFuns that are also executable. extra-history = Change history: [2010-08-13]: new concept domain of a FinFun as a FinFun (revision 34b3517cbc09)
[2010-11-04]: new conversion function from FinFun to list of elements in the domain (revision 0c167102e6ed)
[2012-03-07]: replace sets as FinFuns by predicates as FinFuns because the set type constructor has been reintroduced (revision b7aa87989f3a) notify = nipkow@in.tum.de [Collections] title = Collections Framework author = Peter Lammich contributors = Andreas Lochbihler , Thomas Tuerk <> date = 2009-11-25 topic = Computer science/Data structures abstract = This development provides an efficient, extensible, machine-checked collections framework. The library adopts the concepts of interface, implementation and generic algorithm from object-oriented programming and implements them in Isabelle/HOL. The framework features the use of data refinement techniques to refine an abstract specification (using high-level concepts like sets) to a more concrete implementation (using collection datastructures, like red-black-trees). The code-generator of Isabelle/HOL can be used to generate efficient code. extra-history = Change history: [2010-10-08]: New Interfaces: OrderedSet, OrderedMap, List. Fifo now implements list-interface: Function names changed: put/get --> enqueue/dequeue. New Implementations: ArrayList, ArrayHashMap, ArrayHashSet, TrieMap, TrieSet. Invariant-free datastructures: Invariant implicitly hidden in typedef. Record-interfaces: All operations of an interface encapsulated as record. Examples moved to examples subdirectory.
[2010-12-01]: New Interfaces: Priority Queues, Annotated Lists. Implemented by finger trees, (skew) binomial queues.
[2011-10-10]: SetSpec: Added operations: sng, isSng, bexists, size_abort, diff, filter, iterate_rule_insertP MapSpec: Added operations: sng, isSng, iterate_rule_insertP, bexists, size, size_abort, restrict, map_image_filter, map_value_image_filter Some maintenance changes
[2012-04-25]: New iterator foundation by Tuerk. Various maintenance changes.
[2012-08]: Collections V2. New features: Polymorphic iterators. Generic algorithm instantiation where required. Naming scheme changed from xx_opname to xx.opname. A compatibility file CollectionsV1 tries to simplify porting of existing theories, by providing the old naming scheme and the old monomorphic iterator locales.
[2013-09]: Added Generic Collection Framework based on Autoref. The GenCF provides: Arbitrary nesting, full integration with Autoref.
[2014-06]: Maintenance changes to GenCF: Optimized inj_image on list_set. op_set_cart (Cartesian product). big-Union operation. atLeastLessThan - operation ({a..<b})
notify = lammich@in.tum.de [Containers] title = Light-weight Containers author = Andreas Lochbihler contributors = René Thiemann date = 2013-04-15 topic = Computer science/Data structures abstract = This development provides a framework for container types like sets and maps such that generated code implements these containers with different (efficient) data structures. Thanks to type classes and refinement during code generation, this light-weight approach can seamlessly replace Isabelle's default setup for code generation. Heuristics automatically pick one of the available data structures depending on the type of elements to be stored, but users can also choose on their own. The extensible design permits adding more implementations at any time.

To support arbitrary nesting of sets, we define a linear order on sets based on a linear order of the elements and provide efficient implementations. It even allows comparing complements with non-complements. extra-history = Change history: [2013-07-11]: add pretty printing for sets (revision 7f3f52c5f5fa)
[2013-09-20]: provide generators for canonical type class instantiations (revision 159f4401f4a8 by René Thiemann)
[2014-07-08]: add support for going from partial functions to mappings (revision 7a6fc957e8ed)
[2018-03-05]: add two application examples: depth-first search and 2SAT (revision e5e1a1da2411) notify = mail@andreas-lochbihler.de [FileRefinement] title = File Refinement author = Karen Zee , Viktor Kuncak date = 2004-12-09 topic = Computer science/Data structures abstract = These theories illustrates the verification of basic file operations (file creation, file read and file write) in the Isabelle theorem prover. We describe a file at two levels of abstraction: an abstract file represented as a resizable array, and a concrete file represented using data blocks. notify = kkz@mit.edu [Datatype_Order_Generator] title = Generating linear orders for datatypes author = René Thiemann date = 2012-08-07 topic = Computer science/Data structures abstract = We provide a framework for registering automatic methods to derive class instances of datatypes, as it is possible using Haskell's ``deriving Ord, Show, ...'' feature.

We further implemented such automatic methods to derive (linear) orders or hash-functions which are required in the Isabelle Collection Framework. Moreover, for the tactic of Huffman and Krauss to show that a datatype is countable, we implemented a wrapper so that this tactic becomes accessible in our framework.
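
For comparison, the Haskell feature that is being mimicked looks as follows (a toy example, not taken from this entry): a single deriving clause makes the compiler generate structural equality and the lexicographic order for a datatype.

    data Expr = Var String | App Expr Expr
      deriving (Eq, Ord, Show)

    -- the derived Ord instance corresponds to the hand-written comparator
    --   compare (Var x)   (Var y)   = compare x y
    --   compare (Var _)   (App _ _) = LT
    --   compare (App _ _) (Var _)   = GT
    --   compare (App f a) (App g b) = compare f g <> compare a b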

Our formalization was performed as part of the IsaFoR/CeTA project. With our new tactic we could completely remove tedious proofs for linear orders of two datatypes.

This development is aimed at datatypes generated by the "old_datatype" command. notify = rene.thiemann@uibk.ac.at [Deriving] title = Deriving class instances for datatypes author = Christian Sternagel , René Thiemann date = 2015-03-11 topic = Computer science/Data structures abstract =

We provide a framework for registering automatic methods to derive class instances of datatypes, as it is possible using Haskell's ``deriving Ord, Show, ...'' feature.

We further implemented such automatic methods to derive comparators, linear orders, parametrizable equality functions, and hash-functions which are required in the Isabelle Collection Framework and the Container Framework. Moreover, for the tactic of Blanchette to show that a datatype is countable, we implemented a wrapper so that this tactic becomes accessible in our framework. All of the generators are based on the infrastructure that is provided by the BNF-based datatype package.

Our formalization was performed as part of the IsaFoR/CeTA project. With our new tactics we could remove several tedious proofs for (conditional) linear orders, and conditional equality operators within IsaFoR and the Container Framework.

notify = rene.thiemann@uibk.ac.at [List-Index] title = List Index date = 2010-02-20 author = Tobias Nipkow topic = Computer science/Data structures abstract = This theory provides functions for finding the index of an element in a list, by predicate and by value. notify = nipkow@in.tum.de [List-Infinite] title = Infinite Lists date = 2011-02-23 author = David Trachtenherz <> topic = Computer science/Data structures abstract = We introduce a theory of infinite lists in HOL formalized as functions over naturals (folder ListInf, theories ListInf and ListInf_Prefix). It also provides additional results for finite lists (theory ListInf/List2), natural numbers (folder CommonArith, esp. division/modulo, naturals with infinity), sets (folder CommonSet, esp. cutting/truncating sets, traversing sets of naturals). notify = nipkow@in.tum.de [Matrix] title = Executable Matrix Operations on Matrices of Arbitrary Dimensions topic = Computer science/Data structures date = 2010-06-17 author = Christian Sternagel , René Thiemann license = LGPL abstract = We provide the operations of matrix addition, multiplication, transposition, and matrix comparisons as executable functions over ordered semirings. Moreover, it is proven that strongly normalizing (monotone) orders can be lifted to strongly normalizing (monotone) orders over matrices. We further show that the standard semirings over the naturals, integers, and rationals, as well as the arctic semirings satisfy the axioms that are required by our matrix theory. Our formalization is part of the CeTA system which contains several termination techniques. The provided theories have been essential to formalize matrix-interpretations and arctic interpretations. extra-history = Change history: [2010-09-17]: Moved theory on arbitrary (ordered) semirings to Abstract Rewriting. notify = rene.thiemann@uibk.ac.at, christian.sternagel@uibk.ac.at [Matrix_Tensor] title = Tensor Product of Matrices topic = Computer science/Data structures, Mathematics/Algebra date = 2016-01-18 author = T.V.H. Prathamesh abstract = In this work, the Kronecker tensor product of matrices and the proofs of some of its properties are formalized. Properties which have been formalized include associativity of the tensor product and the mixed-product property. notify = prathamesh@imsc.res.in [Huffman] title = The Textbook Proof of Huffman's Algorithm author = Jasmin Christian Blanchette date = 2008-10-15 topic = Computer science/Data structures abstract = Huffman's algorithm is a procedure for constructing a binary tree with minimum weighted path length. This report presents a formal proof of the correctness of Huffman's algorithm written using Isabelle/HOL. Our proof closely follows the sketches found in standard algorithms textbooks, uncovering a few snags in the process. Another distinguishing feature of our formalization is the use of custom induction rules to help Isabelle's automatic tactics, leading to very short proofs for most of the lemmas. notify = jasmin.blanchette@gmail.com [Partial_Function_MR] title = Mutually Recursive Partial Functions author = René Thiemann topic = Computer science/Functional programming date = 2014-02-18 license = LGPL abstract = We provide a wrapper around the partial-function command that supports mutual recursion. 
notify = rene.thiemann@uibk.ac.at [Lifting_Definition_Option] title = Lifting Definition Option author = René Thiemann topic = Computer science/Functional programming date = 2014-10-13 license = LGPL abstract = We implemented a command that can be used to easily generate elements of a restricted type {x :: 'a. P x}, provided the definition is of the form f ys = (if check ys then Some(generate ys :: 'a) else None) where ys is a list of variables y1 ... yn and check ys ==> P(generate ys) can be proved.
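
A rough Haskell analogue of this definition shape (purely illustrative; the names Sorted, mkSorted, check and generate are made up here): the restricted type becomes an abstract wrapper and f becomes a smart constructor returning Maybe.

    newtype Sorted = Sorted [Int]      -- intended invariant: the wrapped list is sorted

    mkSorted :: [Int] -> Maybe Sorted
    mkSorted ys = if check ys then Just (Sorted (generate ys)) else Nothing
      where
        check xs = and (zipWith (<=) xs (drop 1 xs))  -- the test, performed once
        generate = id                                 -- generation is trivial in this toy example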

In principle, such a definition is also directly possible using the lift_definition command. However, such a definition is then not suitable for code generation. To this end, we automated a more complex construction due to Joachim Breitner, which is amenable to code generation and where the test check ys is performed only once. In the automation, one auxiliary type is created, and Isabelle's lifting and transfer packages are invoked several times. notify = rene.thiemann@uibk.ac.at [Coinductive] title = Coinductive topic = Computer science/Functional programming author = Andreas Lochbihler contributors = Johannes Hölzl date = 2010-02-12 abstract = This article collects formalisations of general-purpose coinductive data types and sets. Currently, it contains coinductive natural numbers, coinductive lists, i.e. lazy lists or streams, infinite streams, coinductive terminated lists, coinductive resumptions, a library of operations on coinductive lists, and a version of König's lemma as an application for coinductive lists.
The initial theory was contributed by Paulson and Wenzel. Extensions and other coinductive formalisations of general interest are welcome. extra-history = Change history: [2010-06-10]: coinductive lists: setup for quotient package (revision 015574f3bf3c)
[2010-06-28]: new codatatype terminated lazy lists (revision e12de475c558)
[2010-08-04]: terminated lazy lists: setup for quotient package; more lemmas (revision 6ead626f1d01)
[2010-08-17]: Koenig's lemma as an example application for coinductive lists (revision f81ce373fa96)
[2011-02-01]: lazy implementation of coinductive (terminated) lists for the code generator (revision 6034973dce83)
[2011-07-20]: new codatatype resumption (revision 811364c776c7)
[2012-06-27]: new codatatype stream with operations (with contributions by Peter Gammie) (revision dd789a56473c)
[2013-03-13]: construct codatatypes with the BNF package and adjust the definitions and proofs, setup for lifting and transfer packages (revision f593eda5b2c0)
[2013-09-20]: stream theory uses type and operations from HOL/BNF/Examples/Stream (revision 692809b2b262)
[2014-04-03]: ccpo structure on codatatypes used to define ldrop, ldropWhile, lfilter, lconcat as least fixpoint; ccpo topology on coinductive lists contributed by Johannes Hölzl; added examples (revision 23cd8156bd42)
notify = mail@andreas-lochbihler.de [Stream-Fusion] title = Stream Fusion author = Brian Huffman topic = Computer science/Functional programming date = 2009-04-29 abstract = Stream Fusion is a system for removing intermediate list structures from Haskell programs; it consists of a Haskell library along with several compiler rewrite rules. (The library is available online.)

These theories contain a formalization of much of the Stream Fusion library in HOLCF. Lazy list and stream types are defined, along with coercions between the two types, as well as an equivalence relation for streams that generate the same list. List and stream versions of map, filter, foldr, enumFromTo, append, zipWith, and concatMap are defined, and the stream versions are shown to respect stream equivalence. notify = brianh@cs.pdx.edu [Tycon] title = Type Constructor Classes and Monad Transformers author = Brian Huffman date = 2012-06-26 topic = Computer science/Functional programming abstract = These theories contain a formalization of first class type constructors and axiomatic constructor classes for HOLCF. This work is described in detail in the ICFP 2012 paper Formal Verification of Monad Transformers by the author. The formalization is a revised and updated version of earlier joint work with Matthews and White.
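
To recall the flavour of the Haskell library being modelled, here is a condensed sketch of the standard stream-fusion representation (general background, not the HOLCF development itself):

    {-# LANGUAGE ExistentialQuantification #-}

    data Step s a = Done | Skip s | Yield a s
    data Stream a = forall s. Stream (s -> Step s a) s

    stream :: [a] -> Stream a                 -- coercion from lists to streams
    stream xs0 = Stream next xs0
      where next []     = Done
            next (x:xs) = Yield x xs

    unstream :: Stream a -> [a]               -- coercion back; fusion removes stream/unstream pairs
    unstream (Stream next s0) = go s0
      where go s = case next s of
                     Done       -> []
                     Skip s'    -> go s'
                     Yield x s' -> x : go s'

    mapS :: (a -> b) -> Stream a -> Stream b
    mapS f (Stream next s0) = Stream next' s0
      where next' s = case next s of
                        Done       -> Done
                        Skip s'    -> Skip s'
                        Yield x s' -> Yield (f x) s'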

Based on the hierarchy of type classes in Haskell, we define classes for functors, monads, monad-plus, etc. Each one includes all the standard laws as axioms. We also provide a new user command, tycondef, for defining new type constructors in HOLCF. Using tycondef, we instantiate the type class hierarchy with various monads and monad transformers. notify = huffman@in.tum.de [CoreC++] title = CoreC++ author = Daniel Wasserrab date = 2006-05-15 topic = Computer science/Programming languages/Language definitions abstract = We present an operational semantics and type safety proof for multiple inheritance in C++. The semantics models the behavior of method calls, field accesses, and two forms of casts in C++ class hierarchies. For explanations see the OOPSLA 2006 paper by Wasserrab, Nipkow, Snelting and Tip. notify = nipkow@in.tum.de [FeatherweightJava] title = A Theory of Featherweight Java in Isabelle/HOL author = J. Nathan Foster , Dimitrios Vytiniotis date = 2006-03-31 topic = Computer science/Programming languages/Language definitions abstract = We formalize the type system, small-step operational semantics, and type soundness proof for Featherweight Java, a simple object calculus, in Isabelle/HOL. notify = kleing@cse.unsw.edu.au [Jinja] title = Jinja is not Java author = Gerwin Klein , Tobias Nipkow date = 2005-06-01 topic = Computer science/Programming languages/Language definitions abstract = We introduce Jinja, a Java-like programming language with a formal semantics designed to exhibit core features of the Java language architecture. Jinja is a compromise between realism of the language and tractability and clarity of the formal semantics. The following aspects are formalised: a big and a small step operational semantics for Jinja and a proof of their equivalence; a type system and a definite initialisation analysis; a type safety proof of the small step semantics; a virtual machine (JVM), its operational semantics and its type system; a type safety proof for the JVM; a bytecode verifier, i.e. data flow analyser for the JVM; a correctness proof of the bytecode verifier w.r.t. the type system; a compiler and a proof that it preserves semantics and well-typedness. The emphasis of this work is not on particular language features but on providing a unified model of the source language, the virtual machine and the compiler. The whole development has been carried out in the theorem prover Isabelle/HOL. notify = kleing@cse.unsw.edu.au, nipkow@in.tum.de [JinjaThreads] title = Jinja with Threads author = Andreas Lochbihler date = 2007-12-03 topic = Computer science/Programming languages/Language definitions abstract = We extend the Jinja source code semantics by Klein and Nipkow with Java-style arrays and threads. Concurrency is captured in a generic framework semantics for adding concurrency through interleaving to a sequential semantics, which features dynamic thread creation, inter-thread communication via shared memory, lock synchronisation and joins. Also, threads can suspend themselves and be notified by others. We instantiate the framework with the adapted versions of both Jinja source and byte code and show type safety for the multithreaded case. Equally, the compiler from source to byte code is extended, for which we prove weak bisimilarity between the source code small step semantics and the defensive Jinja virtual machine. On top of this, we formalise the JMM and show the DRF guarantee and consistency. 
For a description of the different parts, see Lochbihler's papers at FOOL 2008, ESOP 2010, ITP 2011, and ESOP 2012. extra-history = Change history: [2008-04-23]: added bytecode formalisation with arrays and threads, added thread joins (revision f74a8be156a7)
[2009-04-27]: added verified compiler from source code to bytecode; encapsulate native methods in separate semantics (revision e4f26541e58a)
[2009-11-30]: extended compiler correctness proof to infinite and deadlocking computations (revision e50282397435)
[2010-06-08]: added thread interruption; new abstract memory model with sequential consistency as implementation (revision 0cb9e8dbd78d)
[2010-06-28]: new thread interruption model (revision c0440d0a1177)
[2010-10-15]: preliminary version of the Java memory model for source code (revision 02fee0ef3ca2)
[2010-12-16]: improved version of the Java memory model, also for bytecode executable scheduler for source code semantics (revision 1f41c1842f5a)
[2011-02-02]: simplified code generator setup new random scheduler (revision 3059dafd013f)
[2011-07-21]: new interruption model, generalized JMM proof of DRF guarantee, allow class Object to declare methods and fields, simplified subtyping relation, corrected division and modulo implementation (revision 46e4181ed142)
[2012-02-16]: added example programs (revision bf0b06c8913d)
[2012-11-21]: type safety proof for the Java memory model, allow spurious wake-ups (revision 76063d860ae0)
[2013-05-16]: support for non-deterministic memory allocators (revision cc3344a49ced)
[2017-10-20]: add an atomic compare-and-swap operation for volatile fields (revision a6189b1d6b30)
notify = mail@andreas-lochbihler.de [Locally-Nameless-Sigma] title = Locally Nameless Sigma Calculus author = Ludovic Henrio , Florian Kammüller , Bianca Lutz , Henry Sudhof date = 2010-04-30 topic = Computer science/Programming languages/Language definitions abstract = We present a Theory of Objects based on the original functional sigma-calculus by Abadi and Cardelli but with an additional parameter to methods. We prove confluence of the operational semantics following the outline of Nipkow's proof of confluence for the lambda-calculus reusing his theory Commutation, a generic diamond lemma reduction. We furthermore formalize a simple type system for our sigma-calculus including a proof of type safety. The entire development uses the concept of Locally Nameless representation for binders. We reuse an earlier proof of confluence for a simpler sigma-calculus based on de Bruijn indices and lists to represent objects. notify = nipkow@in.tum.de [Attack_Trees] title = Attack Trees in Isabelle for GDPR compliance of IoT healthcare systems author = Florian Kammueller topic = Computer science/Security date = 2020-04-27 notify = florian.kammuller@gmail.com abstract = In this article, we present a proof theory for Attack Trees. Attack Trees are a well established and useful model for the construction of attacks on systems since they allow a stepwise exploration of high level attacks in application scenarios. Using the expressiveness of Higher Order Logic in Isabelle, we develop a generic theory of Attack Trees with a state-based semantics based on Kripke structures and CTL. The resulting framework allows mechanically supported logic analysis of the meta-theory of the proof calculus of Attack Trees and at the same time the developed proof theory enables application to case studies. A central correctness and completeness result proved in Isabelle establishes a connection between the notion of Attack Tree validity and CTL. The application is illustrated on the example of a healthcare IoT system and GDPR compliance verification. [AutoFocus-Stream] title = AutoFocus Stream Processing for Single-Clocking and Multi-Clocking Semantics author = David Trachtenherz <> date = 2011-02-23 topic = Computer science/Programming languages/Language definitions abstract = We formalize the AutoFocus Semantics (a time-synchronous subset of the Focus formalism) as stream processing functions on finite and infinite message streams represented as finite/infinite lists. The formalization comprises both the conventional single-clocking semantics (uniform global clock for all components and communications channels) and its extension to multi-clocking semantics (internal execution clocking of a component may be a multiple of the external communication clocking). The semantics is defined by generic stream processing functions making it suitable for simulation/code generation in Isabelle/HOL. Furthermore, a number of AutoFocus semantics properties are formalized using definitions from the IntervalLogic theories. notify = nipkow@in.tum.de [FocusStreamsCaseStudies] title = Stream Processing Components: Isabelle/HOL Formalisation and Case Studies author = Maria Spichkova date = 2013-11-14 topic = Computer science/Programming languages/Language definitions abstract = This set of theories presents an Isabelle/HOL formalisation of stream processing components introduced in Focus, a framework for formal specification and development of interactive systems. 
This is an extended and updated version of the formalisation, which was elaborated within the methodology "Focus on Isabelle". In addition, we applied the formalisation to three case studies that cover different application areas: process control (Steam Boiler System), data transmission (FlexRay communication protocol), and memory and processing components (Automotive-Gateway System). notify = lp15@cam.ac.uk, maria.spichkova@rmit.edu.au [Isabelle_Meta_Model] title = A Meta-Model for the Isabelle API author = Frédéric Tuong , Burkhart Wolff date = 2015-09-16 topic = Computer science/Programming languages/Language definitions abstract = We represent a theory of (a fragment of) Isabelle/HOL in Isabelle/HOL. The purpose of this exercise is to write packages for domain-specific specifications such as class models, B-machines, ..., and generally speaking, any domain-specific language whose abstract syntax can be defined by a HOL "datatype". On this basis, the Isabelle code-generator can then be used to generate code for global context transformations as well as tactic code.

Consequently the package is geared towards parsing, printing and code-generation to the Isabelle API. It is at the moment not sufficiently rich for doing meta theory on Isabelle itself. Extensions in this direction are possible though.

Moreover, the chosen fragment is fairly rudimentary. However it should be easily adapted to one's needs if a package is written on top of it. The supported API contains types, terms, transformation of global context like definitions and data-type declarations as well as infrastructure for Isar-setups.

This theory is drawn from the Featherweight OCL project where it is used to construct a package for object-oriented data-type theories generated from UML class diagrams. The Featherweight OCL, for example, allows for both the direct execution of compiled tactic code by the Isabelle API as well as the generation of ".thy"-files for debugging purposes.

Gained experience from this project shows that the compiled code is sufficiently efficient for practical purposes while being based on a formal model on which properties of the package can be proven such as termination of certain transformations, correctness, etc. notify = tuong@users.gforge.inria.fr, wolff@lri.fr [Clean] title = Clean - An Abstract Imperative Programming Language and its Theory author = Frédéric Tuong , Burkhart Wolff topic = Computer science/Programming languages, Computer science/Semantics date = 2019-10-04 notify = wolff@lri.fr, ftuong@lri.fr abstract = Clean is based on a simple, abstract execution model for an imperative target language. “Abstract” is understood in contrast to “Concrete Semantics”; alternatively, the term “shallow-style embedding” could be used. It strives for a type-safe notion of program-variables, an incremental construction of the typed state-space, support of incremental verification, and open-world extensibility of new type definitions being intertwined with the program definitions. Clean is based on a “no-frills” state-exception monad with the usual definitions of bind and unit for the compositional glue of state-based computations. Clean offers conditionals and loops supporting C-like control-flow operators such as break and return. The state-space construction is based on the extensible record package. Direct recursion of procedures is supported. Clean’s design strives for extreme simplicity. It is geared towards symbolic execution and proven correct verification tools. The underlying libraries of this package, however, deliberately restrict themselves to the most elementary infrastructure for these tasks. The package is intended to serve as demonstrator semantic backend for Isabelle/C, or for the test-generation techniques. [PCF] title = Logical Relations for PCF author = Peter Gammie date = 2012-07-01 topic = Computer science/Programming languages/Lambda calculi abstract = We apply Andy Pitts's methods of defining relations over domains to several classical results in the literature. We show that the Y combinator coincides with the domain-theoretic fixpoint operator, that parallel-or and the Plotkin existential are not definable in PCF, that the continuation semantics for PCF coincides with the direct semantics, and that our domain-theoretic semantics for PCF is adequate for reasoning about contextual equivalence in an operational semantics. Our version of PCF is untyped and has both strict and non-strict function abstractions. The development is carried out in HOLCF. notify = peteg42@gmail.com [POPLmark-deBruijn] title = POPLmark Challenge Via de Bruijn Indices author = Stefan Berghofer date = 2007-08-02 topic = Computer science/Programming languages/Lambda calculi abstract = We present a solution to the POPLmark challenge designed by Aydemir et al., which has as a goal the formalization of the meta-theory of System F<:. The formalization is carried out in the theorem prover Isabelle/HOL using an encoding based on de Bruijn indices. We start with a relatively simple formalization covering only the basic features of System F<:, and explain how it can be extended to also cover records and more advanced binding constructs. notify = berghofe@in.tum.de [Lam-ml-Normalization] title = Strong Normalization of Moggis's Computational Metalanguage author = Christian Doczkal date = 2010-08-29 topic = Computer science/Programming languages/Lambda calculi abstract = Handling variable binding is one of the main difficulties in formal proofs. 
In this context, Moggi's computational metalanguage serves as an interesting case study. It features monadic types and a commuting conversion rule that rearranges the binding structure. Lindley and Stark have given an elegant proof of strong normalization for this calculus. The key construction in their proof is a notion of relational TT-lifting, using stacks of elimination contexts to obtain a Girard-Tait style logical relation. I give a formalization of their proof in Isabelle/HOL-Nominal with a particular emphasis on the treatment of bound variables. notify = doczkal@ps.uni-saarland.de, nipkow@in.tum.de [MiniML] title = Mini ML author = Wolfgang Naraschewski <>, Tobias Nipkow date = 2004-03-19 topic = Computer science/Programming languages/Type systems abstract = This theory defines the type inference rules and the type inference algorithm W for MiniML (simply-typed lambda terms with let) due to Milner. It proves the soundness and completeness of W w.r.t. the rules. notify = kleing@cse.unsw.edu.au [Simpl] title = A Sequential Imperative Programming Language Syntax, Semantics, Hoare Logics and Verification Environment author = Norbert Schirmer <> date = 2008-02-29 topic = Computer science/Programming languages/Language definitions, Computer science/Programming languages/Logics license = LGPL abstract = We present the theory of Simpl, a sequential imperative programming language. We introduce its syntax, its semantics (big and small-step operational semantics) and Hoare logics for both partial as well as total correctness. We prove soundness and completeness of the Hoare logic. We integrate and automate the Hoare logic in Isabelle/HOL to obtain a practically usable verification environment for imperative programs. Simpl is independent of a concrete programming language but expressive enough to cover all common language features: mutually recursive procedures, abrupt termination and exceptions, runtime faults, local and global variables, pointers and heap, expressions with side effects, pointers to procedures, partial application and closures, dynamic method invocation and also unbounded nondeterminism. notify = kleing@cse.unsw.edu.au, norbert.schirmer@web.de [Separation_Algebra] title = Separation Algebra author = Gerwin Klein , Rafal Kolanski , Andrew Boyton date = 2012-05-11 topic = Computer science/Programming languages/Logics license = BSD abstract = We present a generic type class implementation of separation algebra for Isabelle/HOL as well as lemmas and generic tactics which can be used directly for any instantiation of the type class.

The ex directory contains example instantiations that include structures such as a heap or virtual memory.
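
To give a feel for such an instantiation, here is a Haskell rendering of the idea with made-up names (the entry's type class lives in Isabelle/HOL, where the algebraic laws are stated as class axioms):

    import qualified Data.Map as Map

    -- a separation algebra: a partial commutative monoid given by a unit,
    -- a disjointness test and a combination operation on disjoint elements
    class SepAlgebra a where
      emp   :: a
      (#)   :: a -> a -> Bool   -- disjointness
      (<+>) :: a -> a -> a      -- combination, meaningful only on disjoint arguments

    -- heaps as finite maps from addresses to values
    newtype Heap = Heap (Map.Map Integer Integer)

    instance SepAlgebra Heap where
      emp                 = Heap Map.empty
      Heap h1 #   Heap h2 = Map.null (Map.intersection h1 h2)
      Heap h1 <+> Heap h2 = Heap (Map.union h1 h2)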

The abstract separation algebra is based upon "Abstract Separation Logic" by Calcagno et al. These theories are also the basis of the ITP 2012 rough diamond "Mechanised Separation Algebra" by the authors.

The aim of this work is to support and significantly reduce the effort for future separation logic developments in Isabelle/HOL by factoring out the part of separation logic that can be treated abstractly once and for all. This includes developing typical default rule sets for reasoning as well as automated tactic support for separation logic. notify = kleing@cse.unsw.edu.au, rafal.kolanski@nicta.com.au [Separation_Logic_Imperative_HOL] title = A Separation Logic Framework for Imperative HOL author = Peter Lammich , Rene Meis date = 2012-11-14 topic = Computer science/Programming languages/Logics license = BSD abstract = We provide a framework for separation-logic-based correctness proofs of Imperative HOL programs. Our framework comes with a set of proof methods to automate canonical tasks such as verification condition generation and frame inference. Moreover, we provide a set of examples that show the applicability of our framework. The examples include algorithms on lists, hash-tables, and union-find trees. We also provide abstract interfaces for lists, maps, and sets that allow developing generic imperative algorithms and using data-refinement techniques.
As we target Imperative HOL, our programs can be translated to efficiently executable code in various target languages, including ML, OCaml, Haskell, and Scala. notify = lammich@in.tum.de [Inductive_Confidentiality] title = Inductive Study of Confidentiality author = Giampaolo Bella date = 2012-05-02 topic = Computer science/Security abstract = This document contains the full theory files accompanying article Inductive Study of Confidentiality --- for Everyone in Formal Aspects of Computing. They aim at an illustrative and didactic presentation of the Inductive Method of protocol analysis, focusing on the treatment of one of the main goals of security protocols: confidentiality against a threat model. The treatment of confidentiality, which in fact forms a key aspect of all protocol analysis tools, has been found cryptic by many learners of the Inductive Method, hence the motivation for this work. The theory files in this document guide the reader step by step towards design and proof of significant confidentiality theorems. These are developed against two threat models, the standard Dolev-Yao and a more audacious one, the General Attacker, which turns out to be particularly useful also for teaching purposes. notify = giamp@dmi.unict.it [Possibilistic_Noninterference] title = Possibilistic Noninterference author = Andrei Popescu , Johannes Hölzl date = 2012-09-10 topic = Computer science/Security, Computer science/Programming languages/Type systems abstract = We formalize a wide variety of Volpano/Smith-style noninterference notions for a while language with parallel composition. We systematize and classify these notions according to compositionality w.r.t. the language constructs. Compositionality yields sound syntactic criteria (a.k.a. type systems) in a uniform way.

An article about these proofs is published in the proceedings of the conference Certified Programs and Proofs 2012. notify = hoelzl@in.tum.de [SIFUM_Type_Systems] title = A Formalization of Assumptions and Guarantees for Compositional Noninterference author = Sylvia Grewe , Heiko Mantel , Daniel Schoepe date = 2014-04-23 topic = Computer science/Security, Computer science/Programming languages/Type systems abstract = Research in information-flow security aims at developing methods to identify undesired information leaks within programs from private (high) sources to public (low) sinks. For a concurrent system, it is desirable to have compositional analysis methods that allow for analyzing each thread independently and that nevertheless guarantee that the parallel composition of successfully analyzed threads satisfies a global security guarantee. However, such a compositional analysis should not be overly pessimistic about what an environment might do with shared resources. Otherwise, the analysis will reject many intuitively secure programs.

The paper "Assumptions and Guarantees for Compositional Noninterference" by Mantel et. al. presents one solution for this problem: an approach for compositionally reasoning about non-interference in concurrent programs via rely-guarantee-style reasoning. We present an Isabelle/HOL formalization of the concepts and proofs of this approach. notify = grewe@cs.tu-darmstadt.de [Dependent_SIFUM_Type_Systems] title = A Dependent Security Type System for Concurrent Imperative Programs author = Toby Murray , Robert Sison<>, Edward Pierzchalski<>, Christine Rizkallah notify = toby.murray@unimelb.edu.au date = 2016-06-25 topic = Computer science/Security, Computer science/Programming languages/Type systems abstract = The paper "Compositional Verification and Refinement of Concurrent Value-Dependent Noninterference" by Murray et. al. (CSF 2016) presents a dependent security type system for compositionally verifying a value-dependent noninterference property, defined in (Murray, PLAS 2015), for concurrent programs. This development formalises that security definition, the type system and its soundness proof, and demonstrates its application on some small examples. It was derived from the SIFUM_Type_Systems AFP entry, by Sylvia Grewe, Heiko Mantel and Daniel Schoepe, and whose structure it inherits. extra-history = Change history: [2016-08-19]: Removed unused "stop" parameter and "stop_no_eval" assumption from the sifum_security locale. (revision dbc482d36372) [2016-09-27]: Added security locale support for the imposition of requirements on the initial memory. (revision cce4ceb74ddb) [Dependent_SIFUM_Refinement] title = Compositional Security-Preserving Refinement for Concurrent Imperative Programs author = Toby Murray , Robert Sison<>, Edward Pierzchalski<>, Christine Rizkallah notify = toby.murray@unimelb.edu.au date = 2016-06-28 topic = Computer science/Security abstract = The paper "Compositional Verification and Refinement of Concurrent Value-Dependent Noninterference" by Murray et. al. (CSF 2016) presents a compositional theory of refinement for a value-dependent noninterference property, defined in (Murray, PLAS 2015), for concurrent programs. This development formalises that refinement theory, and demonstrates its application on some small examples. extra-history = Change history: [2016-08-19]: Removed unused "stop" parameters from the sifum_refinement locale. (revision dbc482d36372) [2016-09-02]: TobyM extended "simple" refinement theory to be usable for all bisimulations. (revision 547f31c25f60) [Relational-Incorrectness-Logic] title = An Under-Approximate Relational Logic author = Toby Murray topic = Computer science/Programming languages/Logics, Computer science/Security date = 2020-03-12 notify = toby.murray@unimelb.edu.au abstract = Recently, authors have proposed under-approximate logics for reasoning about programs. So far, all such logics have been confined to reasoning about individual program behaviours. Yet there exist many over-approximate relational logics for reasoning about pairs of programs and relating their behaviours. We present the first under-approximate relational logic, for the simple imperative language IMP. We prove our logic is both sound and complete. Additionally, we show how reasoning in this logic can be decomposed into non-relational reasoning in an under-approximate Hoare logic, mirroring Beringer’s result for over-approximate relational logics. 
We illustrate the application of our logic on some small examples in which we provably demonstrate the presence of insecurity. [Strong_Security] title = A Formalization of Strong Security author = Sylvia Grewe , Alexander Lux , Heiko Mantel , Jens Sauer date = 2014-04-23 topic = Computer science/Security, Computer science/Programming languages/Type systems abstract = Research in information-flow security aims at developing methods to identify undesired information leaks within programs from private sources to public sinks. Noninterference captures this intuition. Strong security from Sabelfeld and Sands formalizes noninterference for concurrent systems.

We present an Isabelle/HOL formalization of strong security for arbitrary security lattices (Sabelfeld and Sands use a two-element security lattice in the original publication). The formalization includes compositionality proofs for strong security and a soundness proof for a security type system that checks strong security for programs in a simple while language with dynamic thread creation.

Our formalization of the security type system is abstract in the language for expressions and in the semantic side conditions for expressions. It can easily be instantiated with different syntactic approximations for these side conditions. The soundness proof of such an instantiation boils down to showing that these syntactic approximations imply the semantic side conditions. notify = grewe@cs.tu-darmstadt.de [WHATandWHERE_Security] title = A Formalization of Declassification with WHAT-and-WHERE-Security author = Sylvia Grewe , Alexander Lux , Heiko Mantel , Jens Sauer date = 2014-04-23 topic = Computer science/Security, Computer science/Programming languages/Type systems abstract = Research in information-flow security aims at developing methods to identify undesired information leaks within programs from private sources to public sinks. Noninterference captures this intuition by requiring that no information whatsoever flows from private sources to public sinks. However, in practice this definition is often too strict: Depending on the intuitive desired security policy, the controlled declassification of certain private information (WHAT) at certain points in the program (WHERE) might not result in an undesired information leak.

We present an Isabelle/HOL formalization of such a security property for controlled declassification, namely WHAT&WHERE-security from "Scheduler-Independent Declassification" by Lux, Mantel, and Perner. The formalization includes compositionality proofs for WHAT&WHERE-security and a soundness proof for a security type system that checks WHAT&WHERE-security for programs in a simple while language with dynamic thread creation.

Our formalization of the security type system is abstract in the language for expressions and in the semantic side conditions for expressions. It can easily be instantiated with different syntactic approximations for these side conditions. The soundness proof of such an instantiation boils down to showing that these syntactic approximations imply the semantic side conditions.

This Isabelle/HOL formalization uses theories from the entry Strong Security. notify = grewe@cs.tu-darmstadt.de [VolpanoSmith] title = A Correctness Proof for the Volpano/Smith Security Typing System author = Gregor Snelting , Daniel Wasserrab date = 2008-09-02 topic = Computer science/Programming languages/Type systems, Computer science/Security abstract = The Volpano/Smith/Irvine security type system requires that variables are annotated as high (secret) or low (public), and provides typing rules which guarantee that secret values cannot leak to public output ports. This property of a program is called confidentiality. For a simple while-language without threads, our proof shows that typeability in the Volpano/Smith system guarantees noninterference. Noninterference means that if two initial states for program execution are low-equivalent, then the final states are low-equivalent as well. This indeed implies that secret values cannot leak to public ports. The proof defines an abstract syntax and operational semantics for programs, formalizes noninterference, and then proceeds by rule induction on the operational semantics. The mathematically most intricate part is the treatment of implicit flows. Note that the Volpano/Smith system is not flow-sensitive and thus quite imprecise, resulting in false alarms. However, due to the correctness property, all potential breaks of confidentiality are discovered. notify = [Abstract-Hoare-Logics] title = Abstract Hoare Logics author = Tobias Nipkow date = 2006-08-08 topic = Computer science/Programming languages/Logics abstract = These theories describe Hoare logics for a number of imperative language constructs, from while-loops to mutually recursive procedures. Both partial and total correctness are treated. In particular, a proof system for total correctness of recursive procedures in the presence of unbounded nondeterminism is presented. notify = nipkow@in.tum.de [Stone_Algebras] title = Stone Algebras author = Walter Guttmann notify = walter.guttmann@canterbury.ac.nz date = 2016-09-06 topic = Mathematics/Order abstract = A range of algebras between lattices and Boolean algebras generalise the notion of a complement. We develop a hierarchy of these pseudo-complemented algebras that includes Stone algebras. Independently of this theory we study filters based on partial orders. Both theories are combined to prove Chen and Grätzer's construction theorem for Stone algebras. The latter involves extensive reasoning about algebraic structures in addition to reasoning in algebraic structures. [Kleene_Algebra] title = Kleene Algebra author = Alasdair Armstrong <>, Georg Struth , Tjark Weber date = 2013-01-15 topic = Computer science/Programming languages/Logics, Computer science/Automata and formal languages, Mathematics/Algebra abstract = These files contain a formalisation of variants of Kleene algebras and their most important models as axiomatic type classes in Isabelle/HOL. Kleene algebras are foundational structures in computing with applications ranging from automata and language theory to computational modeling, program construction and verification.

We start with formalising dioids, which are additively idempotent semirings, and expand them by axiomatisations of the Kleene star for finite iteration and an omega operation for infinite iteration. We show that powersets over a given monoid, (regular) languages, sets of paths in a graph, sets of computation traces, binary relations and formal power series form Kleene algebras, and consider further models based on lattices, max-plus semirings and min-plus semirings. We also demonstrate that dioids are closed under the formation of matrices (proofs for Kleene algebras remain to be completed).
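
For orientation, the finite and infinite iteration operators mentioned above are typically axiomatised by unfold and (co)induction laws of roughly the following shape (a standard textbook presentation; the precise axioms chosen in this entry may differ in detail):
\[ 1 + x \cdot x^{*} \le x^{*} \qquad z + x \cdot y \le y \;\Longrightarrow\; x^{*} \cdot z \le y \]
\[ x^{\omega} \le x \cdot x^{\omega} \qquad y \le x \cdot y + z \;\Longrightarrow\; y \le x^{\omega} + x^{*} \cdot z \]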

On the one hand we have aimed at a reference formalisation of Kleene algebras that covers a wide range of variants and the core theorems in a structured and modular way and provides readable proofs at textbook level. On the other hand, we intend to use this algebraic hierarchy and its models as a generic algebraic middle-layer from which programming applications can quickly be explored, implemented and verified. notify = g.struth@sheffield.ac.uk, tjark.weber@it.uu.se [KAT_and_DRA] title = Kleene Algebra with Tests and Demonic Refinement Algebras author = Alasdair Armstrong <>, Victor B. F. Gomes , Georg Struth date = 2014-01-23 topic = Computer science/Programming languages/Logics, Computer science/Automata and formal languages, Mathematics/Algebra abstract = We formalise Kleene algebra with tests (KAT) and demonic refinement algebra (DRA) in Isabelle/HOL. KAT is relevant for program verification and correctness proofs in the partial correctness setting, while DRA targets similar applications in the context of total correctness. Our formalisation contains the two most important models of these algebras: binary relations in the case of KAT and predicate transformers in the case of DRA. In addition, we derive the inference rules for Hoare logic in KAT and its relational model and present a simple formally verified program verification tool prototype based on the algebraic approach. notify = g.struth@dcs.shef.ac.uk [KAD] title = Kleene Algebras with Domain author = Victor B. F. Gomes , Walter Guttmann , Peter Höfner , Georg Struth , Tjark Weber date = 2016-04-12 topic = Computer science/Programming languages/Logics, Computer science/Automata and formal languages, Mathematics/Algebra abstract = Kleene algebras with domain are Kleene algebras endowed with an operation that maps each element of the algebra to its domain of definition (or its complement) in abstract fashion. They form a simple algebraic basis for Hoare logics, dynamic logics or predicate transformer semantics. We formalise a modular hierarchy of algebras with domain and antidomain (domain complement) operations in Isabelle/HOL that ranges from domain and antidomain semigroups to modal Kleene algebras and divergence Kleene algebras. We link these algebras with models of binary relations and program traces. We include some examples from modal logics, termination and program analysis. notify = walter.guttman@canterbury.ac.nz, g.struth@sheffield.ac.uk, tjark.weber@it.uu.se [Regular_Algebras] title = Regular Algebras author = Simon Foster , Georg Struth date = 2014-05-21 topic = Computer science/Automata and formal languages, Mathematics/Algebra abstract = Regular algebras axiomatise the equational theory of regular expressions as induced by regular language identity. We use Isabelle/HOL for a detailed systematic study of regular algebras given by Boffa, Conway, Kozen and Salomaa. We investigate the relationships between these classes, formalise a soundness proof for the smallest class (Salomaa's) and obtain completeness of the largest one (Boffa's) relative to a deep result by Krob. In addition we provide a large collection of regular identities in the general setting of Boffa's axiom. Our regular algebra hierarchy is orthogonal to the Kleene algebra hierarchy in the Archive of Formal Proofs; we have not aimed at an integration for pragmatic reasons.
notify = simon.foster@york.ac.uk, g.struth@sheffield.ac.uk [BytecodeLogicJmlTypes] title = A Bytecode Logic for JML and Types author = Lennart Beringer <>, Martin Hofmann date = 2008-12-12 topic = Computer science/Programming languages/Logics abstract = This document contains the Isabelle/HOL sources underlying the paper A bytecode logic for JML and types by Beringer and Hofmann, updated to Isabelle 2008. We present a program logic for a subset of sequential Java bytecode that is suitable for representing both, features found in high-level specification language JML as well as interpretations of high-level type systems. To this end, we introduce a fine-grained collection of assertions, including strong invariants, local annotations and VDM-reminiscent partial-correctness specifications. Thanks to a goal-oriented structure and interpretation of judgements, verification may proceed without recourse to an additional control flow analysis. The suitability for interpreting intensional type systems is illustrated by the proof-carrying-code style encoding of a type system for a first-order functional language which guarantees a constant upper bound on the number of objects allocated throughout an execution, be the execution terminating or non-terminating. Like the published paper, the formal development is restricted to a comparatively small subset of the JVML, lacking (among other features) exceptions, arrays, virtual methods, and static fields. This shortcoming has been overcome meanwhile, as our paper has formed the basis of the Mobius base logic, a program logic for the full sequential fragment of the JVML. Indeed, the present formalisation formed the basis of a subsequent formalisation of the Mobius base logic in the proof assistant Coq, which includes a proof of soundness with respect to the Bicolano operational semantics by Pichardie. notify = [DataRefinementIBP] title = Semantics and Data Refinement of Invariant Based Programs author = Viorel Preoteasa , Ralph-Johan Back date = 2010-05-28 topic = Computer science/Programming languages/Logics abstract = The invariant based programming is a technique of constructing correct programs by first identifying the basic situations (pre- and post-conditions and invariants) that can occur during the execution of the program, and then defining the transitions and proving that they preserve the invariants. Data refinement is a technique of building correct programs working on concrete datatypes as refinements of more abstract programs. In the theories presented here we formalize the predicate transformer semantics for invariant based programs and their data refinement. extra-history = Change history: [2012-01-05]: Moved some general complete lattice properties to the AFP entry Lattice Properties. Changed the definition of the data refinement relation to be more general and updated all corresponding theorems. Added new syntax for demonic and angelic update statements. notify = viorel.preoteasa@aalto.fi [RefinementReactive] title = Formalization of Refinement Calculus for Reactive Systems author = Viorel Preoteasa date = 2014-10-08 topic = Computer science/Programming languages/Logics abstract = We present a formalization of refinement calculus for reactive systems. Refinement calculus is based on monotonic predicate transformers (monotonic functions from sets of post-states to sets of pre-states), and it is a powerful formalism for reasoning about imperative programs. 
We model reactive systems as monotonic property transformers that transform sets of output infinite sequences into sets of input infinite sequences. Within this semantics we can model refinement of reactive systems, (unbounded) angelic and demonic nondeterminism, sequential composition, and other semantic properties. We can model systems that may fail for some inputs, and we can model compatibility of systems. We can specify systems that have liveness properties using linear temporal logic, and we can refine system specifications into systems based on symbolic transitions systems, suitable for implementations. notify = viorel.preoteasa@aalto.fi [SIFPL] title = Secure information flow and program logics author = Lennart Beringer <>, Martin Hofmann date = 2008-11-10 topic = Computer science/Programming languages/Logics, Computer science/Security abstract = We present interpretations of type systems for secure information flow in Hoare logic, complementing previous encodings in relational program logics. We first treat the imperative language IMP, extended by a simple procedure call mechanism. For this language we consider base-line non-interference in the style of Volpano et al. and the flow-sensitive type system by Hunt and Sands. In both cases, we show how typing derivations may be used to automatically generate proofs in the program logic that certify the absence of illicit flows. We then add instructions for object creation and manipulation, and derive appropriate proof rules for base-line non-interference. As a consequence of our work, standard verification technology may be used for verifying that a concrete program satisfies the non-interference property.

The present proof development represents an update of the formalisation underlying our paper [CSF 2007] and is intended to resolve any ambiguities that may be present in the paper. notify = lennart.beringer@ifi.lmu.de [TLA] title = A Definitional Encoding of TLA* in Isabelle/HOL author = Gudmund Grov , Stephan Merz date = 2011-11-19 topic = Computer science/Programming languages/Logics abstract = We mechanise the logic TLA* [Merz 1999], an extension of Lamport's Temporal Logic of Actions (TLA) [Lamport 1994] for specifying and reasoning about concurrent and reactive systems. Aiming at a framework for mechanising the verification of TLA (or TLA*) specifications, this contribution reuses some elements from a previous axiomatic encoding of TLA in Isabelle/HOL by the second author [Merz 1998], which has been part of the Isabelle distribution. In contrast to that previous work, we give here a shallow, definitional embedding, with the following highlights:

  • a theory of infinite sequences, including a formalisation of the concepts of stuttering invariance central to TLA and TLA*;
  • a definition of the semantics of TLA*, which extends TLA by a mutually-recursive definition of formulas and pre-formulas, generalising TLA action formulas;
  • a substantial set of derived proof rules, including the TLA* axioms and Lamport's proof rules for system verification;
  • a set of examples illustrating the usage of Isabelle/TLA* for reasoning about systems.
Note that this work is unrelated to the ongoing development of a proof system for the specification language TLA+, which includes an encoding of TLA+ as a new Isabelle object logic [Chaudhuri et al 2010]. notify = ggrov@inf.ed.ac.uk [Compiling-Exceptions-Correctly] title = Compiling Exceptions Correctly author = Tobias Nipkow date = 2004-07-09 topic = Computer science/Programming languages/Compiling abstract = An exception compilation scheme that dynamically creates and removes exception handler entries on the stack. A formalization of an article of the same name by Hutton and Wright. notify = nipkow@in.tum.de [NormByEval] title = Normalization by Evaluation author = Klaus Aehlig , Tobias Nipkow date = 2008-02-18 topic = Computer science/Programming languages/Compiling abstract = This article formalizes normalization by evaluation as implemented in Isabelle. Lambda calculus plus term rewriting is compiled into a functional program with pattern matching. It is proved that the result of a successful evaluation is a) correct, i.e. equivalent to the input, and b) in normal form. notify = nipkow@in.tum.de [Program-Conflict-Analysis] title = Formalization of Conflict Analysis of Programs with Procedures, Thread Creation, and Monitors topic = Computer science/Programming languages/Static analysis author = Peter Lammich , Markus Müller-Olm date = 2007-12-14 abstract = In this work we formally verify the soundness and precision of a static program analysis that detects conflicts (e. g. data races) in programs with procedures, thread creation and monitors with the Isabelle theorem prover. As common in static program analysis, our program model abstracts guarded branching by nondeterministic branching, but completely interprets the call-/return behavior of procedures, synchronization by monitors, and thread creation. The analysis is based on the observation that all conflicts already occur in a class of particularly restricted schedules. These restricted schedules are suited to constraint-system-based program analysis. The formalization is based upon a flowgraph-based program model with an operational semantics as reference point. notify = peter.lammich@uni-muenster.de [Shivers-CFA] title = Shivers' Control Flow Analysis topic = Computer science/Programming languages/Static analysis author = Joachim Breitner date = 2010-11-16 abstract = In his dissertation, Olin Shivers introduces a concept of control flow graphs for functional languages, provides an algorithm to statically derive a safe approximation of the control flow graph and proves this algorithm correct. In this research project, Shivers' algorithms and proofs are formalized in the HOLCF extension of HOL. notify = mail@joachim-breitner.de, nipkow@in.tum.de [Slicing] title = Towards Certified Slicing author = Daniel Wasserrab date = 2008-09-16 topic = Computer science/Programming languages/Static analysis abstract = Slicing is a widely-used technique with applications in e.g. compiler technology and software security. Thus verification of algorithms in these areas is often based on the correctness of slicing, which should ideally be proven independent of concrete programming languages and with the help of well-known verifying techniques such as proof assistants. As a first step in this direction, this contribution presents a framework for dynamic and static intraprocedural slicing based on control flow and program dependence graphs. 
Abstracting from concrete syntax we base the framework on a graph representation of the program fulfilling certain structural and well-formedness properties.

The formalization consists of the basic framework (in subdirectory Basic/), the correctness proof for dynamic slicing (in subdirectory Dynamic/), the correctness proof for static intraprocedural slicing (in subdirectory StaticIntra/) and instantiations of the framework with a simple While language (in subdirectory While/) and the sophisticated object-oriented bytecode language of Jinja (in subdirectory JinjaVM/). For more information on the framework, see the TPHOLS 2008 paper by Wasserrab and Lochbihler and the PLAS 2009 paper by Wasserrab et al. notify = [HRB-Slicing] title = Backing up Slicing: Verifying the Interprocedural Two-Phase Horwitz-Reps-Binkley Slicer author = Daniel Wasserrab date = 2009-11-13 topic = Computer science/Programming languages/Static analysis abstract = After verifying dynamic and static interprocedural slicing, we present a modular framework for static interprocedural slicing. To this end, we formalized the standard two-phase slicer from Horwitz, Reps and Binkley (see their TOPLAS 12(1) 1990 paper) together with summary edges as presented by Reps et al. (see FSE 1994). The framework is again modular in the programming language by using an abstract CFG, defined via structural and well-formedness properties. Using a weak simulation between the original and sliced graph, we were able to prove the correctness of static interprocedural slicing. We also instantiate our framework with a simple While language with procedures. This shows that the chosen abstractions are indeed valid. notify = nipkow@in.tum.de [WorkerWrapper] title = The Worker/Wrapper Transformation author = Peter Gammie date = 2009-10-30 topic = Computer science/Programming languages/Transformations abstract = Gill and Hutton formalise the worker/wrapper transformation, building on the work of Launchbury and Peyton-Jones who developed it as a way of changing the type at which a recursive function operates. This development establishes the soundness of the technique and several examples of its use. notify = peteg42@gmail.com, nipkow@in.tum.de [JiveDataStoreModel] title = Jive Data and Store Model author = Nicole Rauch , Norbert Schirmer <> date = 2005-06-20 license = LGPL topic = Computer science/Programming languages/Misc abstract = This document presents the formalization of an object-oriented data and store model in Isabelle/HOL. This model is being used in the Java Interactive Verification Environment, Jive. notify = kleing@cse.unsw.edu.au, schirmer@in.tum.de [HotelKeyCards] title = Hotel Key Card System author = Tobias Nipkow date = 2006-09-09 topic = Computer science/Security abstract = Two models of an electronic hotel key card system are contrasted: a state based and a trace based one. Both are defined, verified, and proved equivalent in the theorem prover Isabelle/HOL. It is shown that if a guest follows a certain safety policy regarding her key cards, she can be sure that nobody but her can enter her room. notify = nipkow@in.tum.de [RSAPSS] title = SHA1, RSA, PSS and more author = Christina Lindenberg <>, Kai Wirt <> date = 2005-05-02 topic = Computer science/Security/Cryptography abstract = Formal verification is getting more and more important in computer science. However the state of the art formal verification methods in cryptography are very rudimentary. These theories are one step to provide a tool box allowing the use of formal methods in every aspect of cryptography. Moreover we present a proof of concept for the feasibility of verification techniques to a standard signature algorithm. 
notify = nipkow@in.tum.de [InformationFlowSlicing] title = Information Flow Noninterference via Slicing author = Daniel Wasserrab date = 2010-03-23 topic = Computer science/Security abstract =

In this contribution, we show how correctness proofs for intra- and interprocedural slicing can be used to prove that slicing is able to guarantee information flow noninterference. Moreover, we also illustrate how to lift the control flow graphs of the respective frameworks such that they fulfil the additional assumptions needed in the noninterference proofs. A detailed description of the intraprocedural proof and its interplay with the slicing framework can be found in the PLAS'09 paper by Wasserrab et al.

This entry contains the part for intra-procedural slicing. See entry InformationFlowSlicing_Inter for the inter-procedural part.

extra-history = Change history: [2016-06-10]: The original entry InformationFlowSlicing, which contained both the inter- and intra-procedural case, was split into two for easier maintenance. notify = [InformationFlowSlicing_Inter] title = Inter-Procedural Information Flow Noninterference via Slicing author = Daniel Wasserrab date = 2010-03-23 topic = Computer science/Security abstract =

In this contribution, we show how correctness proofs for intra- and interprocedural slicing can be used to prove that slicing is able to guarantee information flow noninterference. Moreover, we also illustrate how to lift the control flow graphs of the respective frameworks such that they fulfil the additional assumptions needed in the noninterference proofs. A detailed description of the intraprocedural proof and its interplay with the slicing framework can be found in the PLAS'09 paper by Wasserrab et al.

This entry contains the part for inter-procedural slicing. See entry InformationFlowSlicing for the intra-procedural part.

extra-history = Change history: [2016-06-10]: The original entry InformationFlowSlicing, which contained both the inter- and intra-procedural case, was split into two for easier maintenance. notify = [ComponentDependencies] title = Formalisation and Analysis of Component Dependencies author = Maria Spichkova date = 2014-04-28 topic = Computer science/System description languages abstract = This set of theories presents a formalisation in Isabelle/HOL of data dependencies between components. The approach allows one to analyse the system structure with a view to efficient checking of the system: for a concrete system, it aims at elaborating which parts of the system are necessary to check a given property. notify = maria.spichkova@rmit.edu.au [Verified-Prover] title = A Mechanically Verified, Efficient, Sound and Complete Theorem Prover For First Order Logic author = Tom Ridge <> date = 2004-09-28 topic = Logic/General logic/Mechanization of proofs abstract = Soundness and completeness for a system of first order logic are formally proved, building on James Margetson's formalization of work by Wainer and Wallen. The completeness proofs naturally suggest an algorithm to derive proofs. This algorithm, which can be implemented tail recursively, is formalized in Isabelle/HOL. The algorithm can be executed via the rewriting tactics of Isabelle. Alternatively, the definitions can be exported to OCaml, yielding a directly executable program. notify = lp15@cam.ac.uk [Completeness] title = Completeness theorem author = James Margetson <>, Tom Ridge <> date = 2004-09-20 topic = Logic/Proof theory abstract = The completeness of first-order logic is proved, following the first five pages of Wainer and Wallen's chapter of the book Proof Theory by Aczel et al., CUP, 1992. Their presentation of formulas allows the proofs to use symmetry arguments. Margetson formalized this theorem by early 2000. The Isar conversion is thanks to Tom Ridge. A paper describing the formalization is available [pdf]. notify = lp15@cam.ac.uk [Ordinal] title = Countable Ordinals author = Brian Huffman date = 2005-11-11 topic = Logic/Set theory abstract = This development defines a well-ordered type of countable ordinals. It includes notions of continuous and normal functions, recursively defined functions over ordinals, least fixed-points, and derivatives. Much of ordinal arithmetic is formalized, including exponentials and logarithms. The development concludes with formalizations of Cantor Normal Form and Veblen hierarchies over normal functions. notify = lcp@cl.cam.ac.uk [Ordinals_and_Cardinals] title = Ordinals and Cardinals author = Andrei Popescu <> date = 2009-09-01 topic = Logic/Set theory abstract = We develop a basic theory of ordinals and cardinals in Isabelle/HOL, up to the point where some cardinality facts relevant for the ``working mathematician" become available. Unlike in set theory, here we do not have at hand canonical notions of ordinal and cardinal. Therefore, here an ordinal is merely a well-order relation and a cardinal is an ordinal that is minimal w.r.t. order embedding on its field. extra-history = Change history: [2012-09-25]: This entry has been discontinued because it is now part of the Isabelle distribution.
notify = uuomul@yahoo.com, nipkow@in.tum.de [FOL-Fitting] title = First-Order Logic According to Fitting author = Stefan Berghofer contributors = Asta Halkjær From date = 2007-08-02 topic = Logic/General logic/Classical first-order logic abstract = We present a formalization of parts of Melvin Fitting's book "First-Order Logic and Automated Theorem Proving". The formalization covers the syntax of first-order logic, its semantics, the model existence theorem, a natural deduction proof calculus together with a proof of correctness and completeness, as well as the Löwenheim-Skolem theorem. extra-history = Change history: [2018-07-21]: Proved completeness theorem for open formulas. Proofs are now written in the declarative style. Enumeration of pairs and datatypes is automated using the Countable theory. notify = berghofe@in.tum.de [Epistemic_Logic] title = Epistemic Logic author = Asta Halkjær From topic = Logic/General logic/Logics of knowledge and belief date = 2018-10-29 notify = ahfrom@dtu.dk abstract = This work is a formalization of epistemic logic with countably many agents. It includes proofs of soundness and completeness for the axiom system K. The completeness proof is based on the textbook "Reasoning About Knowledge" by Fagin, Halpern, Moses and Vardi (MIT Press 1995). [SequentInvertibility] title = Invertibility in Sequent Calculi author = Peter Chapman <> date = 2009-08-28 topic = Logic/Proof theory license = LGPL abstract = The invertibility of the rules of a sequent calculus is important for guiding proof search and can be used in some formalised proofs of Cut admissibility. We present sufficient conditions for when a rule is invertible with respect to a calculus. We illustrate the conditions with examples. It must be noted we give purely syntactic criteria; no guarantees are given as to the suitability of the rules. notify = pc@cs.st-andrews.ac.uk, nipkow@in.tum.de [LinearQuantifierElim] title = Quantifier Elimination for Linear Arithmetic author = Tobias Nipkow date = 2008-01-11 topic = Logic/General logic/Decidability of theories abstract = This article formalizes quantifier elimination procedures for dense linear orders, linear real arithmetic and Presburger arithmetic. In each case both a DNF-based non-elementary algorithm and one or more (doubly) exponential NNF-based algorithms are formalized, including the well-known algorithms by Ferrante and Rackoff and by Cooper. The NNF-based algorithms for dense linear orders are new but based on Ferrante and Rackoff and on an algorithm by Loos and Weisspfenning which simulates infenitesimals. All algorithms are directly executable. In particular, they yield reflective quantifier elimination procedures for HOL itself. The formalization makes heavy use of locales and is therefore highly modular. notify = nipkow@in.tum.de [Nat-Interval-Logic] title = Interval Temporal Logic on Natural Numbers author = David Trachtenherz <> date = 2011-02-23 topic = Logic/General logic/Temporal logic abstract = We introduce a theory of temporal logic operators using sets of natural numbers as time domain, formalized in a shallow embedding manner. 
The theory comprises special natural intervals (theory IL_Interval: open and closed intervals, continuous and modulo intervals, interval traversing results), operators for shifting intervals to left/right on the number axis as well as expanding/contracting intervals by constant factors (theory IL_IntervalOperators.thy), and ultimately definitions and results for unary and binary temporal operators on arbitrary natural sets (theory IL_TemporalOperators). notify = nipkow@in.tum.de [Recursion-Theory-I] title = Recursion Theory I author = Michael Nedzelsky <> date = 2008-04-05 topic = Logic/Computability abstract = This document presents the formalization of introductory material from recursion theory --- definitions and basic properties of primitive recursive functions, Cantor pairing function and computably enumerable sets (including a proof of existence of a one-complete computably enumerable set and a proof of the Rice's theorem). notify = MichaelNedzelsky@yandex.ru [Free-Boolean-Algebra] topic = Logic/General logic/Classical propositional logic title = Free Boolean Algebra author = Brian Huffman date = 2010-03-29 abstract = This theory defines a type constructor representing the free Boolean algebra over a set of generators. Values of type (α)formula represent propositional formulas with uninterpreted variables from type α, ordered by implication. In addition to all the standard Boolean algebra operations, the library also provides a function for building homomorphisms to any other Boolean algebra type. notify = brianh@cs.pdx.edu [Sort_Encodings] title = Sound and Complete Sort Encodings for First-Order Logic author = Jasmin Christian Blanchette , Andrei Popescu date = 2013-06-27 topic = Logic/General logic/Mechanization of proofs abstract = This is a formalization of the soundness and completeness properties for various efficient encodings of sorts in unsorted first-order logic used by Isabelle's Sledgehammer tool.

Essentially, the encodings proceed as follows: a many-sorted problem is decorated with (as few as possible) tags or guards that make the problem monotonic; then sorts can be soundly erased.
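
As a schematic illustration (an example constructed here for exposition, not taken verbatim from the entry), a sorted clause such as \(\forall x{:}\sigma.\; p(x)\) can be decorated and then erased in two ways:
\[ \forall x.\; \neg g_{\sigma}(x) \lor p(x) \quad\text{(guard-based encoding)} \qquad\qquad \forall x.\; p(t_{\sigma}(x)) \quad\text{(tag-based encoding)} \]
Here \(g_{\sigma}\) is a fresh guard predicate and \(t_{\sigma}\) a fresh tag function for the sort \(\sigma\); once the problem has been made monotonic by such decorations, the sorts themselves can be dropped soundly.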

The development employs a formalization of many-sorted first-order logic in clausal form (clauses, structures and the basic properties of the satisfaction relation), which could be of interest as the starting point for other formalizations of first-order logic metatheory. notify = uuomul@yahoo.com [Lambda_Free_RPOs] title = Formalization of Recursive Path Orders for Lambda-Free Higher-Order Terms author = Jasmin Christian Blanchette , Uwe Waldmann , Daniel Wand date = 2016-09-23 topic = Logic/Rewriting abstract = This Isabelle/HOL formalization defines recursive path orders (RPOs) for higher-order terms without lambda-abstraction and proves many useful properties about them. The main order fully coincides with the standard RPO on first-order terms also in the presence of currying, distinguishing it from previous work. An optimized variant is formalized as well. It appears promising as the basis of a higher-order superposition calculus. notify = jasmin.blanchette@gmail.com [Lambda_Free_KBOs] title = Formalization of Knuth–Bendix Orders for Lambda-Free Higher-Order Terms author = Heiko Becker , Jasmin Christian Blanchette , Uwe Waldmann , Daniel Wand date = 2016-11-12 topic = Logic/Rewriting abstract = This Isabelle/HOL formalization defines Knuth–Bendix orders for higher-order terms without lambda-abstraction and proves many useful properties about them. The main order fully coincides with the standard transfinite KBO with subterm coefficients on first-order terms. It appears promising as the basis of a higher-order superposition calculus. notify = jasmin.blanchette@gmail.com [Lambda_Free_EPO] title = Formalization of the Embedding Path Order for Lambda-Free Higher-Order Terms author = Alexander Bentkamp topic = Logic/Rewriting date = 2018-10-19 notify = a.bentkamp@vu.nl abstract = This Isabelle/HOL formalization defines the Embedding Path Order (EPO) for higher-order terms without lambda-abstraction and proves many useful properties about it. In contrast to the lambda-free recursive path orders, it does not fully coincide with RPO on first-order terms, but it is compatible with arbitrary higher-order contexts. [Nested_Multisets_Ordinals] title = Formalization of Nested Multisets, Hereditary Multisets, and Syntactic Ordinals author = Jasmin Christian Blanchette , Mathias Fleury , Dmitriy Traytel date = 2016-11-12 topic = Logic/Rewriting abstract = This Isabelle/HOL formalization introduces a nested multiset datatype and defines Dershowitz and Manna's nested multiset order. The order is proved well founded and linear. By removing one constructor, we transform the nested multisets into hereditary multisets. These are isomorphic to the syntactic ordinals—the ordinals can be recursively expressed in Cantor normal form. Addition, subtraction, multiplication, and linear orders are provided on this type. notify = jasmin.blanchette@gmail.com [Abstract-Rewriting] title = Abstract Rewriting topic = Logic/Rewriting date = 2010-06-14 author = Christian Sternagel , René Thiemann license = LGPL abstract = We present an Isabelle formalization of abstract rewriting (see, e.g., the book by Baader and Nipkow). First, we define standard relations like joinability, meetability, conversion, etc. Then, we formalize important properties of abstract rewrite systems, e.g., confluence and strong normalization. Our main concern is on strong normalization, since this formalization is the basis of CeTA (which is mainly about strong normalization of term rewrite systems). 
Hence lemmas involving strong normalization constitute by far the biggest part of this theory. One of those is Newman's lemma. extra-history = Change history: [2010-09-17]: Added theories defining several (ordered) semirings related to strong normalization and giving some standard instances.
[2013-10-16]: Generalized delta-orders from rationals to Archimedean fields. notify = christian.sternagel@uibk.ac.at, rene.thiemann@uibk.ac.at [First_Order_Terms] title = First-Order Terms author = Christian Sternagel , René Thiemann topic = Logic/Rewriting, Computer science/Algorithms license = LGPL date = 2018-02-06 notify = c.sternagel@gmail.com, rene.thiemann@uibk.ac.at abstract = We formalize basic results on first-order terms, including matching and a first-order unification algorithm, as well as well-foundedness of the subsumption order. This entry is part of the Isabelle Formalization of Rewriting IsaFoR, where first-order terms are omni-present: the unification algorithm is used to certify several confluence and termination techniques, like critical-pair computation and dependency graph approximations; and the subsumption order is a crucial ingredient for completion. [Free-Groups] title = Free Groups author = Joachim Breitner date = 2010-06-24 topic = Mathematics/Algebra abstract = Free Groups are, in a sense, the most generic kind of group. They are defined over a set of generators with no additional relations in between them. They play an important role in the definition of group presentations and in other fields. This theory provides the definition of Free Group as the set of fully canceled words in the generators. The universal property is proven, as well as some isomorphisms results about Free Groups. extra-history = Change history: [2011-12-11]: Added the Ping Pong Lemma. notify = [CofGroups] title = An Example of a Cofinitary Group in Isabelle/HOL author = Bart Kastermans date = 2009-08-04 topic = Mathematics/Algebra abstract = We formalize the usual proof that the group generated by the function k -> k + 1 on the integers gives rise to a cofinitary group. notify = nipkow@in.tum.de [Group-Ring-Module] title = Groups, Rings and Modules author = Hidetsune Kobayashi <>, L. Chen <>, H. Murao <> date = 2004-05-18 topic = Mathematics/Algebra abstract = The theory of groups, rings and modules is developed to a great depth. Group theory results include Zassenhaus's theorem and the Jordan-Hoelder theorem. The ring theory development includes ideals, quotient rings and the Chinese remainder theorem. The module development includes the Nakayama lemma, exact sequences and Tensor products. notify = lp15@cam.ac.uk [Robbins-Conjecture] title = A Complete Proof of the Robbins Conjecture author = Matthew Wampler-Doty <> date = 2010-05-22 topic = Mathematics/Algebra abstract = This document gives a formalization of the proof of the Robbins conjecture, following A. Mann, A Complete Proof of the Robbins Conjecture, 2003. notify = nipkow@in.tum.de [Valuation] title = Fundamental Properties of Valuation Theory and Hensel's Lemma author = Hidetsune Kobayashi <> date = 2007-08-08 topic = Mathematics/Algebra abstract = Convergence with respect to a valuation is discussed as convergence of a Cauchy sequence. Cauchy sequences of polynomials are defined. They are used to formalize Hensel's lemma. notify = lp15@cam.ac.uk [Rank_Nullity_Theorem] title = Rank-Nullity Theorem in Linear Algebra author = Jose Divasón , Jesús Aransay topic = Mathematics/Algebra date = 2013-01-16 abstract = In this contribution, we present some formalizations based on the HOL-Multivariate-Analysis session of Isabelle. Firstly, a generalization of several theorems of such library are presented. Secondly, some definitions and proofs involving Linear Algebra and the four fundamental subspaces of a matrix are shown. 
Finally, we present a proof of the result known in Linear Algebra as the ``Rank-Nullity Theorem'', which states that, given any linear map f from a finite dimensional vector space V to a vector space W, the dimension of V is equal to the sum of the dimension of the kernel of f (which is a subspace of V) and the dimension of the range of f (which is a subspace of W). The proof presented here is based on the one given by Sheldon Axler in his book Linear Algebra Done Right. As a corollary of the previous theorem, and taking advantage of the relationship between linear maps and matrices, we prove that, for every matrix A (which has an associated linear map between finite dimensional vector spaces), the sum of the dimensions of its null space and its column space (the latter being equal to the range of the linear map) is equal to the number of columns of A. extra-history = Change history: [2014-07-14]: Added some generalizations that allow us to formalize the Rank-Nullity Theorem over finite dimensional vector spaces, instead of over the more particular euclidean spaces. Updated abstract. notify = jose.divasonm@unirioja.es, jesus-maria.aransay@unirioja.es [Affine_Arithmetic] title = Affine Arithmetic author = Fabian Immler date = 2014-02-07 topic = Mathematics/Analysis abstract = We give a formalization of affine forms as abstract representations of zonotopes. We provide affine operations as well as overapproximations of some non-affine operations like multiplication and division. Expressions involving those operations can automatically be turned into (executable) functions approximating the original expression in affine arithmetic. extra-history = Change history: [2015-01-31]: added algorithm for zonotope/hyperplane intersection
[2017-09-20]: linear approximations for all symbols from the floatarith data type notify = immler@in.tum.de [Laplace_Transform] title = Laplace Transform author = Fabian Immler topic = Mathematics/Analysis date = 2019-08-14 notify = fimmler@cs.cmu.edu abstract = This entry formalizes the Laplace transform and concrete Laplace transforms for arithmetic functions, frequency shift, integration and (higher) differentiation in the time domain. It proves Lerch's lemma and uniqueness of the Laplace transform for continuous functions. In order to formalize the foundational assumptions, this entry contains a formalization of piecewise continuous functions and functions of exponential order. [Cauchy] title = Cauchy's Mean Theorem and the Cauchy-Schwarz Inequality author = Benjamin Porter <> date = 2006-03-14 topic = Mathematics/Analysis abstract = This document presents the mechanised proofs of two popular theorems attributed to Augustin Louis Cauchy - Cauchy's Mean Theorem and the Cauchy-Schwarz Inequality. notify = kleing@cse.unsw.edu.au [Integration] title = Integration theory and random variables author = Stefan Richter date = 2004-11-19 topic = Mathematics/Analysis abstract = Lebesgue-style integration plays a major role in advanced probability. We formalize concepts of elementary measure theory, real-valued random variables as Borel-measurable functions, and a stepwise inductive definition of the integral itself. All proofs are carried out in human readable style using the Isar language. extra-note = Note: This article is of historical interest only. Lebesgue-style integration and probability theory are now available as part of the Isabelle/HOL distribution (directory Probability). notify = richter@informatik.rwth-aachen.de, nipkow@in.tum.de, hoelzl@in.tum.de [Ordinary_Differential_Equations] title = Ordinary Differential Equations author = Fabian Immler , Johannes Hölzl topic = Mathematics/Analysis date = 2012-04-26 abstract =

Session Ordinary-Differential-Equations formalizes ordinary differential equations (ODEs) and initial value problems. This work comprises proofs for local and global existence of unique solutions (Picard-Lindelöf theorem). Moreover, it contains a formalization of the (continuous or even differentiable) dependency of the flow on initial conditions as the flow of ODEs.
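
In conventional notation (given here for orientation; it does not reflect the Isabelle syntax of the entry), the initial value problems treated are of the form
\[ \dot{x}(t) = f(t, x(t)), \qquad x(t_0) = x_0 , \]
where the Picard-Lindelöf theorem guarantees a unique solution provided f is continuous and locally Lipschitz continuous in its second argument; the flow then maps an initial value \(x_0\) and a time \(t\) to the value of this unique solution at \(t\).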

Not in the generated document are the following sessions:

  • HOL-ODE-Numerics: Rigorous numerical algorithms for computing enclosures of solutions based on Runge-Kutta methods and affine arithmetic. Reachability analysis with splitting and reduction at hyperplanes.
  • HOL-ODE-Examples: Applications of the numerical algorithms to concrete systems of ODEs.
  • Lorenz_C0, Lorenz_C1: Verified algorithms for checking C1-information according to Tucker's proof, computation of C0-information.

extra-history = Change history: [2014-02-13]: added an implementation of the Euler method based on affine arithmetic
[2016-04-14]: added flow and variational equation
[2016-08-03]: numerical algorithms for reachability analysis (using second-order Runge-Kutta methods, splitting, and reduction) implemented using Lammich's framework for automatic refinement
[2017-09-20]: added Poincare map and propagation of variational equation in reachability analysis, verified algorithms for C1-information and computations for C0-information of the Lorenz attractor. notify = immler@in.tum.de, hoelzl@in.tum.de [Polynomials] title = Executable Multivariate Polynomials author = Christian Sternagel , René Thiemann , Alexander Maletzky , Fabian Immler , Florian Haftmann , Andreas Lochbihler , Alexander Bentkamp date = 2010-08-10 topic = Mathematics/Analysis, Mathematics/Algebra, Computer science/Algorithms/Mathematical license = LGPL abstract = We define multivariate polynomials over arbitrary (ordered) semirings in combination with (executable) operations like addition, multiplication, and substitution. We also define (weak) monotonicity of polynomials and comparison of polynomials where we provide standard estimations like absolute positiveness or the more recent approach of Neurauter, Zankl, and Middeldorp. Moreover, it is proven that strongly normalizing (monotone) orders can be lifted to strongly normalizing (monotone) orders over polynomials. Our formalization was performed as part of the IsaFoR/CeTA-system which contains several termination techniques. The provided theories have been essential to formalize polynomial interpretations.

This formalization also contains an abstract representation as coefficient functions with finite support and a type of power-products. If this type is ordered by a linear (term) ordering, various additional notions, such as leading power-product, leading coefficient etc., are introduced as well. Furthermore, a lot of generic properties of, and functions on, multivariate polynomials are formalized, including the substitution and evaluation homomorphisms, embeddings of polynomial rings into larger rings (i.e. with one additional indeterminate), homogenization and dehomogenization of polynomials, and the canonical isomorphism between R[X,Y] and R[X][Y]. extra-history = Change history: [2010-09-17]: Moved theories on arbitrary (ordered) semirings to Abstract Rewriting.
[2016-10-28]: Added abstract representation of polynomials and authors Maletzky/Immler.
[2018-01-23]: Added authors Haftmann, Lochbihler after incorporating their formalization of multivariate polynomials based on Polynomial mappings. Moved material from Bentkamp's entry "Deep Learning".
[2019-04-18]: Added material about polynomials whose power-products are represented themselves by polynomial mappings. notify = rene.thiemann@uibk.ac.at, christian.sternagel@uibk.ac.at, alexander.maletzky@risc.jku.at, immler@in.tum.de [Sqrt_Babylonian] title = Computing N-th Roots using the Babylonian Method author = René Thiemann date = 2013-01-03 topic = Mathematics/Analysis license = LGPL abstract = We implement the Babylonian method to compute n-th roots of numbers. We provide precise algorithms for naturals, integers and rationals, and offer an approximation algorithm for square roots over linear ordered fields. Moreover, there are precise algorithms to compute the floor and the ceiling of n-th roots. extra-history = Change history: [2013-10-16]: Added algorithms to compute floor and ceiling of sqrt of integers. [2014-07-11]: Moved NthRoot_Impl from Real-Impl to this entry. notify = rene.thiemann@uibk.ac.at [Sturm_Sequences] title = Sturm's Theorem author = Manuel Eberl date = 2014-01-11 topic = Mathematics/Analysis abstract = Sturm's Theorem states that polynomial sequences with certain properties, so-called Sturm sequences, can be used to count the number of real roots of a real polynomial. This work contains a proof of Sturm's Theorem and code for constructing Sturm sequences efficiently. It also provides the “sturm” proof method, which can decide certain statements about the roots of real polynomials, such as “the polynomial P has exactly n roots in the interval I” or “P(x) > Q(x) for all x ∈ ℝ”. notify = eberlm@in.tum.de [Sturm_Tarski] title = The Sturm-Tarski Theorem author = Wenda Li date = 2014-09-19 topic = Mathematics/Analysis abstract = We have formalized the Sturm-Tarski theorem (also referred to as the Tarski theorem), which generalizes Sturm's theorem. Sturm's theorem is usually used as a way to count distinct real roots, while the Sturm-Tarski theorem forms the basis for Tarski's classic quantifier elimination for real closed fields. notify = wl302@cam.ac.uk [Markov_Models] title = Markov Models author = Johannes Hölzl , Tobias Nipkow date = 2012-01-03 topic = Mathematics/Probability theory, Computer science/Automata and formal languages abstract = This is a formalization of Markov models in Isabelle/HOL. It builds on Isabelle's probability theory. The available models are currently Discrete-Time Markov Chains and extensions of them with rewards.

As application of these models we formalize probabilistic model checking of pCTL formulas, analysis of IPv4 address allocation in ZeroConf and an analysis of the anonymity of the Crowds protocol. See here for the corresponding paper. notify = hoelzl@in.tum.de [Probabilistic_System_Zoo] title = A Zoo of Probabilistic Systems author = Johannes Hölzl , Andreas Lochbihler , Dmitriy Traytel date = 2015-05-27 topic = Computer science/Automata and formal languages abstract = Numerous models of probabilistic systems are studied in the literature. Coalgebra has been used to classify them into system types and compare their expressiveness. We formalize the resulting hierarchy of probabilistic system types by modeling the semantics of the different systems as codatatypes. This approach yields simple and concise proofs, as bisimilarity coincides with equality for codatatypes.
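
As a rough orientation (a schematic classification along the lines of the coalgebraic literature; the system types formalized in the entry may be delimited differently), such probabilistic system types can be described as final coalgebras of functors such as
\[ X \mapsto \mathcal{D}(X) \ \text{(Markov chains)}, \qquad X \mapsto \mathcal{D}(A \times X) \ \text{(generative systems)}, \qquad X \mapsto (A \to \mathcal{D}(X)) \ \text{(reactive systems)}, \]
where \(\mathcal{D}\) denotes a (sub)probability distribution functor and \(A\) a set of labels.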

This work is described in detail in the ITP 2015 publication by the authors. notify = traytel@in.tum.de [Density_Compiler] title = A Verified Compiler for Probability Density Functions author = Manuel Eberl , Johannes Hölzl , Tobias Nipkow date = 2014-10-09 topic = Mathematics/Probability theory, Computer science/Programming languages/Compiling abstract = Bhat et al. [TACAS 2013] developed an inductive compiler that computes density functions for probability spaces described by programs in a probabilistic functional language. In this work, we implement such a compiler for a modified version of this language within the theorem prover Isabelle and give a formal proof of its soundness w.r.t. the semantics of the source and target language. Together with Isabelle's code generation for inductive predicates, this yields a fully verified, executable density compiler. The proof is done in two steps: First, an abstract compiler working with abstract functions modelled directly in the theorem prover's logic is defined and proved sound. Then, this compiler is refined to a concrete version that returns a target-language expression.
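
Roughly, the soundness statement has the following shape (schematic only; the entry's precise formulation in terms of the source and target semantics differs in detail): if the compiler translates a program e into a density expression d, then the measure denoted by e has the function denoted by d as a density with respect to the underlying stock measure \(\mu\), i.e.
\[ \llbracket e \rrbracket(A) \;=\; \int_{A} \llbracket d \rrbracket(x) \,\mathrm{d}\mu(x) \qquad \text{for all measurable sets } A. \]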

An article with the same title and authors is published in the proceedings of ESOP 2015. A detailed presentation of this work can be found in the first author's master's thesis. notify = hoelzl@in.tum.de [CAVA_Automata] title = The CAVA Automata Library author = Peter Lammich date = 2014-05-28 topic = Computer science/Automata and formal languages abstract = We report on the graph and automata library that is used in the fully verified LTL model checker CAVA. As most components of CAVA use some type of graphs or automata, a common automata library simplifies assembly of the components and reduces redundancy.

The CAVA Automata Library provides a hierarchy of graph and automata classes, together with some standard algorithms. Its object oriented design allows for sharing of algorithms, theorems, and implementations between its classes, and also simplifies extensions of the library. Moreover, it is integrated into the Automatic Refinement Framework, supporting automatic refinement of the abstract automata types to efficient data structures.

Note that the CAVA Automata Library is work in progress. Currently, it is very specifically tailored towards the requirements of the CAVA model checker. Nevertheless, the formalization techniques presented here allow an extension of the library to a wider scope. Moreover, they are not limited to graph libraries, but apply to class hierarchies in general.

The CAVA Automata Library is described in the paper: Peter Lammich, The CAVA Automata Library, Isabelle Workshop 2014. notify = lammich@in.tum.de [LTL] title = Linear Temporal Logic author = Salomon Sickert contributors = Benedikt Seidl date = 2016-03-01 topic = Logic/General logic/Temporal logic, Computer science/Automata and formal languages abstract = This theory provides a formalisation of linear temporal logic (LTL) and unifies previous formalisations within the AFP. This entry establishes syntax and semantics for this logic and decouples it from existing entries, yielding a common environment for theories reasoning about LTL. Furthermore a parser written in SML and an executable simplifier are provided. extra-history = Change history: [2019-03-12]: Support for additional operators, implementation of common equivalence relations, definition of syntactic fragments of LTL and the minimal disjunctive normal form.
notify = sickert@in.tum.de [LTL_to_GBA] title = Converting Linear-Time Temporal Logic to Generalized Büchi Automata author = Alexander Schimpf , Peter Lammich date = 2014-05-28 topic = Computer science/Automata and formal languages abstract = We formalize linear-time temporal logic (LTL) and the algorithm by Gerth et al. to convert LTL formulas to generalized Büchi automata. We also formalize some syntactic rewrite rules that can be applied to optimize the LTL formula before conversion. Moreover, we integrate the Stuttering Equivalence AFP-Entry by Stefan Merz, adapting the lemma that next-free LTL formulas cannot distinguish between stuttering equivalent runs to our setting.

We use the Isabelle Refinement and Collection framework, as well as the Autoref tool, to obtain a refined version of our algorithm, from which efficiently executable code can be extracted. notify = lammich@in.tum.de [Gabow_SCC] title = Verified Efficient Implementation of Gabow's Strongly Connected Components Algorithm author = Peter Lammich date = 2014-05-28 topic = Computer science/Algorithms/Graph, Mathematics/Graph theory abstract = We present an Isabelle/HOL formalization of Gabow's algorithm for finding the strongly connected components of a directed graph. Using data refinement techniques, we extract efficient code that performs comparable to a reference implementation in Java. Our style of formalization allows for re-using large parts of the proofs when defining variants of the algorithm. We demonstrate this by verifying an algorithm for the emptiness check of generalized Büchi automata, re-using most of the existing proofs. notify = lammich@in.tum.de [Promela] title = Promela Formalization author = René Neumann date = 2014-05-28 topic = Computer science/System description languages abstract = We present an executable formalization of the language Promela, the description language for models of the model checker SPIN. This formalization is part of the work for a completely verified model checker (CAVA), but also serves as a useful (and executable!) description of the semantics of the language itself, something that is currently missing. The formalization uses three steps: It takes an abstract syntax tree generated from an SML parser, removes syntactic sugar and enriches it with type information. This further gets translated into a transition system, on which the semantic engine (read: successor function) operates. notify = [CAVA_LTL_Modelchecker] title = A Fully Verified Executable LTL Model Checker author = Javier Esparza , Peter Lammich , René Neumann , Tobias Nipkow , Alexander Schimpf , Jan-Georg Smaus date = 2014-05-28 topic = Computer science/Automata and formal languages abstract = We present an LTL model checker whose code has been completely verified using the Isabelle theorem prover. The checker consists of over 4000 lines of ML code. The code is produced using the Isabelle Refinement Framework, which allows us to split its correctness proof into (1) the proof of an abstract version of the checker, consisting of a few hundred lines of ``formalized pseudocode'', and (2) a verified refinement step in which mathematical sets and other abstract structures are replaced by implementations of efficient structures like red-black trees and functional arrays. This leads to a checker that, while still slower than unverified checkers, can already be used as a trusted reference implementation against which advanced implementations can be tested.

An early version of this model checker is described in the CAV 2013 paper with the same title. notify = lammich@in.tum.de [Fermat3_4] title = Fermat's Last Theorem for Exponents 3 and 4 and the Parametrisation of Pythagorean Triples author = Roelof Oosterhuis <> date = 2007-08-12 topic = Mathematics/Number theory abstract = This document presents the mechanised proofs of

  • Fermat's Last Theorem for exponents 3 and 4 and
  • the parametrisation of Pythagorean Triples.
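For orientation (an editorial aside, not part of the formal entry), the classical parametrisation referred to in the second item states that every primitive Pythagorean triple arises, up to the order of the two legs, as

    \[ (a, b, c) = (m^2 - n^2,\; 2mn,\; m^2 + n^2), \qquad m > n > 0,\ \gcd(m, n) = 1,\ m \not\equiv n \pmod{2}. \]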
notify = nipkow@in.tum.de, roelofoosterhuis@gmail.com [Perfect-Number-Thm] title = Perfect Number Theorem author = Mark Ijbema date = 2009-11-22 topic = Mathematics/Number theory abstract = These theories present the mechanised proof of the Perfect Number Theorem. notify = nipkow@in.tum.de [SumSquares] title = Sums of Two and Four Squares author = Roelof Oosterhuis <> date = 2007-08-12 topic = Mathematics/Number theory abstract = This document presents the mechanised proofs of the following results:
  • any prime number of the form 4m+1 can be written as the sum of two squares;
  • any natural number can be written as the sum of four squares
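A small numerical illustration (editorial, not taken from the entry): 97 is a prime with 97 ≡ 1 (mod 4), and indeed

    \[ 97 = 9^2 + 4^2, \qquad\text{while, e.g.,}\qquad 310 = 17^2 + 4^2 + 2^2 + 1^2 \]

exemplifies the four-square statement.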
notify = nipkow@in.tum.de, roelofoosterhuis@gmail.com [Lehmer] title = Lehmer's Theorem author = Simon Wimmer , Lars Noschinski date = 2013-07-22 topic = Mathematics/Number theory abstract = In 1927, Lehmer presented criteria for primality, based on the converse of Fermat's little theorem. This work formalizes the second criterion from Lehmer's paper, a necessary and sufficient condition for primality.

As a side product we formalize some properties of Euler's phi-function, the notion of the order of an element of a group, and the cyclicity of the multiplicative group of a finite field. notify = noschinl@gmail.com, simon.wimmer@tum.de [Pratt_Certificate] title = Pratt's Primality Certificates author = Simon Wimmer , Lars Noschinski date = 2013-07-22 topic = Mathematics/Number theory abstract = In 1975, Pratt introduced a proof system for certifying primes. He showed that a number p is prime iff a primality certificate for p exists. By showing a logarithmic upper bound on the length of the certificates in size of the prime number, he concluded that the decision problem for prime numbers is in NP. This work formalizes soundness and completeness of Pratt's proof system as well as an upper bound for the size of the certificate. notify = noschinl@gmail.com, simon.wimmer@tum.de [Monad_Memo_DP] title = Monadification, Memoization and Dynamic Programming author = Simon Wimmer , Shuwei Hu , Tobias Nipkow topic = Computer science/Programming languages/Transformations, Computer science/Algorithms, Computer science/Functional programming date = 2018-05-22 notify = wimmers@in.tum.de abstract = We present a lightweight framework for the automatic verified (functional or imperative) memoization of recursive functions. Our tool can turn a pure Isabelle/HOL function definition into a monadified version in a state monad or the Imperative HOL heap monad, and prove a correspondence theorem. We provide a variety of memory implementations for the two types of monads. A number of simple techniques allow us to achieve bottom-up computation and space-efficient memoization. The framework’s utility is demonstrated on a number of representative dynamic programming problems. A detailed description of our work can be found in the accompanying paper [2]. [Probabilistic_Timed_Automata] title = Probabilistic Timed Automata author = Simon Wimmer , Johannes Hölzl topic = Mathematics/Probability theory, Computer science/Automata and formal languages date = 2018-05-24 notify = wimmers@in.tum.de, hoelzl@in.tum.de abstract = We present a formalization of probabilistic timed automata (PTA) for which we try to follow the formula MDP + TA = PTA as far as possible: our work starts from our existing formalizations of Markov decision processes (MDP) and timed automata (TA) and combines them modularly. We prove the fundamental result for probabilistic timed automata: the region construction that is known from timed automata carries over to the probabilistic setting. In particular, this allows us to prove that minimum and maximum reachability probabilities can be computed via a reduction to MDP model checking, including the case where one wants to disregard unrealizable behavior. Further information can be found in our ITP paper [2]. [Hidden_Markov_Models] title = Hidden Markov Models author = Simon Wimmer topic = Mathematics/Probability theory, Computer science/Algorithms date = 2018-05-25 notify = wimmers@in.tum.de abstract = This entry contains a formalization of hidden Markov models [3] based on Johannes Hölzl's formalization of discrete time Markov chains [1]. The basic definitions are provided and the correctness of two main (dynamic programming) algorithms for hidden Markov models is proved: the forward algorithm for computing the likelihood of an observed sequence, and the Viterbi algorithm for decoding the most probable hidden state sequence. The Viterbi algorithm is made executable including memoization. 
Hidden Markov models have various applications in natural language processing. For an introduction see Jurafsky and Martin [2]. [ArrowImpossibilityGS] title = Arrow and Gibbard-Satterthwaite author = Tobias Nipkow date = 2008-09-01 topic = Mathematics/Games and economics abstract = This article formalizes two proofs of Arrow's impossibility theorem due to Geanakoplos and derives the Gibbard-Satterthwaite theorem as a corollary. One formalization is based on utility functions, the other one on strict partial orders.

An article about these proofs is found here. notify = nipkow@in.tum.de [SenSocialChoice] title = Some classical results in Social Choice Theory author = Peter Gammie date = 2008-11-09 topic = Mathematics/Games and economics abstract = Drawing on Sen's landmark work "Collective Choice and Social Welfare" (1970), this development proves Arrow's General Possibility Theorem, Sen's Liberal Paradox and May's Theorem in a general setting. The goal was to make precise the classical statements and proofs of these results, and to provide a foundation for more recent results such as the Gibbard-Satterthwaite and Duggan-Schwartz theorems. notify = nipkow@in.tum.de [Vickrey_Clarke_Groves] title = VCG - Combinatorial Vickrey-Clarke-Groves Auctions author = Marco B. Caminati <>, Manfred Kerber , Christoph Lange, Colin Rowat date = 2015-04-30 topic = Mathematics/Games and economics abstract = A VCG auction (named after their inventors Vickrey, Clarke, and Groves) is a generalization of the single-good, second price Vickrey auction to the case of a combinatorial auction (multiple goods, from which any participant can bid on each possible combination). We formalize in this entry VCG auctions, including tie-breaking and prove that the functions for the allocation and the price determination are well-defined. Furthermore we show that the allocation function allocates goods only to participants, only goods in the auction are allocated, and no good is allocated twice. We also show that the price function is non-negative. These properties also hold for the automatically extracted Scala code. notify = mnfrd.krbr@gmail.com [Topology] title = Topology author = Stefan Friedrich <> date = 2004-04-26 topic = Mathematics/Topology abstract = This entry contains two theories. The first, Topology, develops the basic notions of general topology. The second, which can be viewed as a demonstration of the first, is called LList_Topology. It develops the topology of lazy lists. notify = lcp@cl.cam.ac.uk [Knot_Theory] title = Knot Theory author = T.V.H. Prathamesh date = 2016-01-20 topic = Mathematics/Topology abstract = This work contains a formalization of some topics in knot theory. The concepts that were formalized include definitions of tangles, links, framed links and link/tangle equivalence. The formalization is based on a formulation of links in terms of tangles. We further construct and prove the invariance of the Bracket polynomial. Bracket polynomial is an invariant of framed links closely linked to the Jones polynomial. This is perhaps the first attempt to formalize any aspect of knot theory in an interactive proof assistant. notify = prathamesh@imsc.res.in [Graph_Theory] title = Graph Theory author = Lars Noschinski date = 2013-04-28 topic = Mathematics/Graph theory abstract = This development provides a formalization of directed graphs, supporting (labelled) multi-edges and infinite graphs. A polymorphic edge type allows edges to be treated as pairs of vertices, if multi-edges are not required. Formalized properties are i.a. walks (and related concepts), connectedness and subgraphs and basic properties of isomorphisms.

This formalization is used to prove characterizations of Euler Trails, Shortest Paths and Kuratowski subgraphs. notify = noschinl@gmail.com [Planarity_Certificates] title = Planarity Certificates author = Lars Noschinski date = 2015-11-11 topic = Mathematics/Graph theory abstract = This development provides a formalization of planarity based on combinatorial maps and proves that Kuratowski's theorem implies combinatorial planarity. Moreover, it contains verified implementations of programs checking certificates for planarity (i.e., a combinatorial map) or non-planarity (i.e., a Kuratowski subgraph). notify = noschinl@gmail.com [Max-Card-Matching] title = Maximum Cardinality Matching author = Christine Rizkallah date = 2011-07-21 topic = Mathematics/Graph theory abstract =

A matching in a graph G is a subset M of the edges of G such that no two share an endpoint. A matching has maximum cardinality if its cardinality is at least as large as that of any other matching. An odd-set cover OSC of a graph G is a labeling of the nodes of G with integers such that every edge of G is either incident to a node labeled 1 or connects two nodes labeled with the same number i ≥ 2.

This article proves Edmonds' theorem:
Let M be a matching in a graph G and let OSC be an odd-set cover of G. For any i ≥ 0, let n(i) be the number of nodes labeled i. If |M| = n(1) + ∑_{i ≥ 2} (n(i) div 2), then M is a maximum cardinality matching.
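A small illustration of how the theorem certifies maximality (an editorial example, not part of the entry): in the triangle graph K_3, label every node with 2; this is an odd-set cover, since each edge connects two nodes carrying the same label i = 2. For a matching M consisting of a single edge,

    \[ |M| = 1 = n(1) + (n(2)\ \mathrm{div}\ 2) = 0 + (3\ \mathrm{div}\ 2), \]

so the theorem guarantees that M has maximum cardinality.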

notify = nipkow@in.tum.de [Girth_Chromatic] title = A Probabilistic Proof of the Girth-Chromatic Number Theorem author = Lars Noschinski date = 2012-02-06 topic = Mathematics/Graph theory abstract = This work presents a formalization of the Girth-Chromatic number theorem in graph theory, stating that graphs with arbitrarily large girth and chromatic number exist. The proof uses the theory of Random Graphs to prove the existence with probabilistic arguments. notify = noschinl@gmail.com [Random_Graph_Subgraph_Threshold] title = Properties of Random Graphs -- Subgraph Containment author = Lars Hupel date = 2014-02-13 topic = Mathematics/Graph theory, Mathematics/Probability theory abstract = Random graphs are graphs with a fixed number of vertices, where each edge is present with a fixed probability. We are interested in the probability that a random graph contains a certain pattern, for example a cycle or a clique. A very high edge probability gives rise to perhaps too many edges (which degrades performance for many algorithms), whereas a low edge probability might result in a disconnected graph. We prove a theorem about a threshold probability such that a higher edge probability will asymptotically almost surely produce a random graph with the desired subgraph. notify = hupel@in.tum.de [Flyspeck-Tame] title = Flyspeck I: Tame Graphs author = Gertrud Bauer <>, Tobias Nipkow date = 2006-05-22 topic = Mathematics/Graph theory abstract = These theories present the verified enumeration of tame plane graphs as defined by Thomas C. Hales in his proof of the Kepler Conjecture in his book Dense Sphere Packings. A Blueprint for Formal Proofs. [CUP 2012]. The values of the constants in the definition of tameness are identical to those in the Flyspeck project. The IJCAR 2006 paper by Nipkow, Bauer and Schultz refers to the original version of Hales' proof, the ITP 2011 paper by Nipkow refers to the Blueprint version of the proof. extra-history = Change history: [2010-11-02]: modified theories to reflect the modified definition of tameness in Hales' revised proof.
[2014-07-03]: modified constants in def of tameness and Archive according to the final state of the Flyspeck proof. notify = nipkow@in.tum.de [Well_Quasi_Orders] title = Well-Quasi-Orders author = Christian Sternagel date = 2012-04-13 topic = Mathematics/Combinatorics abstract = Based on Isabelle/HOL's type class for preorders, we introduce a type class for well-quasi-orders (wqo) which is characterized by the absence of "bad" sequences (our proofs are along the lines of the proof of Nash-Williams, from which we also borrow terminology). Our main results are instantiations for the product type, the list type, and a type of finite trees, which (almost) directly follow from our proofs of (1) Dickson's Lemma, (2) Higman's Lemma, and (3) Kruskal's Tree Theorem. More concretely:
  • If the sets A and B are wqo then their Cartesian product is wqo.
  • If the set A is wqo then the set of finite lists over A is wqo.
  • If the set A is wqo then the set of finite trees over A is wqo.
The research was funded by the Austrian Science Fund (FWF): J3202. extra-history = Change history: [2012-06-11]: Added Kruskal's Tree Theorem.
[2012-12-19]: New variant of Kruskal's tree theorem for terms (as opposed to variadic terms, i.e., trees), plus finite version of the tree theorem as corollary.
[2013-05-16]: Simplified construction of minimal bad sequences.
[2014-07-09]: Simplified proofs of Higman's lemma and Kruskal's tree theorem, based on homogeneous sequences.
[2016-01-03]: An alternative proof of Higman's lemma by open induction.
[2017-06-08]: Proved (classical) equivalence to inductive definition of almost-full relations according to the ITP 2012 paper "Stop When You Are Almost-Full" by Vytiniotis, Coquand, and Wahlstedt. notify = c.sternagel@gmail.com [Marriage] title = Hall's Marriage Theorem author = Dongchen Jiang , Tobias Nipkow date = 2010-12-17 topic = Mathematics/Combinatorics abstract = Two proofs of Hall's Marriage Theorem: one due to Halmos and Vaughan, one due to Rado. extra-history = Change history: [2011-09-09]: Added Rado's proof notify = nipkow@in.tum.de [Bondy] title = Bondy's Theorem author = Jeremy Avigad , Stefan Hetzl date = 2012-10-27 topic = Mathematics/Combinatorics abstract = A proof of Bondy's theorem following B. Bollabas, Combinatorics, 1986, Cambridge University Press. notify = avigad@cmu.edu, hetzl@logic.at [Ramsey-Infinite] title = Ramsey's theorem, infinitary version author = Tom Ridge <> date = 2004-09-20 topic = Mathematics/Combinatorics abstract = This formalization of Ramsey's theorem (infinitary version) is taken from Boolos and Jeffrey, Computability and Logic, 3rd edition, Chapter 26. It differs slightly from the text by assuming a slightly stronger hypothesis. In particular, the induction hypothesis is stronger, holding for any infinite subset of the naturals. This avoids the rather peculiar mapping argument between kj and aikj on p.263, which is unnecessary and slightly mars this really beautiful result. notify = lp15@cam.ac.uk [Derangements] title = Derangements Formula author = Lukas Bulwahn date = 2015-06-27 topic = Mathematics/Combinatorics abstract = The Derangements Formula describes the number of fixpoint-free permutations as a closed formula. This theorem is the 88th theorem in a list of the ``Top 100 Mathematical Theorems''. notify = lukas.bulwahn@gmail.com [Euler_Partition] title = Euler's Partition Theorem author = Lukas Bulwahn date = 2015-11-19 topic = Mathematics/Combinatorics abstract = Euler's Partition Theorem states that the number of partitions with only distinct parts is equal to the number of partitions with only odd parts. The combinatorial proof follows John Harrison's HOL Light formalization. This theorem is the 45th theorem of the Top 100 Theorems list. notify = lukas.bulwahn@gmail.com [Discrete_Summation] title = Discrete Summation author = Florian Haftmann contributors = Amine Chaieb <> date = 2014-04-13 topic = Mathematics/Combinatorics abstract = These theories introduce basic concepts and proofs about discrete summation: shifts, formal summation, falling factorials and stirling numbers. As proof of concept, a simple summation conversion is provided. notify = florian.haftmann@informatik.tu-muenchen.de [Open_Induction] title = Open Induction author = Mizuhito Ogawa <>, Christian Sternagel date = 2012-11-02 topic = Mathematics/Combinatorics abstract = A proof of the open induction schema based on J.-C. Raoult, Proving open properties by induction, Information Processing Letters 29, 1988, pp.19-23.

This research was supported by the Austrian Science Fund (FWF): J3202.

notify = c.sternagel@gmail.com [Category] title = Category Theory to Yoneda's Lemma author = Greg O'Keefe date = 2005-04-21 topic = Mathematics/Category theory license = LGPL abstract = This development proves Yoneda's lemma and aims to be readable by humans. It only defines what is needed for the lemma: categories, functors and natural transformations. Limits, adjunctions and other important concepts are not included. extra-history = Change history: [2010-04-23]: The definition of the constant equinumerous was slightly too weak in the original submission and has been fixed in revision 8c2b5b3c995f. notify = lcp@cl.cam.ac.uk [Category2] title = Category Theory author = Alexander Katovsky date = 2010-06-20 topic = Mathematics/Category theory abstract = This article presents a development of Category Theory in Isabelle/HOL. A Category is defined using records and locales. Functors and Natural Transformations are also defined. The main result that has been formalized is that the Yoneda functor is a full and faithful embedding. We also formalize the completeness of many sorted monadic equational logic. Extensive use is made of the HOLZF theory in both cases. For an informal description see here [pdf]. notify = alexander.katovsky@cantab.net [FunWithFunctions] title = Fun With Functions author = Tobias Nipkow date = 2008-08-26 topic = Mathematics/Misc abstract = This is a collection of cute puzzles of the form ``Show that if a function satisfies the following constraints, it must be ...'' Please add further examples to this collection! notify = nipkow@in.tum.de [FunWithTilings] title = Fun With Tilings author = Tobias Nipkow , Lawrence C. Paulson date = 2008-11-07 topic = Mathematics/Misc abstract = Tilings are defined inductively. It is shown that one form of mutilated chess board cannot be tiled with dominoes, while another one can be tiled with L-shaped tiles. Please add further fun examples of this kind! notify = nipkow@in.tum.de [Lazy-Lists-II] title = Lazy Lists II author = Stefan Friedrich <> date = 2004-04-26 topic = Computer science/Data structures abstract = This theory contains some useful extensions to the LList (lazy list) theory by Larry Paulson, including finite, infinite, and positive llists over an alphabet, as well as the new constants take and drop and the prefix order of llists. Finally, the notions of safety and liveness in the sense of Alpern and Schneider (1985) are defined. notify = lcp@cl.cam.ac.uk [Ribbon_Proofs] title = Ribbon Proofs author = John Wickerson <> date = 2013-01-19 topic = Computer science/Programming languages/Logics abstract = This document concerns the theory of ribbon proofs: a diagrammatic proof system, based on separation logic, for verifying program correctness. We include the syntax, proof rules, and soundness results for two alternative formalisations of ribbon proofs.

Compared to traditional proof outlines, ribbon proofs emphasise the structure of a proof, so are intelligible and pedagogical. Because they contain less redundancy than proof outlines, and allow each proof step to be checked locally, they may be more scalable. Where proof outlines are cumbersome to modify, ribbon proofs can be visually manoeuvred to yield proofs of variant programs. notify = [Koenigsberg_Friendship] title = The Königsberg Bridge Problem and the Friendship Theorem author = Wenda Li date = 2013-07-19 topic = Mathematics/Graph theory abstract = This development provides a formalization of undirected graphs and simple graphs, which are based on Benedikt Nordhoff and Peter Lammich's simple formalization of labelled directed graphs in the archive. Then, with our formalization of graphs, we show both necessary and sufficient conditions for Eulerian trails and circuits as well as the fact that the Königsberg Bridge Problem does not have a solution. In addition, we show the Friendship Theorem in simple graphs. notify = [Tree_Decomposition] title = Tree Decomposition author = Christoph Dittmann notify = date = 2016-05-31 topic = Mathematics/Graph theory abstract = We formalize tree decompositions and tree width in Isabelle/HOL, proving that trees have treewidth 1. We also show that every edge of a tree decomposition is a separation of the underlying graph. As an application of this theorem we prove that complete graphs of size n have treewidth n-1. [Menger] title = Menger's Theorem author = Christoph Dittmann topic = Mathematics/Graph theory date = 2017-02-26 notify = isabelle@christoph-d.de abstract = We present a formalization of Menger's Theorem for directed and undirected graphs in Isabelle/HOL. This well-known result shows that if two non-adjacent distinct vertices u, v in a directed graph have no separator smaller than n, then there exist n internally vertex-disjoint paths from u to v. The version for undirected graphs follows immediately because undirected graphs are a special case of directed graphs. [IEEE_Floating_Point] title = A Formal Model of IEEE Floating Point Arithmetic author = Lei Yu contributors = Fabian Hellauer , Fabian Immler date = 2013-07-27 topic = Computer science/Data structures abstract = This development provides a formal model of IEEE-754 floating-point arithmetic. This formalization, including formal specification of the standard and proofs of important properties of floating-point arithmetic, forms the foundation for verifying programs with floating-point computation. There is also a code generation setup for floats so that we can execute programs using this formalization in functional programming languages. notify = lp15@cam.ac.uk, immler@in.tum.de extra-history = Change history: [2017-09-25]: Added conversions from and to software floating point numbers (by Fabian Hellauer and Fabian Immler).
[2018-02-05]: 'Modernized' representation following the formalization in HOL4: former "float_format" and predicate "is_valid" is now encoded in a type "('e, 'f) float" where 'e and 'f encode the size of exponent and fraction. [Native_Word] title = Native Word author = Andreas Lochbihler contributors = Peter Lammich date = 2013-09-17 topic = Computer science/Data structures abstract = This entry makes machine words and machine arithmetic available for code generation from Isabelle/HOL. It provides a common abstraction that hides the differences between the different target languages. The code generator maps these operations to the APIs of the target languages. Apart from that, we extend the available bit operations on types int and integer, and map them to the operations in the target languages. extra-history = Change history: [2013-11-06]: added conversion function between native words and characters (revision fd23d9a7fe3a)
[2014-03-31]: added words of default size in the target language (by Peter Lammich) (revision 25caf5065833)
[2014-10-06]: proper test setup with compilation and execution of tests in all target languages (revision 5d7a1c9ae047)
[2017-09-02]: added 64-bit words (revision c89f86244e3c)
[2018-07-15]: added cast operators for default-size words (revision fc1f1fb8dd30)
notify = mail@andreas-lochbihler.de [XML] title = XML author = Christian Sternagel , René Thiemann date = 2014-10-03 topic = Computer science/Functional programming, Computer science/Data structures abstract = This entry provides an XML library for Isabelle/HOL. This includes parsing and pretty printing of XML trees as well as combinators for transforming XML trees into arbitrary user-defined data. The main contribution of this entry is an interface (fit for code generation) that allows for communication between verified programs formalized in Isabelle/HOL and the outside world via XML. This library was developed as part of the IsaFoR/CeTA project to which we refer for examples of its usage. notify = c.sternagel@gmail.com, rene.thiemann@uibk.ac.at [HereditarilyFinite] title = The Hereditarily Finite Sets author = Lawrence C. Paulson date = 2013-11-17 topic = Logic/Set theory abstract = The theory of hereditarily finite sets is formalised, following the development of Swierczkowski. An HF set is a finite collection of other HF sets; they enjoy an induction principle and satisfy all the axioms of ZF set theory apart from the axiom of infinity, which is negated. All constructions that are possible in ZF set theory (Cartesian products, disjoint sums, natural numbers, functions) without using infinite sets are possible here. The definition of addition for the HF sets follows Kirby. This development forms the foundation for the Isabelle proof of Gödel's incompleteness theorems, which has been formalised separately. extra-history = Change history: [2015-02-23]: Added the theory "Finitary" defining the class of types that can be embedded in hf, including int, char, option, list, etc. notify = lp15@cam.ac.uk [Incompleteness] title = Gödel's Incompleteness Theorems author = Lawrence C. Paulson date = 2013-11-17 topic = Logic/Proof theory abstract = Gödel's two incompleteness theorems are formalised, following a careful presentation by Swierczkowski, in the theory of hereditarily finite sets. This represents the first ever machine-assisted proof of the second incompleteness theorem. Compared with traditional formalisations using Peano arithmetic (see e.g. Boolos), coding is simpler, with no need to formalise the notion of multiplication (let alone that of a prime number) in the formalised calculus upon which the theorem is based. However, other technical problems had to be solved in order to complete the argument. notify = lp15@cam.ac.uk [Finite_Automata_HF] title = Finite Automata in Hereditarily Finite Set Theory author = Lawrence C. Paulson date = 2015-02-05 topic = Computer science/Automata and formal languages abstract = Finite Automata, both deterministic and non-deterministic, for regular languages. The Myhill-Nerode Theorem. Closure under intersection, concatenation, etc. Regular expressions define regular languages. Closure under reversal; the powerset construction mapping NFAs to DFAs. Left and right languages; minimal DFAs. Brzozowski's minimization algorithm. Uniqueness up to isomorphism of minimal DFAs. notify = lp15@cam.ac.uk [Decreasing-Diagrams] title = Decreasing Diagrams author = Harald Zankl license = LGPL date = 2013-11-01 topic = Logic/Rewriting abstract = This theory contains a formalization of decreasing diagrams showing that any locally decreasing abstract rewrite system is confluent. We consider the valley (van Oostrom, TCS 1994) and the conversion version (van Oostrom, RTA 2008) and closely follow the original proofs. As an application we prove Newman's lemma. 
notify = Harald.Zankl@uibk.ac.at [Decreasing-Diagrams-II] title = Decreasing Diagrams II author = Bertram Felgenhauer license = LGPL date = 2015-08-20 topic = Logic/Rewriting abstract = This theory formalizes the commutation version of decreasing diagrams for Church-Rosser modulo. The proof follows Felgenhauer and van Oostrom (RTA 2013). The theory also provides important specializations, in particular van Oostrom’s conversion version (TCS 2008) of decreasing diagrams. notify = bertram.felgenhauer@uibk.ac.at [GoedelGod] title = Gödel's God in Isabelle/HOL author = Christoph Benzmüller , Bruno Woltzenlogel Paleo date = 2013-11-12 topic = Logic/Philosophical aspects abstract = Dana Scott's version of Gödel's proof of God's existence is formalized in quantified modal logic KB (QML KB). QML KB is modeled as a fragment of classical higher-order logic (HOL); thus, the formalization is essentially a formalization in HOL. notify = lp15@cam.ac.uk, c.benzmueller@fu-berlin.de [Types_Tableaus_and_Goedels_God] title = Types, Tableaus and Gödel’s God in Isabelle/HOL author = David Fuenmayor , Christoph Benzmüller topic = Logic/Philosophical aspects date = 2017-05-01 notify = davfuenmayor@gmail.com, c.benzmueller@gmail.com abstract = A computer-formalisation of the essential parts of Fitting's textbook "Types, Tableaus and Gödel's God" in Isabelle/HOL is presented. In particular, Fitting's (and Anderson's) variant of the ontological argument is verified and confirmed. This variant avoids the modal collapse, which has been criticised as an undesirable side-effect of Kurt Gödel's (and Dana Scott's) versions of the ontological argument. Fitting's work is employing an intensional higher-order modal logic, which we shallowly embed here in classical higher-order logic. We then utilize the embedded logic for the formalisation of Fitting's argument. (See also the earlier AFP entry ``Gödel's God in Isabelle/HOL''.) [GewirthPGCProof] title = Formalisation and Evaluation of Alan Gewirth's Proof for the Principle of Generic Consistency in Isabelle/HOL author = David Fuenmayor , Christoph Benzmüller topic = Logic/Philosophical aspects date = 2018-10-30 notify = davfuenmayor@gmail.com, c.benzmueller@gmail.com abstract = An ambitious ethical theory ---Alan Gewirth's "Principle of Generic Consistency"--- is encoded and analysed in Isabelle/HOL. Gewirth's theory has stirred much attention in philosophy and ethics and has been proposed as a potential means to bound the impact of artificial general intelligence. extra-history = Change history: [2019-04-09]: added proof for a stronger variant of the PGC and examplary inferences (revision 88182cb0a2f6)
[Lowe_Ontological_Argument] title = Computer-assisted Reconstruction and Assessment of E. J. Lowe's Modal Ontological Argument author = David Fuenmayor , Christoph Benzmüller topic = Logic/Philosophical aspects date = 2017-09-21 notify = davfuenmayor@gmail.com, c.benzmueller@gmail.com abstract = Computers may help us to understand --not just verify-- philosophical arguments. By utilizing modern proof assistants in an iterative interpretive process, we can reconstruct and assess an argument by fully formal means. Through the mechanization of a variant of St. Anselm's ontological argument by E. J. Lowe, which is a paradigmatic example of a natural-language argument with strong ties to metaphysics and religion, we offer an ideal showcase for our computer-assisted interpretive method. [AnselmGod] title = Anselm's God in Isabelle/HOL author = Ben Blumson topic = Logic/Philosophical aspects date = 2017-09-06 notify = benblumson@gmail.com abstract = Paul Oppenheimer and Edward Zalta's formalisation of Anselm's ontological argument for the existence of God is automated by embedding a free logic for definite descriptions within Isabelle/HOL. [Tail_Recursive_Functions] title = A General Method for the Proof of Theorems on Tail-recursive Functions author = Pasquale Noce date = 2013-12-01 topic = Computer science/Functional programming abstract =

Tail-recursive function definitions are sometimes more straightforward than alternatives, but proving theorems on them may be roundabout because of the peculiar form of the resulting recursion induction rules.

This paper describes a proof method that provides a general solution to this problem by means of suitable invariants over inductive sets, and illustrates the application of this method by examining two case studies.
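As a rough illustration of the setting (a minimal Haskell sketch added editorially; it is not taken from the Isabelle sources of this entry, and the names are made up here), a tail-recursive definition pushes all work into an accumulator, and the invariant recorded in the comments is the kind of property the proposed proof method introduces explicitly:

    -- Tail-recursive factorial for n >= 0: the recursive call is in tail position,
    -- so recursion induction yields goals phrased in terms of the accumulator.
    factAcc :: Integer -> Integer -> Integer
    factAcc acc 0 = acc
    factAcc acc n = factAcc (acc * n) (n - 1)

    -- Invariant over the intermediate states:  factAcc acc n == acc * product [1 .. n],
    -- which with acc = 1 gives the intended specification of factorial.
    factorial :: Integer -> Integer
    factorial = factAcc 1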

notify = pasquale.noce.lavoro@gmail.com [CryptoBasedCompositionalProperties] title = Compositional Properties of Crypto-Based Components author = Maria Spichkova date = 2014-01-11 topic = Computer science/Security abstract = This paper presents an Isabelle/HOL set of theories which allows the specification of crypto-based components and the verification of their composition properties wrt. cryptographic aspects. We introduce a formalisation of the security property of data secrecy, the corresponding definitions and proofs. Please note that here we import the Isabelle/HOL theory ListExtras.thy, presented in the AFP entry FocusStreamsCaseStudies-AFP. notify = maria.spichkova@rmit.edu.au [Featherweight_OCL] title = Featherweight OCL: A Proposal for a Machine-Checked Formal Semantics for OCL 2.5 author = Achim D. Brucker , Frédéric Tuong , Burkhart Wolff date = 2014-01-16 topic = Computer science/System description languages abstract = The Unified Modeling Language (UML) is one of the few modeling languages that is widely used in industry. While UML is mostly known as diagrammatic modeling language (e.g., visualizing class models), it is complemented by a textual language, called Object Constraint Language (OCL). The current version of OCL is based on a four-valued logic that turns UML into a formal language. Any type comprises the elements "invalid" and "null" which are propagated as strict and non-strict, respectively. Unfortunately, the former semi-formal semantics of this specification language, captured in the "Annex A" of the OCL standard, leads to different interpretations of corner cases. We formalize the core of OCL: denotational definitions, a logical calculus and operational rules that allow for the execution of OCL expressions by a mixture of term rewriting and code compilation. Our formalization reveals several inconsistencies and contradictions in the current version of the OCL standard. Overall, this document is intended to provide the basis for a machine-checked text "Annex A" of the OCL standard targeting at tool implementors. extra-history = Change history: [2015-10-13]: afp-devel@ea3b38fc54d6 and hol-testgen@12148
   Update of Featherweight OCL including a change in the abstract.
[2014-01-16]: afp-devel@9091ce05cb20 and hol-testgen@10241
   New Entry: Featherweight OCL notify = brucker@spamfence.net, tuong@users.gforge.inria.fr, wolff@lri.fr [Relation_Algebra] title = Relation Algebra author = Alasdair Armstrong <>, Simon Foster , Georg Struth , Tjark Weber date = 2014-01-25 topic = Mathematics/Algebra abstract = Tarski's algebra of binary relations is formalised along the lines of the standard textbooks of Maddux and Schmidt and Ströhlein. This includes relation-algebraic concepts such as subidentities, vectors and a domain operation as well as various notions associated to functions. Relation algebras are also expanded by a reflexive transitive closure operation, and they are linked with Kleene algebras and models of binary relations and Boolean matrices. notify = g.struth@sheffield.ac.uk, tjark.weber@it.uu.se [PSemigroupsConvolution] title = Partial Semigroups and Convolution Algebras author = Brijesh Dongol , Victor B. F. Gomes , Ian J. Hayes , Georg Struth topic = Mathematics/Algebra date = 2017-06-13 notify = g.struth@sheffield.ac.uk, victor.gomes@cl.cam.ac.uk abstract = Partial Semigroups are relevant to the foundations of quantum mechanics and combinatorics as well as to interval and separation logics. Convolution algebras can be understood either as algebras of generalised binary modalities over ternary Kripke frames, in particular over partial semigroups, or as algebras of quantale-valued functions which are equipped with a convolution-style operation of multiplication that is parametrised by a ternary relation. Convolution algebras provide algebraic semantics for various substructural logics, including categorial, relevance and linear logics, for separation logic and for interval logics; they cover quantitative and qualitative applications. These mathematical components for partial semigroups and convolution algebras provide uniform foundations from which models of computation based on relations, program traces or pomsets, and verification components for separation or interval temporal logics can be built with little effort. [Secondary_Sylow] title = Secondary Sylow Theorems author = Jakob von Raumer date = 2014-01-28 topic = Mathematics/Algebra abstract = These theories extend the existing proof of the first Sylow theorem (written by Florian Kammueller and L. C. Paulson) by what are often called the second, third and fourth Sylow theorems. These theorems state propositions about the number of Sylow p-subgroups of a group and the fact that they are conjugate to each other. The proofs make use of an implementation of group actions and their properties. notify = psxjv4@nottingham.ac.uk [Jordan_Hoelder] title = The Jordan-Hölder Theorem author = Jakob von Raumer date = 2014-09-09 topic = Mathematics/Algebra abstract = This submission contains theories that lead to a formalization of the proof of the Jordan-Hölder theorem about composition series of finite groups. The theories formalize the notions of isomorphism classes of groups, simple groups, normal series, composition series, maximal normal subgroups. Furthermore, they provide proofs of the second isomorphism theorem for groups, the characterization theorem for maximal normal subgroups as well as many useful lemmas about normal subgroups and factor groups. The proof is inspired by course notes of Stuart Rankin. 
notify = psxjv4@nottingham.ac.uk [Cayley_Hamilton] title = The Cayley-Hamilton Theorem author = Stephan Adelsberger , Stefan Hetzl , Florian Pollak date = 2014-09-15 topic = Mathematics/Algebra abstract = This document contains a proof of the Cayley-Hamilton theorem based on the development of matrices in HOL/Multivariate Analysis. notify = stvienna@gmail.com [Probabilistic_Noninterference] title = Probabilistic Noninterference author = Andrei Popescu , Johannes Hölzl date = 2014-03-11 topic = Computer science/Security abstract = We formalize a probabilistic noninterference for a multi-threaded language with uniform scheduling, where probabilistic behaviour comes from both the scheduler and the individual threads. We define notions probabilistic noninterference in two variants: resumption-based and trace-based. For the resumption-based notions, we prove compositionality w.r.t. the language constructs and establish sound type-system-like syntactic criteria. This is a formalization of the mathematical development presented at CPP 2013 and CALCO 2013. It is the probabilistic variant of the Possibilistic Noninterference AFP entry. notify = hoelzl@in.tum.de [HyperCTL] title = A shallow embedding of HyperCTL* author = Markus N. Rabe , Peter Lammich , Andrei Popescu date = 2014-04-16 topic = Computer science/Security, Logic/General logic/Temporal logic abstract = We formalize HyperCTL*, a temporal logic for expressing security properties. We first define a shallow embedding of HyperCTL*, within which we prove inductive and coinductive rules for the operators. Then we show that a HyperCTL* formula captures Goguen-Meseguer noninterference, a landmark information flow property. We also define a deep embedding and connect it to the shallow embedding by a denotational semantics, for which we prove sanity w.r.t. dependence on the free variables. Finally, we show that under some finiteness assumptions about the model, noninterference is given by a (finitary) syntactic formula. notify = uuomul@yahoo.com [Bounded_Deducibility_Security] title = Bounded-Deducibility Security author = Andrei Popescu , Peter Lammich date = 2014-04-22 topic = Computer science/Security abstract = This is a formalization of bounded-deducibility security (BD security), a flexible notion of information-flow security applicable to arbitrary input-output automata. It generalizes Sutherland's classic notion of nondeducibility by factoring in declassification bounds and trigger, whereas nondeducibility states that, in a system, information cannot flow between specified sources and sinks, BD security indicates upper bounds for the flow and triggers under which these upper bounds are no longer guaranteed. notify = uuomul@yahoo.com, lammich@in.tum.de [Network_Security_Policy_Verification] title = Network Security Policy Verification author = Cornelius Diekmann date = 2014-07-04 topic = Computer science/Security abstract = We present a unified theory for verifying network security policies. A security policy is represented as directed graph. To check high-level security goals, security invariants over the policy are expressed. We cover monotonic security invariants, i.e. prohibiting more does not harm security. We provide the following contributions for the security invariant theory.
  • Secure auto-completion of scenario-specific knowledge, which eases usability.
  • Security violations can be repaired by tightening the policy iff the security invariants hold for the deny-all policy.
  • An algorithm to compute a security policy.
  • A formalization of stateful connection semantics in network security mechanisms.
  • An algorithm to compute a secure stateful implementation of a policy.
  • An executable implementation of all the theory.
  • Examples, ranging from an aircraft cabin data network to the analysis of a large real-world firewall.
  • More examples: A fully automated translation of high-level security goals to both firewall and SDN configurations (see Examples/Distributed_WebApp.thy).
For a detailed description, see extra-history = Change history: [2015-04-14]: Added Distributed WebApp example and improved graphviz visualization (revision 4dde08ca2ab8)
notify = diekmann@net.in.tum.de [Abstract_Completeness] title = Abstract Completeness author = Jasmin Christian Blanchette , Andrei Popescu , Dmitriy Traytel date = 2014-04-16 topic = Logic/Proof theory abstract = A formalization of an abstract property of possibly infinite derivation trees (modeled by a codatatype), representing the core of a proof (in Beth/Hintikka style) of the first-order logic completeness theorem, independent of the concrete syntax or inference rules. This work is described in detail in the IJCAR 2014 publication by the authors. The abstract proof can be instantiated for a wide range of Gentzen and tableau systems as well as various flavors of FOL---e.g., with or without predicates, equality, or sorts. Here, we give only a toy example instantiation with classical propositional logic. A more serious instance---many-sorted FOL with equality---is described elsewhere [Blanchette and Popescu, FroCoS 2013]. notify = traytel@in.tum.de [Pop_Refinement] title = Pop-Refinement author = Alessandro Coglio date = 2014-07-03 topic = Computer science/Programming languages/Misc abstract = Pop-refinement is an approach to stepwise refinement, carried out inside an interactive theorem prover by constructing a monotonically decreasing sequence of predicates over deeply embedded target programs. The sequence starts with a predicate that characterizes the possible implementations, and ends with a predicate that characterizes a unique program in explicit syntactic form. Pop-refinement enables more requirements (e.g. program-level and non-functional) to be captured in the initial specification and preserved through refinement. Security requirements expressed as hyperproperties (i.e. predicates over sets of traces) are always preserved by pop-refinement, unlike the popular notion of refinement as trace set inclusion. Two simple examples in Isabelle/HOL are presented, featuring program-level requirements, non-functional requirements, and hyperproperties. notify = coglio@kestrel.edu [VectorSpace] title = Vector Spaces author = Holden Lee date = 2014-08-29 topic = Mathematics/Algebra abstract = This formalisation of basic linear algebra is based completely on locales, building off HOL-Algebra. It includes basic definitions: linear combinations, span, linear independence; linear transformations; interpretation of function spaces as vector spaces; the direct sum of vector spaces, sum of subspaces; the replacement theorem; existence of bases in finite-dimensional; vector spaces, definition of dimension; the rank-nullity theorem. Some concepts are actually defined and proved for modules as they also apply there. Infinite-dimensional vector spaces are supported, but dimension is only supported for finite-dimensional vector spaces. The proofs are standard; the proofs of the replacement theorem and rank-nullity theorem roughly follow the presentation in Linear Algebra by Friedberg, Insel, and Spence. The rank-nullity theorem generalises the existing development in the Archive of Formal Proof (originally using type classes, now using a mix of type classes and locales). notify = holdenl@princeton.edu [Special_Function_Bounds] title = Real-Valued Special Functions: Upper and Lower Bounds author = Lawrence C. Paulson date = 2014-08-29 topic = Mathematics/Analysis abstract = This development proves upper and lower bounds for several familiar real-valued functions. For sin, cos, exp and sqrt, it defines and verifies infinite families of upper and lower bounds, mostly based on Taylor series expansions. 
For arctan, ln and exp, it verifies a finite collection of upper and lower bounds, originally obtained from the functions' continued fraction expansions using the computer algebra system Maple. A common theme in these proofs is to take the difference between a function and its approximation, which should be zero at one point, and then consider the sign of the derivative. The immediate purpose of this development is to verify axioms used by MetiTarski, an automatic theorem prover for real-valued special functions. Crucial to MetiTarski's operation is the provision of upper and lower bounds for each function of interest. notify = lp15@cam.ac.uk [Landau_Symbols] title = Landau Symbols author = Manuel Eberl date = 2015-07-14 topic = Mathematics/Analysis abstract = This entry provides Landau symbols to describe and reason about the asymptotic growth of functions for sufficiently large inputs. A number of simplification procedures are provided for additional convenience: cancelling of dominated terms in sums under a Landau symbol, cancelling of common factors in products, and a decision procedure for Landau expressions containing products of powers of functions like x, ln(x), ln(ln(x)) etc. notify = eberlm@in.tum.de [Error_Function] title = The Error Function author = Manuel Eberl topic = Mathematics/Analysis date = 2018-02-06 notify = eberlm@in.tum.de abstract =

This entry provides the definitions and basic properties of the complex and real error function erf and the complementary error function erfc. Additionally, it gives their full asymptotic expansions.
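For reference (an editorial note, not a claim about the exact formulation used in the entry), the standard real definitions are

    \[ \operatorname{erf}(x) = \frac{2}{\sqrt{\pi}} \int_0^x e^{-t^2} \, dt, \qquad \operatorname{erfc}(x) = 1 - \operatorname{erf}(x). \]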

[Akra_Bazzi] title = The Akra-Bazzi theorem and the Master theorem author = Manuel Eberl date = 2015-07-14 topic = Mathematics/Analysis abstract = This article contains a formalisation of the Akra-Bazzi method based on a proof by Leighton. It is a generalisation of the well-known Master Theorem for analysing the complexity of Divide & Conquer algorithms. We also include a generalised version of the Master theorem based on the Akra-Bazzi theorem, which is easier to apply than the Akra-Bazzi theorem itself.
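Stated informally and only for orientation (the precise side conditions are part of the entry and of Leighton's proof), the Akra–Bazzi result concerns recurrences of the shape

    \[ T(x) = g(x) + \sum_{i=1}^{k} a_i \, T(b_i x + h_i(x)), \qquad a_i > 0, \; 0 < b_i < 1, \]

and yields the asymptotic solution

    \[ T(x) \in \Theta\!\left( x^{p} \Bigl( 1 + \int_{1}^{x} \frac{g(u)}{u^{p+1}} \, du \Bigr) \right), \qquad \text{where } \sum_{i=1}^{k} a_i b_i^{p} = 1, \]

of which the classical Master theorem is the single-term special case.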

Some proof methods that facilitate applying the Master theorem are also included. For a more detailed explanation of the formalisation and the proof methods, see the accompanying paper (publication forthcoming). notify = eberlm@in.tum.de [Dirichlet_Series] title = Dirichlet Series author = Manuel Eberl topic = Mathematics/Number theory date = 2017-10-12 notify = eberlm@in.tum.de abstract = This entry is a formalisation of much of Chapters 2, 3, and 11 of Apostol's “Introduction to Analytic Number Theory”. This includes:

  • Definitions and basic properties for several number-theoretic functions (Euler's φ, Möbius μ, Liouville's λ, the divisor function σ, von Mangoldt's Λ)
  • Executable code for most of these functions, the most efficient implementations using the factoring algorithm by Thiemann et al.
  • Dirichlet products and formal Dirichlet series
  • Analytic results connecting convergent formal Dirichlet series to complex functions
  • Euler product expansions
  • Asymptotic estimates of number-theoretic functions including the density of squarefree integers and the average number of divisors of a natural number
These results are useful as a basis for developing more number-theoretic results, such as the Prime Number Theorem. [Gauss_Sums] title = Gauss Sums and the Pólya–Vinogradov Inequality author = Rodrigo Raya , Manuel Eberl topic = Mathematics/Number theory date = 2019-12-10 notify = manuel.eberl@tum.de abstract =

This article provides a full formalisation of Chapter 8 of Apostol's Introduction to Analytic Number Theory. Subjects that are covered are:

  • periodic arithmetic functions and their finite Fourier series
  • (generalised) Ramanujan sums
  • Gauss sums and separable characters
  • induced moduli and primitive characters
  • the Pólya–Vinogradov inequality
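For orientation (editorial note), the headline inequality states that for every non-principal Dirichlet character χ modulo q the character sums are bounded uniformly in the length of the summation range:

    \[ \Bigl| \sum_{n = M + 1}^{M + N} \chi(n) \Bigr| = O\bigl( \sqrt{q} \, \log q \bigr) \quad \text{uniformly in } M, N. \]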
[Zeta_Function] title = The Hurwitz and Riemann ζ Functions author = Manuel Eberl topic = Mathematics/Number theory, Mathematics/Analysis date = 2017-10-12 notify = eberlm@in.tum.de abstract =

This entry builds upon the results about formal and analytic Dirichlet series to define the Hurwitz ζ function ζ(a,s) and, based on that, the Riemann ζ function ζ(s). This is done by first defining them for ℜ(s) > 1 and then successively extending the domain to the left using the Euler–MacLaurin formula.

Apart from the most basic facts such as analyticity, the following results are provided:

  • the Stieltjes constants and the Laurent expansion of ζ(s) at s = 1
  • the non-vanishing of ζ(s) for ℜ(s) ≥ 1
  • the relationship between ζ(a,s) and Γ
  • the special values at negative integers and positive even integers
  • Hurwitz's formula and the reflection formula for ζ(s)
  • the Hadjicostas–Chapman formula

The entry also contains Euler's analytic proof of the infinitude of primes, based on the fact that ζ(s) has a pole at s = 1.
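For orientation (editorial note), the defining series that serve as the starting point of this construction, valid for ℜ(s) > 1, are

    \[ \zeta(a, s) = \sum_{n=0}^{\infty} \frac{1}{(n + a)^{s}}, \qquad \zeta(s) = \zeta(1, s) = \sum_{n=1}^{\infty} \frac{1}{n^{s}}. \]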

[Linear_Recurrences] title = Linear Recurrences author = Manuel Eberl topic = Mathematics/Analysis date = 2017-10-12 notify = eberlm@in.tum.de abstract =

Linear recurrences with constant coefficients are an interesting class of recurrence equations that can be solved explicitly. The most famous example is certainly the Fibonacci sequence, with the equation f(n) = f(n-1) + f(n-2) and the quite non-obvious closed form (φ^n - (-φ)^(-n)) / √5, where φ is the golden ratio.

In this work, I build on existing tools in Isabelle – such as formal power series and polynomial factorisation algorithms – to develop a theory of these recurrences and derive a fully executable solver for them that can be exported to programming languages like Haskell.
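The following short Haskell sketch is an editorial illustration only (it is not the verified solver generated by this entry, and the names linrec and fibClosed are made up here): it evaluates a linear recurrence directly from its definition and compares the Fibonacci case against the closed form quoted above.

    -- Naive evaluation of x(n) = c1*x(n-1) + ... + ck*x(n-k) from initial values;
    -- deliberately simple (and exponential-time), it merely restates the recurrence.
    linrec :: [Integer] -> [Integer] -> Int -> Integer
    linrec cs inits = go
      where
        go n | n < length inits = inits !! n
             | otherwise        = sum [ c * go (n - 1 - i) | (i, c) <- zip [0 ..] cs ]

    -- Binet-style closed form for the Fibonacci case, as quoted in the abstract.
    fibClosed :: Int -> Double
    fibClosed n = (phi ^^ n - (-phi) ^^ (-n)) / sqrt 5
      where phi = (1 + sqrt 5) / 2

    -- ghci> [ linrec [1, 1] [0, 1] n | n <- [0 .. 10] ]
    -- [0,1,1,2,3,5,8,13,21,34,55]
    -- ghci> map (round . fibClosed) [0 .. 10] :: [Integer]
    -- [0,1,1,2,3,5,8,13,21,34,55]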

[Lambert_W] title = The Lambert W Function on the Reals author = Manuel Eberl topic = Mathematics/Analysis date = 2020-04-24 notify = eberlm@in.tum.de abstract =

The Lambert W function is a multi-valued function defined as the inverse function of x ↦ x e^x. Besides numerous applications in combinatorics, physics, and engineering, it also frequently occurs when solving equations containing both e^x and x, or both x and log x.

This article provides a definition of the two real-valued branches W_0(x) and W_-1(x) and proves various properties such as basic identities and inequalities, monotonicity, differentiability, asymptotic expansions, and the MacLaurin series of W_0(x) at x = 0.
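The following tiny Haskell snippet is an editorial illustration of the defining equation only (floating-point Newton iteration; it is entirely separate from the exact real-analysis development in the entry, and the name lambertW0 is made up here):

    -- Newton iteration for the principal branch W_0 on inputs x >= 0:
    -- solve w * exp w = x using f(w) = w*exp w - x and f'(w) = exp w * (w + 1).
    lambertW0 :: Double -> Double
    lambertW0 x = go (if x > 1 then log x else x)
      where
        go w | abs (w * exp w - x) < 1e-12 = w
             | otherwise = go (w - (w * exp w - x) / (exp w * (w + 1)))

    -- ghci> lambertW0 (exp 1)   -- W_0(e) = 1, because 1 * exp 1 = e
    -- 1.0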

+ [Cartan_FP] title = The Cartan Fixed Point Theorems author = Lawrence C. Paulson date = 2016-03-08 topic = Mathematics/Analysis abstract = The Cartan fixed point theorems concern the group of holomorphic automorphisms on a connected open set of Cn. Ciolli et al. have formalised the one-dimensional case of these theorems in HOL Light. This entry contains their proofs, ported to Isabelle/HOL. Thus it addresses the authors' remark that "it would be important to write a formal proof in a language that can be read by both humans and machines". notify = lp15@cam.ac.uk [Gauss_Jordan] title = Gauss-Jordan Algorithm and Its Applications author = Jose Divasón , Jesús Aransay topic = Computer science/Algorithms/Mathematical date = 2014-09-03 abstract = The Gauss-Jordan algorithm states that any matrix over a field can be transformed by means of elementary row operations to a matrix in reduced row echelon form. The formalization is based on the Rank Nullity Theorem entry of the AFP and on the HOL-Multivariate-Analysis session of Isabelle, where matrices are represented as functions over finite types. We have set up the code generator to make this representation executable. In order to improve the performance, a refinement to immutable arrays has been carried out. We have formalized some of the applications of the Gauss-Jordan algorithm. Thanks to this development, the following facts can be computed over matrices whose elements belong to a field: Ranks, Determinants, Inverses, Bases and dimensions and Solutions of systems of linear equations. Code can be exported to SML and Haskell. notify = jose.divasonm@unirioja.es, jesus-maria.aransay@unirioja.es [Echelon_Form] title = Echelon Form author = Jose Divasón , Jesús Aransay topic = Computer science/Algorithms/Mathematical, Mathematics/Algebra date = 2015-02-12 abstract = We formalize an algorithm to compute the Echelon Form of a matrix. We have proved its existence over Bézout domains and made it executable over Euclidean domains, such as the integer ring and the univariate polynomials over a field. This allows us to compute determinants, inverses and characteristic polynomials of matrices. The work is based on the HOL-Multivariate Analysis library, and on both the Gauss-Jordan and Cayley-Hamilton AFP entries. As a by-product, some algebraic structures have been implemented (principal ideal domains, Bézout domains...). The algorithm has been refined to immutable arrays and code can be generated to functional languages as well. notify = jose.divasonm@unirioja.es, jesus-maria.aransay@unirioja.es [QR_Decomposition] title = QR Decomposition author = Jose Divasón , Jesús Aransay topic = Computer science/Algorithms/Mathematical, Mathematics/Algebra date = 2015-02-12 abstract = QR decomposition is an algorithm to decompose a real matrix A into the product of two other matrices Q and R, where Q is orthogonal and R is invertible and upper triangular. The algorithm is useful for the least squares problem; i.e., the computation of the best approximation of an unsolvable system of linear equations. As a side-product, the Gram-Schmidt process has also been formalized. A refinement using immutable arrays is presented as well. The development relies, among others, on the AFP entry "Implementing field extensions of the form Q[sqrt(b)]" by René Thiemann, which allows execution of the algorithm using symbolic computations. Verified code can be generated and executed using floats as well. 
extra-history = Change history: [2015-06-18]: The second part of the Fundamental Theorem of Linear Algebra has been generalized to more general inner product spaces. notify = jose.divasonm@unirioja.es, jesus-maria.aransay@unirioja.es [Hermite] title = Hermite Normal Form author = Jose Divasón , Jesús Aransay topic = Computer science/Algorithms/Mathematical, Mathematics/Algebra date = 2015-07-07 abstract = Hermite Normal Form is a canonical matrix analogue of Reduced Echelon Form, but involving matrices over more general rings. In this work we formalise an algorithm to compute the Hermite Normal Form of a matrix by means of elementary row operations, taking advantage of the Echelon Form AFP entry. We have proven the correctness of such an algorithm and refined it to immutable arrays. Furthermore, we have also formalised the uniqueness of the Hermite Normal Form of a matrix. Code can be exported and some examples of execution involving integer matrices and polynomial matrices are presented as well. notify = jose.divasonm@unirioja.es, jesus-maria.aransay@unirioja.es [Imperative_Insertion_Sort] title = Imperative Insertion Sort author = Christian Sternagel date = 2014-09-25 topic = Computer science/Algorithms abstract = The insertion sort algorithm of Cormen et al. (Introduction to Algorithms) is expressed in Imperative HOL and proved to be correct and terminating. For this purpose we also provide a theory about imperative loop constructs with accompanying induction/invariant rules for proving partial and total correctness. Furthermore, the formalized algorithm is fit for code generation. notify = lp15@cam.ac.uk [Stream_Fusion_Code] title = Stream Fusion in HOL with Code Generation author = Andreas Lochbihler , Alexandra Maximova date = 2014-10-10 topic = Computer science/Functional programming abstract = Stream Fusion is a system for removing intermediate list data structures from functional programs, in particular Haskell. This entry adapts stream fusion to Isabelle/HOL and its code generator. We define stream types for finite and possibly infinite lists and stream versions for most of the fusible list functions in the theories List and Coinductive_List, and prove them correct with respect to the conversion functions between lists and streams. The Stream Fusion transformation itself is implemented as a simproc in the preprocessor of the code generator. [Brian Huffman's AFP entry formalises stream fusion in HOLCF for the domain of lazy lists to prove the GHC compiler rewrite rules correct. In contrast, this work enables Isabelle's code generator to perform stream fusion itself. To that end, it covers both finite and coinductive lists from the HOL library and the Coinductive entry. The fusible list functions require specification and proof principles different from Huffman's.] notify = mail@andreas-lochbihler.de [Case_Labeling] title = Generating Cases from Labeled Subgoals author = Lars Noschinski date = 2015-07-21 topic = Tools, Computer science/Programming languages/Misc abstract = Isabelle/Isar provides named cases to structure proofs. This article contains an implementation of a proof method casify, which can be used to easily extend proof tools with support for named cases. Such a proof tool must produce labeled subgoals, which are then interpreted by casify.

As examples, this work contains verification condition generators producing named cases for three languages: The Hoare language from HOL/Library, a monadic language for computations with failure (inspired by the AutoCorres tool), and a language of conditional expressions. These VCGs are demonstrated by a number of example programs. notify = noschinl@gmail.com [DPT-SAT-Solver] title = A Fast SAT Solver for Isabelle in Standard ML topic = Tools author = Armin Heller <> date = 2009-12-09 abstract = This contribution contains a fast SAT solver for Isabelle written in Standard ML. By loading the theory DPT_SAT_Solver, the SAT solver installs itself (under the name ``dptsat'') and certain Isabelle tools like Refute will start using it automatically. This is a port of the DPT (Decision Procedure Toolkit) SAT Solver written in OCaml. notify = jasmin.blanchette@gmail.com [Rep_Fin_Groups] title = Representations of Finite Groups topic = Mathematics/Algebra author = Jeremy Sylvestre date = 2015-08-12 abstract = We provide a formal framework for the theory of representations of finite groups, as modules over the group ring. Along the way, we develop the general theory of groups (relying on the group_add class for the basics), modules, and vector spaces, to the extent required for the theory of group representations. We then provide formal proofs of several important introductory theorems in the subject, including Maschke's theorem, Schur's lemma, and Frobenius reciprocity. We also prove that every irreducible representation is isomorphic to a submodule of the group ring, leading to the fact that for a finite group there are only finitely many isomorphism classes of irreducible representations. In all of this, no restriction is made on the characteristic of the ring or field of scalars until the definition of a group representation, and then the only restriction made is that the characteristic must not divide the order of the group. notify = jsylvest@ualberta.ca [Noninterference_Inductive_Unwinding] title = The Inductive Unwinding Theorem for CSP Noninterference Security topic = Computer science/Security author = Pasquale Noce date = 2015-08-18 abstract =

The necessary and sufficient condition for CSP noninterference security stated by the Ipurge Unwinding Theorem is expressed in terms of a pair of event lists varying over the set of process traces. This makes it unsuitable for the subsequent application of rule induction in the case of a process defined inductively, since rule induction applies to a single variable ranging over an inductively defined set.

Starting from the Ipurge Unwinding Theorem, this paper derives a necessary and sufficient condition for CSP noninterference security that involves a single event list varying over the set of process traces, and is thus suitable for rule induction; hence its name, Inductive Unwinding Theorem. Like the Ipurge Unwinding Theorem, the new theorem only requires considering individual accepted and refused events for each process trace, and applies to the general case of a possibly intransitive noninterference policy. Specific variants of this theorem are additionally proven for deterministic processes and trace set processes.

notify = pasquale.noce.lavoro@gmail.com [Password_Authentication_Protocol] title = Verification of a Diffie-Hellman Password-based Authentication Protocol by Extending the Inductive Method author = Pasquale Noce topic = Computer science/Security date = 2017-01-03 notify = pasquale.noce.lavoro@gmail.com abstract = This paper constructs a formal model of a Diffie-Hellman password-based authentication protocol between a user and a smart card, and proves its security. The protocol provides for the dispatch of the user's password to the smart card on a secure messaging channel established by means of Password Authenticated Connection Establishment (PACE), where the mapping method being used is Chip Authentication Mapping. By applying and suitably extending Paulson's Inductive Method, this paper proves that the protocol establishes trustworthy secure messaging channels, preserves the secrecy of users' passwords, and provides an effective mutual authentication service. What is more, these security properties turn out to hold independently of the secrecy of the PACE authentication key. [Jordan_Normal_Form] title = Matrices, Jordan Normal Forms, and Spectral Radius Theory topic = Mathematics/Algebra author = René Thiemann , Akihisa Yamada contributors = Alexander Bentkamp date = 2015-08-21 abstract =

Matrix interpretations are useful as measure functions in termination proving. In order to use these interpretations also for complexity analysis, the growth rate of matrix powers has to be examined. Here, we formalized a central result of spectral radius theory, namely that the growth rate is polynomially bounded if and only if the spectral radius of a matrix is at most one.
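
As an informal numerical illustration (a Python sketch using numpy, not part of the formalisation), the three qualitative regimes of the entries of the matrix powers can be observed directly: decay for spectral radius below one, polynomial growth at exactly one, and exponential growth above one.

  import numpy as np

  def spectral_radius(m):
      # largest absolute value of the (complex) eigenvalues
      return max(abs(ev) for ev in np.linalg.eigvals(m))

  examples = {
      "radius 0.5": np.array([[0.5, 1.0], [0.0, 0.5]]),  # powers tend to 0
      "radius 1.0": np.array([[1.0, 1.0], [0.0, 1.0]]),  # powers grow linearly
      "radius 1.1": np.array([[1.1, 1.0], [0.0, 1.1]]),  # powers grow exponentially
  }

  for name, m in examples.items():
      growth = [np.abs(np.linalg.matrix_power(m, n)).max() for n in (1, 10, 100)]
      print(name, "spectral radius =", round(spectral_radius(m), 2), growth)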

To formally prove this result we first studied the growth rates of matrices in Jordan normal form, and proved that every complex matrix has a Jordan normal form, using a constructive proof via Schur decomposition.

The whole development is based on a new abstract type for matrices, which is also executable by a suitable setup of the code generator. It completely subsumes our former AFP-entry on executable matrices, and its main advantage is its close connection to the HMA-representation which allowed us to easily adapt existing proofs on determinants.

All the results have been applied to improve CeTA, our certifier to validate termination and complexity proof certificates.

extra-history = Change history: [2016-01-07]: Added Schur-decomposition, Gram-Schmidt orthogonalization, uniqueness of Jordan normal forms
[2018-04-17]: Integrated lemmas from deep-learning AFP-entry of Alexander Bentkamp notify = rene.thiemann@uibk.ac.at, ayamada@trs.cm.is.nagoya-u.ac.jp [LTL_to_DRA] title = Converting Linear Temporal Logic to Deterministic (Generalized) Rabin Automata topic = Computer science/Automata and formal languages author = Salomon Sickert date = 2015-09-04 abstract = Recently, Javier Esparza and Jan Kretinsky proposed a new method directly translating linear temporal logic (LTL) formulas to deterministic (generalized) Rabin automata. Compared to the existing approaches of constructing a non-deterministic Buechi-automaton in the first step and then applying a determinization procedure (e.g. some variant of Safra's construction) in a second step, this new approach preserves a relation between the formula and the states of the resulting automaton. While the old approach produced a monolithic structure, the new method is compositional. Furthermore, in some cases the resulting automata are much smaller than the automata generated by existing approaches. In order to ensure the correctness of the construction, this entry contains a complete formalisation and verification of the translation. Furthermore, from this basis executable code is generated. extra-history = Change history: [2015-09-23]: Enable code export for the eager unfolding optimisation and reduce running time of the generated tool. Moreover, add support for the mlton SML compiler.
[2016-03-24]: Make use of the LTL entry and include the simplifier. notify = sickert@in.tum.de [Timed_Automata] title = Timed Automata author = Simon Wimmer date = 2016-03-08 topic = Computer science/Automata and formal languages abstract = Timed automata are a widely used formalism for modeling real-time systems, which is employed in a class of successful model checkers such as UPPAAL [LPY97], HyTech [HHWt97] or Kronos [Yov97]. This work formalizes the theory for the subclass of diagonal-free timed automata, which is sufficient to model many interesting problems. We first define the basic concepts and semantics of diagonal-free timed automata. Based on this, we prove two types of decidability results for the language emptiness problem. The first is the classic result of Alur and Dill [AD90, AD94], which uses a finite partitioning of the state space into so-called `regions`. Our second result focuses on an approach based on `Difference Bound Matrices (DBMs)`, which is practically used by model checkers. We prove the correctness of the basic forward analysis operations on DBMs. One of these operations is the Floyd-Warshall algorithm for the all-pairs shortest paths problem. To obtain a finite search space, a widening operation has to be used for this kind of analysis. We use Patricia Bouyer's [Bou04] approach to prove that this widening operation is correct in the sense that DBM-based forward analysis in combination with the widening operation also decides language emptiness. The interesting property of this proof is that the first decidability result is reused to obtain the second one. notify = wimmers@in.tum.de [Parity_Game] title = Positional Determinacy of Parity Games author = Christoph Dittmann date = 2015-11-02 topic = Mathematics/Games and economics, Mathematics/Graph theory abstract = We present a formalization of parity games (a two-player game on directed graphs) and a proof of their positional determinacy in Isabelle/HOL. This proof works for both finite and infinite games. notify = [Ergodic_Theory] title = Ergodic Theory author = Sebastien Gouezel date = 2015-12-01 topic = Mathematics/Probability theory abstract = Ergodic theory is the branch of mathematics that studies the behaviour of measure preserving transformations, in finite or infinite measure. It interacts both with probability theory (mainly through measure theory) and with geometry as a lot of interesting examples are of geometric origin. We implement the first definitions and theorems of ergodic theory, including notably the Poincaré recurrence theorem for finite measure preserving systems (together with the notion of conservativity in general), induced maps, Kac's theorem, Birkhoff theorem (arguably the most important theorem in ergodic theory), and variations around it such as conservativity of the corresponding skew product, or Atkinson lemma. notify = sebastien.gouezel@univ-rennes1.fr, hoelzl@in.tum.de [Latin_Square] title = Latin Square author = Alexander Bentkamp date = 2015-12-02 topic = Mathematics/Combinatorics abstract = A Latin Square is an n x n table filled with integers from 1 to n where each number appears exactly once in each row and each column. A Latin Rectangle is a partially filled n x n table with r filled rows and n-r empty rows, such that each number appears at most once in each row and each column. The main result of this theory is that any Latin Rectangle can be completed to a Latin Square.
notify = bentkamp@gmail.com [Deep_Learning] title = Expressiveness of Deep Learning author = Alexander Bentkamp date = 2016-11-10 topic = Computer science/Machine learning, Mathematics/Analysis abstract = Deep learning has had a profound impact on computer science in recent years, with applications to search engines, image recognition and language processing, bioinformatics, and more. Recently, Cohen et al. provided theoretical evidence for the superiority of deep learning over shallow learning. This formalization of their work simplifies and generalizes the original proof, while working around the limitations of the Isabelle type system. To support the formalization, I developed reusable libraries of formalized mathematics, including results about the matrix rank, the Lebesgue measure, and multivariate polynomials, as well as a library for tensor analysis. notify = bentkamp@gmail.com [Applicative_Lifting] title = Applicative Lifting author = Andreas Lochbihler , Joshua Schneider <> date = 2015-12-22 topic = Computer science/Functional programming abstract = Applicative functors augment computations with effects by lifting function application to types which model the effects. As the structure of the computation cannot depend on the effects, applicative expressions can be analysed statically. This allows us to lift universally quantified equations to the effectful types, as observed by Hinze. Thus, equational reasoning over effectful computations can be reduced to pure types.

This entry provides a package for registering applicative functors and two proof methods for lifting of equations over applicative functors. The first method normalises applicative expressions according to the laws of applicative functors. This way, equations whose two sides contain the same list of variables can be lifted to every applicative functor.

To lift larger classes of equations, the second method exploits a number of additional properties (e.g., commutativity of effects) provided the properties have been declared for the concrete applicative functor at hand upon registration.

We declare several types from the Isabelle library as applicative functors and illustrate the use of the methods with two examples: the lifting of the arithmetic type class hierarchy to streams and the verification of a relabelling function on binary trees. We also formalise and verify the normalisation algorithm used by the first proof method.
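
As a loose analogy in Python (streams modelled as generators; this is only an informal illustration, not the Isabelle mechanism), a pure equation such as commutativity of addition survives pointwise lifting to streams:

  from itertools import count, islice

  def lift2(f, xs, ys):
      # lift a pure binary operation pointwise to two streams
      return (f(x, y) for x, y in zip(xs, ys))

  nats = lambda: count(0)        # the stream 0, 1, 2, ...
  evens = lambda: count(0, 2)    # the stream 0, 2, 4, ...

  # the pure equation x + y = y + x lifts to the streams:
  lhs = list(islice(lift2(lambda x, y: x + y, nats(), evens()), 5))
  rhs = list(islice(lift2(lambda x, y: x + y, evens(), nats()), 5))
  assert lhs == rhs == [0, 3, 6, 9, 12]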

extra-history = Change history: [2016-03-03]: added formalisation of lifting with combinators
[2016-06-10]: implemented automatic derivation of lifted combinator reductions; support arbitrary lifted relations using relators; improved compatibility with locale interpretation (revision ec336f354f37)
notify = mail@andreas-lochbihler.de [Stern_Brocot] title = The Stern-Brocot Tree author = Peter Gammie , Andreas Lochbihler date = 2015-12-22 topic = Mathematics/Number theory abstract = The Stern-Brocot tree contains all rational numbers exactly once and in their lowest terms. We formalise the Stern-Brocot tree as a coinductive tree using recursive and iterative specifications, which we have proven equivalent, and show that it indeed contains all the numbers as stated. Following Hinze, we prove that the Stern-Brocot tree can be linearised looplessly into Stern's diatonic sequence (also known as Dijkstra's fusc function) and that it is a permutation of the Bird tree.
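
As an informal illustration in Python (not taken from the entry), the fusc function and the enumeration of the rationals induced by consecutive values:

  from functools import lru_cache

  @lru_cache(maxsize=None)
  def fusc(n):
      # Stern's diatonic sequence / Dijkstra's fusc function
      if n < 2:
          return n
      if n % 2 == 0:
          return fusc(n // 2)
      return fusc(n // 2) + fusc(n // 2 + 1)

  # consecutive quotients fusc(n)/fusc(n+1) run through the positive
  # rationals, each exactly once and in lowest terms
  print(["%d/%d" % (fusc(n), fusc(n + 1)) for n in range(1, 9)])
  # ['1/1', '1/2', '2/1', '1/3', '3/2', '2/3', '3/1', '1/4']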

The reasoning stays at an abstract level by appealing to the uniqueness of solutions of guarded recursive equations and lifting algebraic laws point-wise to trees and streams using applicative functors.

notify = mail@andreas-lochbihler.de [Algebraic_Numbers] title = Algebraic Numbers in Isabelle/HOL topic = Mathematics/Algebra author = René Thiemann , Akihisa Yamada , Sebastiaan Joosten date = 2015-12-22 abstract = Based on existing libraries for matrices, factorization of rational polynomials, and Sturm's theorem, we formalized algebraic numbers in Isabelle/HOL. Our development serves as an implementation for real and complex numbers, and it allows us to compute roots and completely factorize real and complex polynomials, provided that all coefficients are rational numbers. Moreover, we provide two implementations to display algebraic numbers, an injective and expensive one, and a faster but approximative version.

To this end, we mechanized several results on resultants, which also required us to prove that polynomials over a unique factorization domain again form a unique factorization domain.

extra-history = Change history: [2016-01-29]: Split off Polynomial Interpolation and Polynomial Factorization
[2017-04-16]: Use certified Berlekamp-Zassenhaus factorization, use subresultant algorithm for computing resultants, improved bisection algorithm notify = rene.thiemann@uibk.ac.at, ayamada@trs.cm.is.nagoya-u.ac.jp, sebastiaan.joosten@uibk.ac.at [Polynomial_Interpolation] title = Polynomial Interpolation topic = Mathematics/Algebra author = René Thiemann , Akihisa Yamada date = 2016-01-29 abstract = We formalized three algorithms for polynomial interpolation over arbitrary fields: Lagrange's explicit expression, the recursive algorithm of Neville and Aitken, and the Newton interpolation in combination with an efficient implementation of divided differences. Variants of these algorithms for integer polynomials are also available, where sometimes the interpolation can fail; e.g., there is no linear integer polynomial p such that p(0) = 0 and p(2) = 1. Moreover, for the Newton interpolation for integer polynomials, we proved that all intermediate results that are computed during the algorithm must be integers. This admits an early failure detection in the implementation. Finally, we proved the uniqueness of polynomial interpolation.
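
A small Python sketch of the Newton scheme with divided differences (an illustration of the textbook algorithm only, not of the formalised development):

  def divided_differences(xs, ys):
      # Newton coefficients [f[x0], f[x0,x1], ..., f[x0,...,xn]], computed in place
      coeffs = list(ys)
      for j in range(1, len(xs)):
          for i in range(len(xs) - 1, j - 1, -1):
              coeffs[i] = (coeffs[i] - coeffs[i - 1]) / (xs[i] - xs[i - j])
      return coeffs

  def newton_eval(xs, coeffs, x):
      # Horner-like evaluation of the Newton form at x
      result = coeffs[-1]
      for i in range(len(coeffs) - 2, -1, -1):
          result = result * (x - xs[i]) + coeffs[i]
      return result

  xs, ys = [0, 1, 2, 3], [1, 2, 5, 10]   # sample points of x^2 + 1
  cs = divided_differences(xs, ys)       # here the intermediate values are integers
  assert all(newton_eval(xs, cs, x) == y for x, y in zip(xs, ys))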

The development also contains improved code equations to speed up the division of integers in target languages. notify = rene.thiemann@uibk.ac.at, ayamada@trs.cm.is.nagoya-u.ac.jp [Polynomial_Factorization] title = Polynomial Factorization topic = Mathematics/Algebra author = René Thiemann , Akihisa Yamada date = 2016-01-29 abstract = Based on existing libraries for polynomial interpolation and matrices, we formalized several factorization algorithms for polynomials, including Kronecker's algorithm for integer polynomials, Yun's square-free factorization algorithm for field polynomials, and Berlekamp's algorithm for polynomials over finite fields. By combining the last one with Hensel's lifting, we derive an efficient factorization algorithm for the integer polynomials, which is then lifted for rational polynomials by mechanizing Gauss' lemma. Finally, we assembled a combined factorization algorithm for rational polynomials, which combines all the mentioned algorithms and additionally uses the explicit formula for roots of quadratic polynomials and a rational root test.

As side products, we developed division algorithms for polynomials over integral domains, as well as primality-testing and prime-factorization algorithms for integers. notify = rene.thiemann@uibk.ac.at, ayamada@trs.cm.is.nagoya-u.ac.jp [Perron_Frobenius] title = Perron-Frobenius Theorem for Spectral Radius Analysis author = Jose Divasón , Ondřej Kunčar , René Thiemann , Akihisa Yamada notify = rene.thiemann@uibk.ac.at date = 2016-05-20 topic = Mathematics/Algebra abstract =

The spectral radius of a matrix A is the maximum norm of all eigenvalues of A. In previous work we already formalized that for a complex matrix A, the values in $A^n$ grow polynomially in n if and only if the spectral radius is at most one. One problem with the above characterization is the determination of all complex eigenvalues. In case A contains only non-negative real values, a simplification is possible with the help of the Perron–Frobenius theorem, which tells us that it suffices to consider only the real eigenvalues of A, i.e., applying Sturm's method can decide the polynomial growth of $A^n$.

We formalize the Perron–Frobenius theorem based on a proof via Brouwer's fixpoint theorem, which is available in the HOL multivariate analysis (HMA) library. Since the results on the spectral radius are based on matrices in the Jordan normal form (JNF) library, we further develop a connection which allows us to easily transfer theorems between HMA and JNF. With this connection we derive the combined result: if A is a non-negative real matrix, and no real eigenvalue of A is strictly larger than one, then $A^n$ is polynomially bounded in n.

extra-history = Change history: [2017-10-18]: added Perron-Frobenius theorem for irreducible matrices with generalization (revision bda1f1ce8a1c)
[2018-05-17]: prove conjecture of CPP'18 paper: Jordan blocks of spectral radius have maximum size (revision ffdb3794e5d5) [Stochastic_Matrices] title = Stochastic Matrices and the Perron-Frobenius Theorem author = René Thiemann topic = Mathematics/Algebra, Computer science/Automata and formal languages date = 2017-11-22 notify = rene.thiemann@uibk.ac.at abstract = Stochastic matrices are a convenient way to model discrete-time and finite state Markov chains. The Perron–Frobenius theorem tells us something about the existence and uniqueness of non-negative eigenvectors of a stochastic matrix. In this entry, we formalize stochastic matrices, link the formalization to the existing AFP-entry on Markov chains, and apply the Perron–Frobenius theorem to prove that stationary distributions always exist, and they are unique if the stochastic matrix is irreducible. [Formal_SSA] title = Verified Construction of Static Single Assignment Form author = Sebastian Ullrich , Denis Lohner date = 2016-02-05 topic = Computer science/Algorithms, Computer science/Programming languages/Transformations abstract =

We define a functional variant of the static single assignment (SSA) form construction algorithm described by Braun et al., which combines simplicity and efficiency. The definition is based on a general, abstract control flow graph representation using Isabelle locales.

We prove that the algorithm's output is semantically equivalent to the input according to a small-step semantics, and that it is in minimal SSA form for the common special case of reducible inputs. We then show the satisfiability of the locale assumptions by giving instantiations for a simple While language.

Furthermore, we use a generic instantiation based on typedefs in order to extract OCaml code and replace the unverified SSA construction algorithm of the CompCertSSA project with it.

A more detailed description of the verified SSA construction can be found in the paper Verified Construction of Static Single Assignment Form, CC 2016.

notify = denis.lohner@kit.edu [Minimal_SSA] title = Minimal Static Single Assignment Form author = Max Wagner , Denis Lohner topic = Computer science/Programming languages/Transformations date = 2017-01-17 notify = denis.lohner@kit.edu abstract =

This formalization is an extension to "Verified Construction of Static Single Assignment Form". In their work, the authors have shown that Braun et al.'s static single assignment (SSA) construction algorithm produces minimal SSA form for input programs with a reducible control flow graph (CFG). However, Braun et al. also proposed an extension to their algorithm that they claim produces minimal SSA form even for irreducible CFGs.
In this formalization we support that claim by giving a mechanized proof.

As the extension of Braun et al.'s algorithm aims at removing so-called redundant strongly connected components of phi functions, we show that this suffices to guarantee minimality according to Cytron et al.

[PropResPI] title = Propositional Resolution and Prime Implicates Generation author = Nicolas Peltier notify = Nicolas.Peltier@imag.fr date = 2016-03-11 topic = Logic/General logic/Mechanization of proofs abstract = We provide formal proofs in Isabelle-HOL (using mostly structured Isar proofs) of the soundness and completeness of the Resolution rule in propositional logic. The completeness proofs take into account the usual redundancy elimination rules (tautology elimination and subsumption), and several refinements of the Resolution rule are considered: ordered resolution (with selection functions), positive and negative resolution, semantic resolution and unit resolution (the latter refinement is complete only for clause sets that are Horn- renamable). We also define a concrete procedure for computing saturated sets and establish its soundness and completeness. The clause sets are not assumed to be finite, so that the results can be applied to formulas obtained by grounding sets of first-order clauses (however, a total ordering among atoms is assumed to be given). Next, we show that the unrestricted Resolution rule is deductive- complete, in the sense that it is able to generate all (prime) implicates of any set of propositional clauses (i.e., all entailment- minimal, non-valid, clausal consequences of the considered set). The generation of prime implicates is an important problem, with many applications in artificial intelligence and verification (for abductive reasoning, knowledge compilation, diagnosis, debugging etc.). We also show that implicates can be computed in an incremental way, by fixing an ordering among all the atoms in the considered sets and resolving upon these atoms one by one in the considered order (with no backtracking). This feature is critical for the efficient computation of prime implicates. Building on these results, we provide a procedure for computing such implicates and establish its soundness and completeness. [SuperCalc] title = A Variant of the Superposition Calculus author = Nicolas Peltier notify = Nicolas.Peltier@imag.fr date = 2016-09-06 topic = Logic/Proof theory abstract = We provide a formalization of a variant of the superposition calculus, together with formal proofs of soundness and refutational completeness (w.r.t. the usual redundancy criteria based on clause ordering). This version of the calculus uses all the standard restrictions of the superposition rules, together with the following refinement, inspired by the basic superposition calculus: each clause is associated with a set of terms which are assumed to be in normal form -- thus any application of the replacement rule on these terms is blocked. The set is initially empty and terms may be added or removed at each inference step. The set of terms that are assumed to be in normal form includes any term introduced by previous unifiers as well as any term occurring in the parent clauses at a position that is smaller (according to some given ordering on positions) than a previously replaced term. The standard superposition calculus corresponds to the case where the set of irreducible terms is always empty. [Nominal2] title = Nominal 2 author = Christian Urban , Stefan Berghofer , Cezary Kaliszyk date = 2013-02-21 topic = Tools abstract =

Dealing with binders, renaming of bound variables, capture-avoiding substitution, etc., is very often a major problem in formal proofs, especially in proofs by structural and rule induction. Nominal Isabelle is designed to make such proofs easy to formalise: it provides an infrastructure for declaring nominal datatypes (that is alpha-equivalence classes) and for defining functions over them by structural recursion. It also provides induction principles that have Barendregt’s variable convention already built in.

This entry can be used as a more advanced replacement for HOL/Nominal in the Isabelle distribution.

notify = christian.urban@kcl.ac.uk [First_Welfare_Theorem] title = Microeconomics and the First Welfare Theorem author = Julian Parsert , Cezary Kaliszyk topic = Mathematics/Games and economics license = LGPL date = 2017-09-01 notify = julian.parsert@uibk.ac.at, cezary.kaliszyk@uibk.ac.at abstract = Economic activity has always been a fundamental part of society. Due to modern day politics, economic theory has gained even more influence on our lives. Thus we want models and theories to be as precise as possible. This can be achieved using certification with the help of formal proof technology. Hence we will use Isabelle/HOL to construct two economic models, that of the pure exchange economy and a version of the Arrow-Debreu Model. We will prove that the First Theorem of Welfare Economics holds within both. The theorem is the mathematical formulation of Adam Smith's famous invisible hand and states that a group of self-interested and rational actors will eventually achieve an efficient allocation of goods and services. extra-history = Change history: [2018-06-17]: Added some lemmas and a theory file, also introduced Microeconomics folder.
[Noninterference_Sequential_Composition] title = Conservation of CSP Noninterference Security under Sequential Composition author = Pasquale Noce date = 2016-04-26 topic = Computer science/Security, Computer science/Concurrency/Process calculi abstract =

In his outstanding work on Communicating Sequential Processes, Hoare has defined two fundamental binary operations that allow the composition of the input processes into another, typically more complex, process: sequential composition and concurrent composition. In particular, the output of the former operation is a process that initially behaves like the first operand, and then like the second operand once the execution of the first one has terminated successfully, provided that it does.

This paper formalizes Hoare's definition of sequential composition and proves, in the general case of a possibly intransitive policy, that CSP noninterference security is conserved under this operation, provided that successful termination cannot be affected by confidential events and cannot occur as an alternative to other events in the traces of the first operand. Both of these assumptions are shown, by means of counterexamples, to be necessary for the theorem to hold.

notify = pasquale.noce.lavoro@gmail.com [Noninterference_Concurrent_Composition] title = Conservation of CSP Noninterference Security under Concurrent Composition author = Pasquale Noce notify = pasquale.noce.lavoro@gmail.com date = 2016-06-13 topic = Computer science/Security, Computer science/Concurrency/Process calculi abstract =

In his outstanding work on Communicating Sequential Processes, Hoare has defined two fundamental binary operations that allow the composition of the input processes into another, typically more complex, process: sequential composition and concurrent composition. In particular, the output of the latter operation is a process in which any event not shared by both operands can occur whenever the operand that admits the event can engage in it, whereas any event shared by both operands can occur just in case both can engage in it.

This paper formalizes Hoare's definition of concurrent composition and proves, in the general case of a possibly intransitive policy, that CSP noninterference security is conserved under this operation. This result, along with the previous analogous one concerning sequential composition, enables the construction of more and more complex processes enforcing noninterference security by composing, sequentially or concurrently, simpler secure processes, whose security can in turn be proven using either the definition of security, or unwinding theorems.

[ROBDD] title = Algorithms for Reduced Ordered Binary Decision Diagrams author = Julius Michaelis , Maximilian Haslbeck , Peter Lammich , Lars Hupel date = 2016-04-27 topic = Computer science/Algorithms, Computer science/Data structures abstract = We present a verified and executable implementation of ROBDDs in Isabelle/HOL. Our implementation relates pointer-based computation in the Heap monad to operations on an abstract definition of boolean functions. Internally, we implemented the if-then-else combinator in a recursive fashion, following the Shannon decomposition of the argument functions. The implementation mixes and adapts known techniques and is built with efficiency in mind. notify = bdd@liftm.de, haslbecm@in.tum.de [No_FTL_observers] title = No Faster-Than-Light Observers author = Mike Stannett , István Németi date = 2016-04-28 topic = Mathematics/Physics abstract = We provide a formal proof within First Order Relativity Theory that no observer can travel faster than the speed of light. Originally reported in Stannett & Németi (2014) "Using Isabelle/HOL to verify first-order relativity theory", Journal of Automated Reasoning 52(4), pp. 361-378. notify = m.stannett@sheffield.ac.uk [Groebner_Bases] title = Gröbner Bases Theory author = Fabian Immler , Alexander Maletzky date = 2016-05-02 topic = Mathematics/Algebra, Computer science/Algorithms/Mathematical abstract = This formalization is concerned with the theory of Gröbner bases in (commutative) multivariate polynomial rings over fields, originally developed by Buchberger in his 1965 PhD thesis. Apart from the statement and proof of the main theorem of the theory, the formalization also implements Buchberger's algorithm for actually computing Gröbner bases as a tail-recursive function, thus allowing us to effectively decide ideal membership in finitely generated polynomial ideals. Furthermore, all functions can be executed on a concrete representation of multivariate polynomials as association lists. extra-history = Change history: [2019-04-18]: Specialized Gröbner bases to less abstract representation of polynomials, where power-products are represented as polynomial mappings.
notify = alexander.maletzky@risc.jku.at [Nullstellensatz] title = Hilbert's Nullstellensatz author = Alexander Maletzky topic = Mathematics/Algebra, Mathematics/Geometry date = 2019-06-16 notify = alexander.maletzky@risc-software.at abstract = This entry formalizes Hilbert's Nullstellensatz, an important theorem in algebraic geometry that can be viewed as the generalization of the Fundamental Theorem of Algebra to multivariate polynomials: If a set of (multivariate) polynomials over an algebraically closed field has no common zero, then the ideal it generates is the entire polynomial ring. The formalization proves several equivalent versions of this celebrated theorem: the weak Nullstellensatz, the strong Nullstellensatz (connecting algebraic varieties and radical ideals), and the field-theoretic Nullstellensatz. The formalization follows Chapter 4.1. of Ideals, Varieties, and Algorithms by Cox, Little and O'Shea. [Bell_Numbers_Spivey] title = Spivey's Generalized Recurrence for Bell Numbers author = Lukas Bulwahn date = 2016-05-04 topic = Mathematics/Combinatorics abstract = This entry defines the Bell numbers as the cardinality of set partitions for a carrier set of given size, and derives Spivey's generalized recurrence relation for Bell numbers following his elegant and intuitive combinatorial proof.
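
As an informal cross-check in Python (using the textbook form of Spivey's recurrence, B(n+m) = sum over k and j of C(m,j) * S(n,k) * k^(m-j) * B(j), rather than the formalised definitions):

  from functools import lru_cache
  from math import comb

  @lru_cache(maxsize=None)
  def stirling2(n, k):
      # Stirling numbers of the second kind S(n, k)
      if n == k == 0:
          return 1
      if n == 0 or k == 0:
          return 0
      return k * stirling2(n - 1, k) + stirling2(n - 1, k - 1)

  def bell(n):
      # Bell number = number of set partitions of an n-element set
      return sum(stirling2(n, k) for k in range(n + 1))

  def spivey(n, m):
      # right-hand side of Spivey's generalized recurrence for B(n + m)
      return sum(comb(m, j) * stirling2(n, k) * k ** (m - j) * bell(j)
                 for k in range(n + 1) for j in range(m + 1))

  assert all(spivey(n, m) == bell(n + m) for n in range(6) for m in range(6))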

As the set construction for the combinatorial proof requires construction of three intermediate structures, the main difficulty of the formalization is handling the overall combinatorial argument in a structured way. The introduced proof structure allows us to compose the combinatorial argument from its subparts, and helps to keep track of how the detailed proof steps are related to the overall argument. To obtain this structure, this entry uses set monad notation for the set construction's definition, introduces suitable predicates and rules, and follows a repeating structure in its Isar proof. notify = lukas.bulwahn@gmail.com [Randomised_Social_Choice] title = Randomised Social Choice Theory author = Manuel Eberl date = 2016-05-05 topic = Mathematics/Games and economics abstract = This work contains a formalisation of basic Randomised Social Choice, including Stochastic Dominance and Social Decision Schemes (SDSs) along with some of their most important properties (Anonymity, Neutrality, ex-post- and SD-Efficiency, SD-Strategy-Proofness) and two particular SDSs – Random Dictatorship and Random Serial Dictatorship (with proofs of the properties that they satisfy). Many important properties of these concepts are also proven – such as the two equivalent characterisations of Stochastic Dominance and the fact that SD-efficiency of a lottery only depends on the support. The entry also provides convenient commands to define Preference Profiles, prove their well-formedness, and automatically derive restrictions that sufficiently nice SDSs need to satisfy on the defined profiles. Currently, the formalisation focuses on weak preferences and Stochastic Dominance, but it should be easy to extend it to other domains – such as strict preferences – or other lottery extensions – such as Bilinear Dominance or Pairwise Comparison. notify = eberlm@in.tum.de [SDS_Impossibility] title = The Incompatibility of SD-Efficiency and SD-Strategy-Proofness author = Manuel Eberl date = 2016-05-04 topic = Mathematics/Games and economics abstract = This formalisation contains the proof that there is no anonymous and neutral Social Decision Scheme for at least four voters and alternatives that fulfils both SD-Efficiency and SD-Strategy-Proofness. The proof is a fully structured and quasi-human-readable one. It was derived from the (unstructured) SMT proof of the case for exactly four voters and alternatives by Brandl et al. Their proof relies on an unverified translation of the original problem to SMT, and the proof that lifts the argument for exactly four voters and alternatives to the general case is also not machine-checked. In this Isabelle proof, on the other hand, all of these steps are fully proven and machine-checked. This is particularly important seeing as a previously published informal proof of a weaker statement contained a mistake in precisely this lifting step. notify = eberlm@in.tum.de [Median_Of_Medians_Selection] title = The Median-of-Medians Selection Algorithm author = Manuel Eberl topic = Computer science/Algorithms date = 2017-12-21 notify = eberlm@in.tum.de abstract =

This entry provides an executable functional implementation of the Median-of-Medians algorithm for selecting the k-th smallest element of an unsorted list deterministically in linear time. The size bounds for the recursive call that lead to the linear upper bound on the run-time of the algorithm are also proven.
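
A compact, unoptimised Python sketch of the underlying selection scheme (groups of five, pivot chosen as the median of the group medians); it illustrates the algorithmic idea only and is unrelated to the verified implementation:

  def select(xs, k):
      # k-th smallest element (0-indexed) of the list xs, deterministically
      if len(xs) <= 5:
          return sorted(xs)[k]
      groups = [sorted(xs[i:i + 5]) for i in range(0, len(xs), 5)]
      medians = [g[len(g) // 2] for g in groups]
      pivot = select(medians, len(medians) // 2)
      less = [x for x in xs if x < pivot]
      equal = [x for x in xs if x == pivot]
      greater = [x for x in xs if x > pivot]
      if k < len(less):
          return select(less, k)
      if k < len(less) + len(equal):
          return pivot
      return select(greater, k - len(less) - len(equal))

  assert select([9, 1, 7, 3, 8, 5, 2, 6, 4, 0], 4) == 4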

[Mason_Stothers] title = The Mason–Stothers Theorem author = Manuel Eberl topic = Mathematics/Algebra date = 2017-12-21 notify = eberlm@in.tum.de abstract =

This article provides a formalisation of Snyder’s simple and elegant proof of the Mason–Stothers theorem, which is the polynomial analogue of the famous abc Conjecture for integers. Remarkably, Snyder found this very elegant proof when he was still a high-school student.

In short, the statement of the theorem is that three non-zero coprime polynomials A, B, C over a field which sum to 0 and do not all have vanishing derivatives fulfil max{deg(A), deg(B), deg(C)} < deg(rad(ABC)), where rad(P) denotes the radical of P, i.e. the product of all distinct irreducible factors of P.

This theorem also implies a kind of polynomial analogue of Fermat’s Last Theorem for polynomials: except for trivial cases, $A^n + B^n + C^n = 0$ implies $n \leq 2$ for coprime polynomials A, B, C over a field.
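
For a standard small instance (not taken from the entry): take $A = x^2 + 2x$, $B = 1$ and $C = -(x+1)^2$ over the rationals. Then $A + B + C = 0$, the three polynomials are pairwise coprime and not all of their derivatives vanish, $\max\{\deg A, \deg B, \deg C\} = 2$, and $\mathrm{rad}(ABC) = x(x+1)(x+2)$ has degree 3, so the inequality of the theorem reads $2 < 3$.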

[FLP] title = A Constructive Proof for FLP author = Benjamin Bisping , Paul-David Brodmann , Tim Jungnickel , Christina Rickmann , Henning Seidler , Anke Stüber , Arno Wilhelm-Weidner , Kirstin Peters , Uwe Nestmann date = 2016-05-18 topic = Computer science/Concurrency abstract = The impossibility of distributed consensus with one faulty process is a result with important consequences for real world distributed systems e.g., commits in replicated databases. Since proofs are not immune to faults and even plausible proofs with a profound formalism can conclude wrong results, we validate the fundamental result named FLP after Fischer, Lynch and Paterson. We present a formalization of distributed systems and the aforementioned consensus problem. Our proof is based on Hagen Völzer's paper "A constructive proof for FLP". In addition to the enhanced confidence in the validity of Völzer's proof, we contribute the missing gaps to show the correctness in Isabelle/HOL. We clarify the proof details and even prove fairness of the infinite execution that contradicts consensus. Our Isabelle formalization can also be reused for further proofs of properties of distributed systems. notify = henning.seidler@mailbox.tu-berlin.de [IMAP-CRDT] title = The IMAP CmRDT author = Tim Jungnickel , Lennart Oldenburg <>, Matthias Loibl <> topic = Computer science/Algorithms/Distributed, Computer science/Data structures date = 2017-11-09 notify = tim.jungnickel@tu-berlin.de abstract = We provide our Isabelle/HOL formalization of a Conflict-free Replicated Datatype for Internet Message Access Protocol commands. We show that Strong Eventual Consistency (SEC) is guaranteed by proving the commutativity of concurrent operations. We base our formalization on the recently proposed "framework for establishing Strong Eventual Consistency for Conflict-free Replicated Datatypes" (AFP.CRDT) from Gomes et al. Hence, we provide an additional example of how the recently proposed framework can be used to design and prove CRDTs. [Incredible_Proof_Machine] title = The meta theory of the Incredible Proof Machine author = Joachim Breitner , Denis Lohner date = 2016-05-20 topic = Logic/Proof theory abstract = The Incredible Proof Machine is an interactive visual theorem prover which represents proofs as port graphs. We model this proof representation in Isabelle, and prove that it is just as powerful as natural deduction. notify = mail@joachim-breitner.de [Word_Lib] title = Finite Machine Word Library author = Joel Beeren<>, Matthew Fernandez<>, Xin Gao<>, Gerwin Klein , Rafal Kolanski<>, Japheth Lim<>, Corey Lewis<>, Daniel Matichuk<>, Thomas Sewell<> notify = kleing@unsw.edu.au date = 2016-06-09 topic = Computer science/Data structures abstract = This entry contains an extension to the Isabelle library for fixed-width machine words. In particular, the entry adds quickcheck setup for words, printing as hexadecimals, additional operations, reasoning about alignment, signed words, enumerations of words, normalisation of word numerals, and an extensive library of properties about generic fixed-width words, as well as an instantiation of many of these to the commonly used 32 and 64-bit bases. [Catalan_Numbers] title = Catalan Numbers author = Manuel Eberl notify = eberlm@in.tum.de date = 2016-06-21 topic = Mathematics/Combinatorics abstract =

In this work, we define the Catalan numbers $C_n$ and prove several equivalent definitions (including some closed-form formulae). We also show one of their applications (counting the number of binary trees of size n), prove the asymptotic growth approximation $C_n \sim 4^n / (\sqrt{\pi} \cdot n^{1.5})$, and provide reasonably efficient executable code to compute them.
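
As an informal numerical illustration in Python (not part of the entry), comparing the closed form with the asymptotic estimate:

  from math import comb, pi, sqrt

  def catalan(n):
      # closed form C_n = binomial(2n, n) / (n + 1)
      return comb(2 * n, n) // (n + 1)

  # C_0 .. C_5
  assert [catalan(n) for n in range(6)] == [1, 1, 2, 5, 14, 42]

  # the asymptotic estimate C_n ~ 4^n / (sqrt(pi) * n^1.5)
  for n in (10, 50, 200):
      approx = 4 ** n / (sqrt(pi) * n ** 1.5)
      print(n, approx / catalan(n))   # the ratio tends to 1 as n grows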

The derivation of the closed-form formulae uses algebraic manipulations of the ordinary generating function of the Catalan numbers, and the asymptotic approximation is then done using generalised binomial coefficients and the Gamma function. Thanks to these highly non-elementary mathematical tools, the proofs are very short and simple.

[Fisher_Yates] title = Fisher–Yates shuffle author = Manuel Eberl notify = eberlm@in.tum.de date = 2016-09-30 topic = Computer science/Algorithms abstract =

This work defines and proves the correctness of the Fisher–Yates algorithm for shuffling – i.e. producing a random permutation – of a list. The algorithm proceeds by traversing the list and in each step swapping the current element with a random element from the remaining list.
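
A minimal Python rendering of exactly this step (the entry itself reasons about probability mass functions rather than a pseudo-random number generator):

  import random

  def fisher_yates(xs):
      # shuffle a copy of xs: at position i, swap in a uniformly random
      # element from the not-yet-fixed suffix xs[i:]
      xs = list(xs)
      for i in range(len(xs) - 1):
          j = random.randrange(i, len(xs))
          xs[i], xs[j] = xs[j], xs[i]
      return xs

  print(fisher_yates(range(10)))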

[Bertrands_Postulate] title = Bertrand's postulate author = Julian Biendarra<>, Manuel Eberl contributors = Lawrence C. Paulson topic = Mathematics/Number theory date = 2017-01-17 notify = eberlm@in.tum.de abstract =

Bertrand's postulate is an early result on the distribution of prime numbers: For every positive integer n, there exists a prime number that lies strictly between n and 2n. The proof is ported from John Harrison's formalisation in HOL Light. It proceeds by first showing that the property is true for all n greater than or equal to 600 and then showing that it also holds for all n below 600 by case distinction.
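
The finite part of such an argument is easy to replay computationally; the following Python check (an illustration only, unrelated to the ported proof) confirms the claim for all n with 2 ≤ n < 600:

  def is_prime(m):
      return m > 1 and all(m % d for d in range(2, int(m ** 0.5) + 1))

  # a prime strictly between n and 2n, for the finitely many cases that the
  # formal proof handles by case distinction
  for n in range(2, 600):
      assert any(is_prime(p) for p in range(n + 1, 2 * n)), n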

[Rewriting_Z] title = The Z Property author = Bertram Felgenhauer<>, Julian Nagele<>, Vincent van Oostrom<>, Christian Sternagel notify = bertram.felgenhauer@uibk.ac.at, julian.nagele@uibk.ac.at, c.sternagel@gmail.com date = 2016-06-30 topic = Logic/Rewriting abstract = We formalize the Z property introduced by Dehornoy and van Oostrom. First we show that for any abstract rewrite system, Z implies confluence. Then we give two examples of proofs using Z: confluence of lambda-calculus with respect to beta-reduction and confluence of combinatory logic. [Resolution_FOL] title = The Resolution Calculus for First-Order Logic author = Anders Schlichtkrull notify = andschl@dtu.dk date = 2016-06-30 topic = Logic/General logic/Mechanization of proofs abstract = This theory is a formalization of the resolution calculus for first-order logic. It is proven sound and complete. The soundness proof uses the substitution lemma, which shows a correspondence between substitutions and updates to an environment. The completeness proof uses semantic trees, i.e. trees whose paths are partial Herbrand interpretations. It employs Herbrand's theorem in a formulation which states that an unsatisfiable set of clauses has a finite closed semantic tree. It also uses the lifting lemma which lifts resolution derivation steps from the ground world up to the first-order world. The theory is presented in a paper in the Journal of Automated Reasoning [Sch18] which extends a paper presented at the International Conference on Interactive Theorem Proving [Sch16]. An earlier version was presented in an MSc thesis [Sch15]. The formalization mostly follows textbooks by Ben-Ari [BA12], Chang and Lee [CL73], and Leitsch [Lei97]. The theory is part of the IsaFoL project [IsaFoL].

[Sch18] Anders Schlichtkrull. "Formalization of the Resolution Calculus for First-Order Logic". Journal of Automated Reasoning, 2018.
[Sch16] Anders Schlichtkrull. "Formalization of the Resolution Calculus for First-Order Logic". In: ITP 2016. Vol. 9807. LNCS. Springer, 2016.
[Sch15] Anders Schlichtkrull. "Formalization of Resolution Calculus in Isabelle". https://people.compute.dtu.dk/andschl/Thesis.pdf. MSc thesis. Technical University of Denmark, 2015.
[BA12] Mordechai Ben-Ari. Mathematical Logic for Computer Science. 3rd. Springer, 2012.
[CL73] Chin-Liang Chang and Richard Char-Tung Lee. Symbolic Logic and Mechanical Theorem Proving. 1st. Academic Press, Inc., 1973.
[Lei97] Alexander Leitsch. The Resolution Calculus. Texts in theoretical computer science. Springer, 1997.
[IsaFoL] IsaFoL authors. IsaFoL: Isabelle Formalization of Logic. https://bitbucket.org/jasmin_blanchette/isafol. extra-history = Change history: [2018-01-24]: added several new versions of the soundness and completeness theorems as described in the paper [Sch18].
[2018-03-20]: added a concrete instance of the unification and completeness theorems using the First-Order Terms AFP-entry from IsaFoR as described in the papers [Sch16] and [Sch18]. [Surprise_Paradox] title = Surprise Paradox author = Joachim Breitner notify = mail@joachim-breitner.de date = 2016-07-17 topic = Logic/Proof theory abstract = In 1964, Fitch showed that the paradox of the surprise hanging can be resolved by showing that the judge’s verdict is inconsistent. His formalization builds on Gödel’s coding of provability. In this theory, we reproduce his proof in Isabelle, building on Paulson’s formalisation of Gödel’s incompleteness theorems. [Ptolemys_Theorem] title = Ptolemy's Theorem author = Lukas Bulwahn notify = lukas.bulwahn@gmail.com date = 2016-08-07 topic = Mathematics/Geometry abstract = This entry provides an analytic proof of Ptolemy's Theorem using polar form transformation and trigonometric identities. In this formalization, we use ideas from John Harrison's HOL Light formalization and the proof sketch on the Wikipedia entry of Ptolemy's Theorem. This theorem is the 95th theorem of the Top 100 Theorems list. [Falling_Factorial_Sum] title = The Falling Factorial of a Sum author = Lukas Bulwahn topic = Mathematics/Combinatorics date = 2017-12-22 notify = lukas.bulwahn@gmail.com abstract = This entry shows that the falling factorial of a sum can be computed with an expression using binomial coefficients and the falling factorial of its summands. The entry provides three different proofs: a combinatorial proof, an induction proof and an algebraic proof using the Vandermonde identity. The three formalizations try to follow their informal presentations from a Mathematics Stack Exchange page as closely as possible. The induction and algebraic formalizations end up being very close to their informal presentation, whereas the combinatorial proof first requires the introduction of list interleavings, and significantly more detail than its informal presentation. [InfPathElimination] title = Infeasible Paths Elimination by Symbolic Execution Techniques: Proof of Correctness and Preservation of Paths author = Romain Aissat<>, Frederic Voisin<>, Burkhart Wolff notify = wolff@lri.fr date = 2016-08-18 topic = Computer science/Programming languages/Static analysis abstract = TRACER is a tool for verifying safety properties of sequential C programs. TRACER attempts to build a finite symbolic execution graph which over-approximates the set of all concrete reachable states and the set of feasible paths. We present an abstract framework for TRACER and similar CEGAR-like systems. The framework provides 1) a graph-transformation based method for reducing the feasible paths in control-flow graphs, 2) a model for symbolic execution, subsumption, predicate abstraction and invariant generation. In this framework we formally prove two key properties: correct construction of the symbolic states and preservation of feasible paths. The framework focuses on core operations, leaving it to concrete prototypes to “fit in” heuristics for combining them. The accompanying paper (published in ITP 2016) can be found at https://www.lri.fr/∼wolff/papers/conf/2016-itp-InfPathsNSE.pdf. [Stirling_Formula] title = Stirling's formula author = Manuel Eberl notify = eberlm@in.tum.de date = 2016-09-01 topic = Mathematics/Analysis abstract =

This work contains a proof of Stirling's formula both for the factorial $n! \sim \sqrt{2\pi n} (n/e)^n$ on natural numbers and the real Gamma function $\Gamma(x)\sim \sqrt{2\pi/x} (x/e)^x$. The proof is based on work by Graham Jameson.

This is then extended to the full asymptotic expansion $$\log\Gamma(z) = \big(z - \tfrac{1}{2}\big)\log z - z + \tfrac{1}{2}\log(2\pi) + \sum_{k=1}^{n-1} \frac{B_{k+1}}{k(k+1)} z^{-k}\\ {} - \frac{1}{n} \int_0^\infty B_n([t])(t + z)^{-n}\,\text{d}t$$ uniformly for all complex $z\neq 0$ in the cone $\text{arg}(z)\leq \alpha$ for any $\alpha\in(0,\pi)$, with which the above asymptotic relation for Γ is also extended to complex arguments.

[Lp] title = Lp spaces author = Sebastien Gouezel notify = sebastien.gouezel@univ-rennes1.fr date = 2016-10-05 topic = Mathematics/Analysis abstract = Lp is the space of functions whose p-th power is integrable. It is one of the most fundamental Banach spaces that is used in analysis and probability. We develop a framework for function spaces, and then implement the Lp spaces in this framework using the existing integration theory in Isabelle/HOL. Our development contains most fundamental properties of Lp spaces, notably the Hölder and Minkowski inequalities, completeness of Lp, duality, stability under almost sure convergence, multiplication of functions in Lp and Lq, stability under conditional expectation. [Berlekamp_Zassenhaus] title = The Factorization Algorithm of Berlekamp and Zassenhaus author = Jose Divasón , Sebastiaan Joosten , René Thiemann , Akihisa Yamada notify = rene.thiemann@uibk.ac.at date = 2016-10-14 topic = Mathematics/Algebra abstract =

We formalize the Berlekamp-Zassenhaus algorithm for factoring square-free integer polynomials in Isabelle/HOL. We further adapt an existing formalization of Yun’s square-free factorization algorithm to integer polynomials, and thus provide an efficient and certified factorization algorithm for arbitrary univariate polynomials.

The algorithm first performs a factorization in the prime field GF(p) and then performs computations in the integer ring modulo p^k, where both p and k are determined at runtime. Since a natural modeling of these structures via dependent types is not possible in Isabelle/HOL, we formalize the whole algorithm using Isabelle’s recent addition of local type definitions.
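
As a rough illustration of the mod-p phase in Python with sympy (sympy's generic factorisation front end is used here purely for illustration and is unrelated to the verified algorithm):

  from sympy import factor_list, symbols

  x = symbols('x')
  f = x**4 - 1          # a square-free integer polynomial

  # factorisation modulo a prime p (the first phase described above) ...
  print(factor_list(f, modulus=5))
  # ... versus the factorisation over the integers, which the algorithm
  # reconstructs from the modular factors by Hensel lifting modulo p^k
  print(factor_list(f))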

Through experiments we verify that our algorithm factors polynomials of degree 100 within seconds.

[Allen_Calculus] title = Allen's Interval Calculus author = Fadoua Ghourabi <> notify = fadouaghourabi@gmail.com date = 2016-09-29 topic = Logic/General logic/Temporal logic, Mathematics/Order abstract = Allen’s interval calculus is a qualitative temporal representation of time events. Allen introduced 13 binary relations that describe all the possible arrangements between two events, i.e. intervals with non-zero finite length. The compositions are pertinent to reasoning about knowledge of time. In particular, a consistency problem of relation constraints is commonly solved with a guideline from these compositions. We formalize the relations together with an axiomatic system. We prove the validity of the 169 compositions of these relations. We also define nests as the sets of intervals that share a meeting point. We prove that nests give the ordering properties of points without introducing a new datatype for points. [1] J.F. Allen. Maintaining Knowledge about Temporal Intervals. In Commun. ACM, volume 26, pages 832–843, 1983. [2] J. F. Allen and P. J. Hayes. A Common-sense Theory of Time. In Proceedings of the 9th International Joint Conference on Artificial Intelligence (IJCAI’85), pages 528–531, 1985. [Source_Coding_Theorem] title = Source Coding Theorem author = Quentin Hibon , Lawrence C. Paulson notify = qh225@cl.cam.ac.uk date = 2016-10-19 topic = Mathematics/Probability theory abstract = This document contains a proof of the necessary condition on the code rate of a source code, namely that this code rate is bounded by the entropy of the source. This represents one half of Shannon's source coding theorem, which is itself an equivalence. [Buffons_Needle] title = Buffon's Needle Problem author = Manuel Eberl topic = Mathematics/Probability theory, Mathematics/Geometry date = 2017-06-06 notify = eberlm@in.tum.de abstract = In the 18th century, Georges-Louis Leclerc, Comte de Buffon posed and later solved the following problem, which is often called the first problem ever solved in geometric probability: Given a floor divided into vertical strips of the same width, what is the probability that a needle thrown onto the floor randomly will cross two strips? This entry formally defines the problem in the case where the needle's position is chosen uniformly at random in a single strip around the origin (which is equivalent to larger arrangements due to symmetry). It then provides proofs of the simple solution in the case where the needle's length is no greater than the width of the strips and the more complicated solution in the opposite case. [SPARCv8] title = A formal model for the SPARCv8 ISA and a proof of non-interference for the LEON3 processor author = Zhe Hou , David Sanan , Alwen Tiu , Yang Liu notify = zhe.hou@ntu.edu.sg, sanan@ntu.edu.sg date = 2016-10-19 topic = Computer science/Security, Computer science/Hardware abstract = We formalise the SPARCv8 instruction set architecture (ISA) which is used in processors such as LEON3. Our formalisation can be specialised to any SPARCv8 CPU; here we use LEON3 as a running example. Our model covers the operational semantics for all the instructions in the integer unit of the SPARCv8 architecture and it supports Isabelle code export, which effectively turns the Isabelle model into a SPARCv8 CPU simulator. We prove the language-based non-interference property for the LEON3 processor. Our model is based on a deterministic monad, which is a modified version of the non-deterministic monad from NICTA/l4v.
[Separata] title = Separata: Isabelle tactics for Separation Algebra author = Zhe Hou , David Sanan , Alwen Tiu , Rajeev Gore , Ranald Clouston notify = zhe.hou@ntu.edu.sg date = 2016-11-16 topic = Computer science/Programming languages/Logics, Tools abstract = We bring the labelled sequent calculus $LS_{PASL}$ for propositional abstract separation logic to Isabelle. The tactics given here are directly applied on an extension of the Separation Algebra in the AFP. In addition to the cancellative separation algebra, we further consider some useful properties in the heap model of separation logic, such as indivisible unit, disjointness, and cross-split. The tactics are essentially a proof search procedure for the calculus $LS_{PASL}$. We wrap the tactics in an Isabelle method called separata, and give a few examples of separation logic formulae which are provable by separata. [LOFT] title = LOFT — Verified Migration of Linux Firewalls to SDN author = Julius Michaelis , Cornelius Diekmann notify = isabelleopenflow@liftm.de date = 2016-10-21 topic = Computer science/Networks abstract = We present LOFT — Linux firewall OpenFlow Translator, a system that transforms the main routing table and FORWARD chain of iptables of a Linux-based firewall into a set of static OpenFlow rules. Our implementation is verified against a model of a simplified Linux-based router and we can directly show how much of the original functionality is preserved. [Stable_Matching] title = Stable Matching author = Peter Gammie notify = peteg42@gmail.com date = 2016-10-24 topic = Mathematics/Games and economics abstract = We mechanize proofs of several results from the matching with contracts literature, which generalize those of the classical two-sided matching scenarios that go by the name of stable marriage. Our focus is on game theoretic issues. Along the way we develop executable algorithms for computing optimal stable matches. [Modal_Logics_for_NTS] title = Modal Logics for Nominal Transition Systems author = Tjark Weber , Lars-Henrik Eriksson , Joachim Parrow , Johannes Borgström , Ramunas Gutkovas notify = tjark.weber@it.uu.se date = 2016-10-25 topic = Computer science/Concurrency/Process calculi, Logic/General logic/Modal logic abstract = We formalize a uniform semantic substrate for a wide variety of process calculi where states and action labels can be from arbitrary nominal sets. A Hennessy-Milner logic for these systems is defined, and proved adequate for bisimulation equivalence. A main novelty is the construction of an infinitary nominal data type to model formulas with (finitely supported) infinite conjunctions and actions that may contain binding names. The logic is generalized to treat different bisimulation variants such as early, late and open in a systematic way. extra-history = Change history: [2017-01-29]: Formalization of weak bisimilarity added (revision c87cc2057d9c) [Abs_Int_ITP2012] title = Abstract Interpretation of Annotated Commands author = Tobias Nipkow notify = nipkow@in.tum.de date = 2016-11-23 topic = Computer science/Programming languages/Static analysis abstract = This is the Isabelle formalization of the material described in the eponymous ITP 2012 paper. It develops a generic abstract interpreter for a while-language, including widening and narrowing. The collecting semantics and the abstract interpreter operate on annotated commands: the program is represented as a syntax tree with the semantic information directly embedded, without auxiliary labels.
The aim of the formalization is simplicity, not efficiency or precision. This is motivated by the inclusion of the material in a theorem prover based course on semantics. A similar (but more polished) development is covered in the book Concrete Semantics. [Complx] title = COMPLX: A Verification Framework for Concurrent Imperative Programs author = Sidney Amani<>, June Andronick<>, Maksym Bortin<>, Corey Lewis<>, Christine Rizkallah<>, Joseph Tuong<> notify = sidney.amani@data61.csiro.au, corey.lewis@data61.csiro.au date = 2016-11-29 topic = Computer science/Programming languages/Logics, Computer science/Programming languages/Language definitions abstract = We propose a concurrency reasoning framework for imperative programs, based on the Owicki-Gries (OG) foundational shared-variable concurrency method. Our framework combines the approaches of Hoare-Parallel, a formalisation of OG in Isabelle/HOL for a simple while-language, and Simpl, a generic imperative language embedded in Isabelle/HOL, allowing formal reasoning on C programs. We define the Complx language, extending the syntax and semantics of Simpl with support for parallel composition and synchronisation. We additionally define an OG logic, which we prove sound w.r.t. the semantics, and a verification condition generator, both supporting involved low-level imperative constructs such as function calls and abrupt termination. We illustrate our framework on an example that features exceptions, guards and function calls. We aim to then target concurrent operating systems, such as the interruptible eChronos embedded operating system for which we already have a model-level OG proof using Hoare-Parallel. extra-history = Change history: [2017-01-13]: Improve VCG for nested parallels and sequential sections (revision 30739dbc3dcb) [Paraconsistency] title = Paraconsistency author = Anders Schlichtkrull , Jørgen Villadsen topic = Logic/General logic/Paraconsistent logics date = 2016-12-07 notify = andschl@dtu.dk, jovi@dtu.dk abstract = Paraconsistency is about handling inconsistency in a coherent way. In classical and intuitionistic logic everything follows from an inconsistent theory. A paraconsistent logic avoids the explosion. Quite a few applications in computer science and engineering are discussed in the Intelligent Systems Reference Library Volume 110: Towards Paraconsistent Engineering (Springer 2016). We formalize a paraconsistent many-valued logic that we motivated and described in a special issue on logical approaches to paraconsistency (Journal of Applied Non-Classical Logics 2005). We limit ourselves to the propositional fragment of the higher-order logic. The logic is based on so-called key equalities and has a countably infinite number of truth values. We prove theorems in the logic using the definition of validity. We verify truth tables and also counterexamples for non-theorems. We prove meta-theorems about the logic and finally we investigate a case study. [Proof_Strategy_Language] title = Proof Strategy Language author = Yutaka Nagashima<> topic = Tools date = 2016-12-20 notify = Yutaka.Nagashima@data61.csiro.au abstract = Isabelle includes various automatic tools for finding proofs under certain conditions. However, for each conjecture, knowing which automation to use, and how to tweak its parameters, is currently labour intensive. We have developed a language, PSL, designed to capture high level proof strategies. 
PSL offloads the construction of human-readable fast-to-replay proof scripts to automatic search, making use of search-time information about each conjecture. Our preliminary evaluations show that PSL reduces the labour cost of interactive theorem proving. This submission contains the implementation of PSL and an example theory file, Example.thy, showing how to write proof strategies in PSL. [Concurrent_Ref_Alg] title = Concurrent Refinement Algebra and Rely Quotients author = Julian Fell , Ian J. Hayes , Andrius Velykis topic = Computer science/Concurrency date = 2016-12-30 notify = Ian.Hayes@itee.uq.edu.au abstract = The concurrent refinement algebra developed here is designed to provide a foundation for rely/guarantee reasoning about concurrent programs. The algebra builds on a complete lattice of commands by providing sequential composition, parallel composition and a novel weak conjunction operator. The weak conjunction operator coincides with the lattice supremum providing its arguments are non-aborting, but aborts if either of its arguments do. Weak conjunction provides an abstract version of a guarantee condition as a guarantee process. We distinguish between models that distribute sequential composition over non-deterministic choice from the left (referred to as being conjunctive in the refinement calculus literature) and those that don't. Least and greatest fixed points of monotone functions are provided to allow recursion and iteration operators to be added to the language. Additional iteration laws are available for conjunctive models. The rely quotient of processes c and i is the process that, if executed in parallel with i, implements c. It represents an abstract version of a rely condition generalised to a process. [FOL_Harrison] title = First-Order Logic According to Harrison author = Alexander Birch Jensen , Anders Schlichtkrull , Jørgen Villadsen topic = Logic/General logic/Mechanization of proofs date = 2017-01-01 notify = aleje@dtu.dk, andschl@dtu.dk, jovi@dtu.dk abstract =

We present a certified declarative first-order prover with equality based on John Harrison's Handbook of Practical Logic and Automated Reasoning, Cambridge University Press, 2009. ML code reflection is used such that the entire prover can be executed within Isabelle as a very simple interactive proof assistant. As examples we consider Pelletier's problems 1-46.

Reference: Programming and Verifying a Declarative First-Order Prover in Isabelle/HOL. Alexander Birch Jensen, John Bruntse Larsen, Anders Schlichtkrull & Jørgen Villadsen. AI Communications 31:281-299 2018. https://content.iospress.com/articles/ai-communications/aic764

See also: Students' Proof Assistant (SPA). https://github.com/logic-tools/spa

extra-history = Change history: [2018-07-21]: Proof of Pelletier's problem 34 (Andrews's Challenge) thanks to Asta Halkjær From. [Bernoulli] title = Bernoulli Numbers author = Lukas Bulwahn, Manuel Eberl topic = Mathematics/Analysis, Mathematics/Number theory date = 2017-01-24 notify = eberlm@in.tum.de abstract =

Bernoulli numbers were first discovered in the closed-form expansion of the sum 1^m + 2^m + … + n^m for a fixed m and appear in many other places. This entry provides three different definitions for them: a recursive one, an explicit one, and one through their exponential generating function.
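
For orientation, the exponential generating function mentioned above is, in standard notation (the conventions used in the entry itself may differ slightly in presentation), $$\frac{x}{e^x - 1} \;=\; \sum_{n=0}^{\infty} \frac{B_n}{n!}\, x^n ,$$ which fixes the convention B_1 = -1/2.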

In addition, we prove some basic facts, e.g. their relation to sums of powers of integers and that all odd Bernoulli numbers except the first are zero, and some advanced facts like their relationship to the Riemann zeta function on positive even integers.
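
As an illustration of the relation to sums of powers (stated here with the convention B_1 = -1/2; the formalisation may use a different normalisation), one has Faulhaber's formula $$\sum_{k=0}^{n-1} k^m \;=\; \frac{1}{m+1} \sum_{j=0}^{m} \binom{m+1}{j} B_j\, n^{m+1-j}.$$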

We also prove the correctness of the Akiyama–Tanigawa algorithm for computing Bernoulli numbers with reasonable efficiency, and we define the periodic Bernoulli polynomials (which appear e.g. in the Euler–MacLaurin summation formula and the expansion of the log-Gamma function) and prove their basic properties.

[Stone_Relation_Algebras] title = Stone Relation Algebras author = Walter Guttmann topic = Mathematics/Algebra date = 2017-02-07 notify = walter.guttmann@canterbury.ac.nz abstract = We develop Stone relation algebras, which generalise relation algebras by replacing the underlying Boolean algebra structure with a Stone algebra. We show that finite matrices over extended real numbers form an instance. As a consequence, relation-algebraic concepts and methods can be used for reasoning about weighted graphs. We also develop a fixpoint calculus and apply it to compare different definitions of reflexive-transitive closures in semirings. [Stone_Kleene_Relation_Algebras] title = Stone-Kleene Relation Algebras author = Walter Guttmann topic = Mathematics/Algebra date = 2017-07-06 notify = walter.guttmann@canterbury.ac.nz abstract = We develop Stone-Kleene relation algebras, which expand Stone relation algebras with a Kleene star operation to describe reachability in weighted graphs. Many properties of the Kleene star arise as a special case of a more general theory of iteration based on Conway semirings extended by simulation axioms. This includes several theorems representing complex program transformations. We formally prove the correctness of Conway's automata-based construction of the Kleene star of a matrix. We prove numerous results useful for reasoning about weighted graphs. [Abstract_Soundness] title = Abstract Soundness author = Jasmin Christian Blanchette , Andrei Popescu , Dmitriy Traytel topic = Logic/Proof theory date = 2017-02-10 notify = jasmin.blanchette@gmail.com abstract = A formalized coinductive account of the abstract development of Brotherston, Gorogiannis, and Petersen [APLAS 2012], in a slightly more general form since we work with arbitrary infinite proofs, which may be acyclic. This work is described in detail in an article by the authors, published in 2017 in the Journal of Automated Reasoning. The abstract proof can be instantiated for various formalisms, including first-order logic with inductive predicates. [Differential_Dynamic_Logic] title = Differential Dynamic Logic author = Brandon Bohrer topic = Logic/General logic/Modal logic, Computer science/Programming languages/Logics date = 2017-02-13 notify = bbohrer@cs.cmu.edu abstract = We formalize differential dynamic logic, a logic for proving properties of hybrid systems. The proof calculus in this formalization is based on the uniform substitution principle. We show it is sound with respect to our denotational semantics, which provides increased confidence in the correctness of the KeYmaera X theorem prover based on this calculus. As an application, we include a proof term checker embedded in Isabelle/HOL with several example proofs. Published in: Brandon Bohrer, Vincent Rahli, Ivana Vukotic, Marcus Völp, André Platzer: Formally verified differential dynamic logic. CPP 2017. [Elliptic_Curves_Group_Law] title = The Group Law for Elliptic Curves author = Stefan Berghofer topic = Computer science/Security/Cryptography date = 2017-02-28 notify = berghofe@in.tum.de abstract = We prove the group law for elliptic curves in Weierstrass form over fields of characteristic greater than 2. In addition to affine coordinates, we also formalize projective coordinates, which allow for more efficient computations. By specializing the abstract formalization to prime fields, we can apply the curve operations to parameters used in standard security protocols. 
[Example-Submission] title = Example Submission author = Gerwin Klein topic = Mathematics/Analysis, Mathematics/Number theory date = 2004-02-25 notify = kleing@cse.unsw.edu.au abstract =

This is an example submission to the Archive of Formal Proofs. It shows submission requirements and explains the structure of a simple typical submission.

Note that you can use HTML tags and LaTeX formulae like $\sum_{n=1}^\infty \frac{1}{n^2} = \frac{\pi^2}{6}$ in the abstract. Display formulae like $$ \int_0^1 x^{-x}\,\text{d}x = \sum_{n=1}^\infty n^{-n}$$ are also possible. Please read the submission guidelines before using this.


extra-no-index = no-index: true [CRDT] title = A framework for establishing Strong Eventual Consistency for Conflict-free Replicated Datatypes author = Victor B. F. Gomes , Martin Kleppmann, Dominic P. Mulligan, Alastair R. Beresford topic = Computer science/Algorithms/Distributed, Computer science/Data structures date = 2017-07-07 notify = vb358@cam.ac.uk, dominic.p.mulligan@googlemail.com abstract = In this work, we focus on the correctness of Conflict-free Replicated Data Types (CRDTs), a class of algorithms that provide strong eventual consistency guarantees for replicated data. We develop a modular and reusable framework for verifying the correctness of CRDT algorithms. We avoid correctness issues that have dogged previous mechanised proofs in this area by including a network model in our formalisation, and proving that our theorems hold in all possible network behaviours. Our axiomatic network model is a standard abstraction that accurately reflects the behaviour of real-world computer networks. Moreover, we identify an abstract convergence theorem, a property of order relations, which provides a formal definition of strong eventual consistency. We then obtain the first machine-checked correctness theorems for three concrete CRDTs: the Replicated Growable Array, the Observed-Remove Set, and an Increment-Decrement Counter. [HOLCF-Prelude] title = HOLCF-Prelude author = Joachim Breitner, Brian Huffman<>, Neil Mitchell<>, Christian Sternagel topic = Computer science/Functional programming date = 2017-07-15 notify = c.sternagel@gmail.com, joachim@cis.upenn.edu, hupel@in.tum.de abstract = The Isabelle/HOLCF-Prelude is a formalization of a large part of Haskell's standard prelude in Isabelle/HOLCF. We use it to prove the correctness of the Eratosthenes' Sieve, in its self-referential implementation commonly used to showcase Haskell's laziness; prove correctness of GHC's "fold/build" rule and related rewrite rules; and certify a number of hints suggested by HLint. [Decl_Sem_Fun_PL] title = Declarative Semantics for Functional Languages author = Jeremy Siek topic = Computer science/Programming languages date = 2017-07-21 notify = jsiek@indiana.edu abstract = We present a semantics for an applied call-by-value lambda-calculus that is compositional, extensional, and elementary. We present four different views of the semantics: 1) as a relational (big-step) semantics that is not operational but instead declarative, 2) as a denotational semantics that does not use domain theory, 3) as a non-deterministic interpreter, and 4) as a variant of the intersection type systems of the Torino group. We prove that the semantics is correct by showing that it is sound and complete with respect to operational semantics on programs and that it is sound with respect to contextual equivalence. We have not yet investigated whether it is fully abstract. We demonstrate that this approach to semantics is useful with three case studies. First, we use the semantics to prove correctness of a compiler optimization that inlines function application. Second, we adapt the semantics to the polymorphic lambda-calculus extended with general recursion and prove semantic type soundness. Third, we adapt the semantics to the call-by-value lambda-calculus with mutable references.
The paper that accompanies these Isabelle theories is available on arXiv. [DynamicArchitectures] title = Dynamic Architectures author = Diego Marmsoler topic = Computer science/System description languages date = 2017-07-28 notify = diego.marmsoler@tum.de abstract = The architecture of a system describes the system's overall organization into components and connections between those components. With the emergence of mobile computing, dynamic architectures have become increasingly important. In such architectures, components may appear or disappear, and connections may change over time. In the following we mechanize a theory of dynamic architectures and verify the soundness of a corresponding calculus. Therefore, we first formalize the notion of configuration traces as a model for dynamic architectures. Then, the behavior of single components is formalized in terms of behavior traces and an operator is introduced and studied to extract the behavior of a single component out of a given configuration trace. Then, behavior trace assertions are introduced as a temporal specification technique to specify behavior of components. Reasoning about component behavior in a dynamic context is formalized in terms of a calculus for dynamic architectures. Finally, the soundness of the calculus is verified by introducing an alternative interpretation for behavior trace assertions over configuration traces and proving the rules of the calculus. Since projection may lead to finite as well as infinite behavior traces, they are formalized in terms of coinductive lists. Thus, our theory is based on Lochbihler's formalization of coinductive lists. The theory may be applied to verify properties for dynamic architectures. extra-history = Change history: [2018-06-07]: adding logical operators to specify configuration traces (revision 09178f08f050)
[Stewart_Apollonius] title = Stewart's Theorem and Apollonius' Theorem author = Lukas Bulwahn topic = Mathematics/Geometry date = 2017-07-31 notify = lukas.bulwahn@gmail.com abstract = This entry formalizes the two geometric theorems, Stewart's and Apollonius' theorem. Stewart's Theorem relates the length of a triangle's cevian to the lengths of the triangle's two sides. Apollonius' Theorem is a specialisation of Stewart's theorem, restricting the cevian to be the median. The proof applies the law of cosines, some basic geometric facts about triangles and then simply transforms the terms algebraically to yield the conjectured relation. The formalization in Isabelle can closely follow the informal proofs described in the Wikipedia articles of those two theorems. [LambdaMu] title = The LambdaMu-calculus author = Cristina Matache , Victor B. F. Gomes , Dominic P. Mulligan topic = Computer science/Programming languages/Lambda calculi, Logic/General logic/Lambda calculus date = 2017-08-16 notify = victorborgesfg@gmail.com, dominic.p.mulligan@googlemail.com abstract = The propositions-as-types correspondence is ordinarily presented as linking the metatheory of typed λ-calculi and the proof theory of intuitionistic logic. Griffin observed that this correspondence could be extended to classical logic through the use of control operators. This observation set off a flurry of further research, leading to the development of Parigot's λμ-calculus. In this work, we formalise the λμ-calculus in Isabelle/HOL and prove several metatheoretical properties such as type preservation and progress. [Orbit_Stabiliser] title = Orbit-Stabiliser Theorem with Application to Rotational Symmetries author = Jonas Rädle topic = Mathematics/Algebra date = 2017-08-20 notify = jonas.raedle@tum.de abstract = The Orbit-Stabiliser theorem is a basic result in the algebra of groups that factors the order of a group into the sizes of its orbits and stabilisers. We formalize the notion of a group action and the related concepts of orbits and stabilisers. This allows us to prove the orbit-stabiliser theorem. In the second part of this work, we formalize the tetrahedral group and use the orbit-stabiliser theorem to prove that there are twelve (orientation-preserving) rotations of the tetrahedron. [PLM] title = Representation and Partial Automation of the Principia Logico-Metaphysica in Isabelle/HOL author = Daniel Kirchner topic = Logic/Philosophical aspects date = 2017-09-17 notify = daniel@ekpyron.org abstract =

We present an embedding of the second-order fragment of the Theory of Abstract Objects as described in Edward Zalta's upcoming work Principia Logico-Metaphysica (PLM) in the automated reasoning framework Isabelle/HOL. The Theory of Abstract Objects is a metaphysical theory that reifies property patterns, as they for example occur in the abstract reasoning of mathematics, as abstract objects and provides an axiomatic framework that allows one to reason about these objects. It thereby serves as a fundamental metaphysical theory that can be used to axiomatize and describe a wide range of philosophical objects, such as Platonic forms or Leibniz' concepts, and has the ambition to function as a foundational theory of mathematics. The target theory of our embedding, as described in chapters 7-9 of PLM, employs a modal relational type theory as logical foundation, for which a representation in functional type theory is known to be challenging.

Nevertheless we arrive at a functioning representation of the theory in the functional logic of Isabelle/HOL based on a semantical representation of an Aczel-model of the theory. Based on this representation, we construct an implementation of the deductive system of PLM, which allows one to automatically and interactively find and verify theorems of PLM.

Our work thereby supports the concept of shallow semantical embeddings of logical systems in HOL as a universal tool for logical reasoning as promoted by Christoph Benzmüller.

The most notable result of the presented work is the discovery of a previously unknown paradox in the formulation of the Theory of Abstract Objects. The embedding of the theory in Isabelle/HOL played a vital part in this discovery. Furthermore it was possible to immediately offer several options to modify the theory to guarantee its consistency. Thereby our work could provide a significant contribution to the development of a proper grounding for object theory.

[KD_Tree] title = Multidimensional Binary Search Trees author = Martin Rau<> topic = Computer science/Data structures date = 2019-05-30 notify = martin.rau@tum.de, mrtnrau@googlemail.com abstract = This entry provides a formalization of multidimensional binary trees, also known as k-d trees. It includes a balanced build algorithm as well as the nearest neighbor algorithm and the range search algorithm. It is based on the papers Multidimensional binary search trees used for associative searching and An Algorithm for Finding Best Matches in Logarithmic Expected Time. extra-history = Change history: [2020-15-04]: Change representation of k-dimensional points from 'list' to HOL-Analysis.Finite_Cartesian_Product 'vec'. Update proofs to incorporate HOL-Analysis 'dist' and 'cbox' primitives. [Closest_Pair_Points] title = Closest Pair of Points Algorithms author = Martin Rau , Tobias Nipkow topic = Computer science/Algorithms/Geometry date = 2020-01-13 notify = martin.rau@tum.de, nipkow@in.tum.de abstract = This entry provides two related verified divide-and-conquer algorithms solving the fundamental Closest Pair of Points problem in Computational Geometry. Functional correctness and the optimal running time of O(n log n) are proved. Executable code is generated which is empirically competitive with handwritten reference implementations. extra-history = Change history: [2020-14-04]: Incorporate Time_Monad of the AFP entry Root_Balanced_Tree. [Approximation_Algorithms] title = Verified Approximation Algorithms author = Robin Eßmann , Tobias Nipkow , Simon Robillard topic = Computer science/Algorithms/Approximation date = 2020-01-16 notify = nipkow@in.tum.de abstract = We present the first formal verification of approximation algorithms for NP-complete optimization problems: vertex cover, independent set, load balancing, and bin packing. The proofs correct incompletenesses in existing proofs and improve the approximation ratio in one case. [Diophantine_Eqns_Lin_Hom] title = Homogeneous Linear Diophantine Equations author = Florian Messner , Julian Parsert , Jonas Schöpf , Christian Sternagel topic = Computer science/Algorithms/Mathematical, Mathematics/Number theory, Tools license = LGPL date = 2017-10-14 notify = c.sternagel@gmail.com, julian.parsert@gmail.com abstract = We formalize the theory of homogeneous linear diophantine equations, focusing on two main results: (1) an abstract characterization of minimal complete sets of solutions, and (2) an algorithm computing them. Both, the characterization and the algorithm are based on previous work by Huet. Our starting point is a simple but inefficient variant of Huet's lexicographic algorithm incorporating improved bounds due to Clausen and Fortenbacher. We proceed by proving its soundness and completeness. Finally, we employ code equations to obtain a reasonably efficient implementation. Thus, we provide a formally verified solver for homogeneous linear diophantine equations. [Winding_Number_Eval] title = Evaluate Winding Numbers through Cauchy Indices author = Wenda Li topic = Mathematics/Analysis date = 2017-10-17 notify = wl302@cam.ac.uk, liwenda1990@hotmail.com abstract = In complex analysis, the winding number measures the number of times a path (counterclockwise) winds around a point, while the Cauchy index can approximate how the path winds. This entry provides a formalisation of the Cauchy index, which is then shown to be related to the winding number. 
In addition, this entry also offers a tactic that enables users to evaluate the winding number by calculating Cauchy indices. [Count_Complex_Roots] title = Count the Number of Complex Roots author = Wenda Li topic = Mathematics/Analysis date = 2017-10-17 notify = wl302@cam.ac.uk, liwenda1990@hotmail.com abstract = Based on evaluating Cauchy indices through remainder sequences, this entry provides an effective procedure to count the number of complex roots (with multiplicity) of a polynomial within a rectangle box or a half-plane. Potential applications of this entry include certified complex root isolation (of a polynomial) and testing the Routh-Hurwitz stability criterion (i.e., to check whether all the roots of some characteristic polynomial have negative real parts). [Buchi_Complementation] title = Büchi Complementation author = Julian Brunner topic = Computer science/Automata and formal languages date = 2017-10-19 notify = brunnerj@in.tum.de abstract = This entry provides a verified implementation of rank-based Büchi Complementation. The verification is done in three steps:
  1. Definition of odd rankings and proof that an automaton rejects a word iff there exists an odd ranking for it.
  2. Definition of the complement automaton and proof that it accepts exactly those words for which there is an odd ranking.
  3. Verified implementation of the complement automaton using the Isabelle Collections Framework.
[Transition_Systems_and_Automata] title = Transition Systems and Automata author = Julian Brunner topic = Computer science/Automata and formal languages date = 2017-10-19 notify = brunnerj@in.tum.de abstract = This entry provides a very abstract theory of transition systems that can be instantiated to express various types of automata. A transition system is typically instantiated by providing a set of initial states, a predicate for enabled transitions, and a transition execution function. From this, it defines the concepts of finite and infinite paths as well as the set of reachable states, among other things. Many useful theorems, from basic path manipulation rules to coinduction and run construction rules, are proven in this abstract transition system context. The library comes with instantiations for DFAs, NFAs, and Büchi automata. [Kuratowski_Closure_Complement] title = The Kuratowski Closure-Complement Theorem author = Peter Gammie , Gianpaolo Gioiosa<> topic = Mathematics/Topology date = 2017-10-26 notify = peteg42@gmail.com abstract = We discuss a topological curiosity discovered by Kuratowski (1922): the fact that the number of distinct operators on a topological space generated by compositions of closure and complement never exceeds 14, and is exactly 14 in the case of R. In addition, we prove a theorem due to Chagrov (1982) that classifies topological spaces according to the number of such operators they support. [Hybrid_Multi_Lane_Spatial_Logic] title = Hybrid Multi-Lane Spatial Logic author = Sven Linker topic = Logic/General logic/Modal logic date = 2017-11-06 notify = s.linker@liverpool.ac.uk abstract = We present a semantic embedding of a spatio-temporal multi-modal logic, specifically defined to reason about motorway traffic, into Isabelle/HOL. The semantic model is an abstraction of a motorway, emphasising local spatial properties, and parameterised by the types of sensors deployed in the vehicles. We use the logic to define controller constraints to ensure safety, i.e., the absence of collisions on the motorway. After proving safety with a restrictive definition of sensors, we relax these assumptions and show how to amend the controller constraints to still guarantee safety. [Dirichlet_L] title = Dirichlet L-Functions and Dirichlet's Theorem author = Manuel Eberl topic = Mathematics/Number theory, Mathematics/Algebra date = 2017-12-21 notify = eberlm@in.tum.de abstract =

This article provides a formalisation of Dirichlet characters and Dirichlet L-functions including proofs of their basic properties – most notably their analyticity, their areas of convergence, and their non-vanishing for ℜ(s) ≥ 1. All of this is built in a very high-level style using Dirichlet series. The proof of the non-vanishing follows a very short and elegant proof by Newman, which we attempt to reproduce faithfully in a similar level of abstraction in Isabelle.

This also leads to a relatively short proof of Dirichlet’s Theorem, which states that, if h and n are coprime, there are infinitely many primes p with p ≡ h (mod n).
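
For readers unfamiliar with the terminology: a Dirichlet character χ modulo n is a completely multiplicative, n-periodic function that vanishes on integers not coprime to n, and its L-function is the Dirichlet series $$L(s, \chi) \;=\; \sum_{k=1}^{\infty} \frac{\chi(k)}{k^s},$$ which converges for ℜ(s) > 1 (and, for non-principal χ, already for ℜ(s) > 0). This is the standard textbook definition; the entry's presentation via general Dirichlet series may differ in detail.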

[Symmetric_Polynomials] title = Symmetric Polynomials author = Manuel Eberl topic = Mathematics/Algebra date = 2018-09-25 notify = eberlm@in.tum.de abstract =

A symmetric polynomial is a polynomial in variables X_1,…,X_n that does not discriminate between its variables, i. e. it is invariant under any permutation of them. These polynomials are important in the study of the relationship between the coefficients of a univariate polynomial and its roots in its algebraic closure.

This article provides a definition of symmetric polynomials and the elementary symmetric polynomials e_1,…,e_n and proofs of their basic properties, including three notable ones:

  • Vieta's formula, which gives an explicit expression for the k-th coefficient of a univariate monic polynomial in terms of its roots x_1,…,x_n, namely c_k = (-1)^(n-k) e_(n-k)(x_1,…,x_n).
  • Second, the Fundamental Theorem of Symmetric Polynomials, which states that any symmetric polynomial is itself a uniquely determined polynomial combination of the elementary symmetric polynomials.
  • Third, as a corollary of the previous two, that given a polynomial over some ring R, any symmetric polynomial combination of its roots is also in R even when the roots are not.

Both the symmetry property itself and the witness for the Fundamental Theorem are executable.
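
For reference, the elementary symmetric polynomials mentioned above are, in the usual notation, $$e_k(X_1,\ldots,X_n) \;=\; \sum_{1 \le i_1 < \cdots < i_k \le n} X_{i_1} X_{i_2} \cdots X_{i_k},$$ so that e.g. e_1 = X_1 + … + X_n and e_n = X_1 ⋯ X_n. (The precise definitions in the entry may be phrased differently.)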

[Taylor_Models] title = Taylor Models author = Christoph Traut<>, Fabian Immler topic = Computer science/Algorithms/Mathematical, Computer science/Data structures, Mathematics/Analysis, Mathematics/Algebra date = 2018-01-08 notify = immler@in.tum.de abstract = We present a formally verified implementation of multivariate Taylor models. Taylor models are a form of rigorous polynomial approximation, consisting of an approximation polynomial based on Taylor expansions, combined with a rigorous bound on the approximation error. Taylor models were introduced as a tool to mitigate the dependency problem of interval arithmetic. Our implementation automatically computes Taylor models for the class of elementary functions, expressed by composition of arithmetic operations and basic functions like exp, sin, or square root. [Green] title = An Isabelle/HOL formalisation of Green's Theorem author = Mohammad Abdulaziz , Lawrence C. Paulson topic = Mathematics/Analysis date = 2018-01-11 notify = mohammad.abdulaziz8@gmail.com, lp15@cam.ac.uk abstract = We formalise a statement of Green’s theorem—the first formalisation to our knowledge—in Isabelle/HOL. The theorem statement that we formalise is enough for most applications, especially in physics and engineering. Our formalisation is made possible by a novel proof that avoids the ubiquitous line integral cancellation argument. This eliminates the need to formalise orientations and region boundaries explicitly with respect to the outwards-pointing normal vector. Instead we appeal to a homological argument about equivalences between paths. [Gromov_Hyperbolicity] title = Gromov Hyperbolicity author = Sebastien Gouezel<> topic = Mathematics/Geometry date = 2018-01-16 notify = sebastien.gouezel@univ-rennes1.fr abstract = A geodesic metric space is Gromov hyperbolic if all its geodesic triangles are thin, i.e., every side is contained in a fixed thickening of the two other sides. While this definition looks innocuous, it has proved extremely important and versatile in modern geometry since its introduction by Gromov. We formalize the basic classical properties of Gromov hyperbolic spaces, notably the Morse lemma asserting that quasigeodesics are close to geodesics, the invariance of hyperbolicity under quasi-isometries, we define and study the Gromov boundary and its associated distance, and prove that a quasi-isometry between Gromov hyperbolic spaces extends to a homeomorphism of the boundaries. We also prove a less classical theorem, by Bonk and Schramm, asserting that a Gromov hyperbolic space embeds isometrically in a geodesic Gromov-hyperbolic space. As the original proof uses a transfinite sequence of Cauchy completions, this is an interesting formalization exercise. Along the way, we introduce basic material on isometries, quasi-isometries, Lipschitz maps, geodesic spaces, the Hausdorff distance, the Cauchy completion of a metric space, and the exponential on extended real numbers. [Ordered_Resolution_Prover] title = Formalization of Bachmair and Ganzinger's Ordered Resolution Prover author = Anders Schlichtkrull , Jasmin Christian Blanchette , Dmitriy Traytel , Uwe Waldmann topic = Logic/General logic/Mechanization of proofs date = 2018-01-18 notify = andschl@dtu.dk, j.c.blanchette@vu.nl abstract = This Isabelle/HOL formalization covers Sections 2 to 4 of Bachmair and Ganzinger's "Resolution Theorem Proving" chapter in the Handbook of Automated Reasoning. 
This includes soundness and completeness of unordered and ordered variants of ground resolution with and without literal selection, the standard redundancy criterion, a general framework for refutational theorem proving, and soundness and completeness of an abstract first-order prover. [BNF_Operations] title = Operations on Bounded Natural Functors author = Jasmin Christian Blanchette , Andrei Popescu , Dmitriy Traytel topic = Tools date = 2017-12-19 notify = jasmin.blanchette@gmail.com,uuomul@yahoo.com,traytel@inf.ethz.ch abstract = This entry formalizes the closure property of bounded natural functors (BNFs) under seven operations. These operations and the corresponding proofs constitute the core of Isabelle's (co)datatype package. To be close to the implemented tactics, the proofs are deliberately formulated as detailed apply scripts. The (co)datatypes together with (co)induction principles and (co)recursors are byproducts of the fixpoint operations LFP and GFP. Composition of BNFs is subdivided into four simpler operations: Compose, Kill, Lift, and Permute. The N2M operation provides mutual (co)induction principles and (co)recursors for nested (co)datatypes. [LLL_Basis_Reduction] title = A verified LLL algorithm author = Ralph Bottesch <>, Jose Divasón , Maximilian Haslbeck , Sebastiaan Joosten , René Thiemann , Akihisa Yamada<> topic = Computer science/Algorithms/Mathematical, Mathematics/Algebra date = 2018-02-02 notify = ralph.bottesch@uibk.ac.at, jose.divason@unirioja.es, maximilian.haslbeck@uibk.ac.at, s.j.c.joosten@utwente.nl, rene.thiemann@uibk.ac.at, ayamada@trs.cm.is.nagoya-u.ac.jp abstract = The Lenstra-Lenstra-Lovász basis reduction algorithm, also known as LLL algorithm, is an algorithm to find a basis with short, nearly orthogonal vectors of an integer lattice. Thereby, it can also be seen as an approximation to solve the shortest vector problem (SVP), which is an NP-hard problem, where the approximation quality solely depends on the dimension of the lattice, but not the lattice itself. The algorithm also possesses many applications in diverse fields of computer science, from cryptanalysis to number theory, but it is specially well-known since it was used to implement the first polynomial-time algorithm to factor polynomials. In this work we present the first mechanized soundness proof of the LLL algorithm to compute short vectors in lattices. The formalization follows a textbook by von zur Gathen and Gerhard. extra-history = Change history: [2018-04-16]: Integrated formal complexity bounds (Haslbeck, Thiemann) [2018-05-25]: Integrated much faster LLL implementation based on integer arithmetic (Bottesch, Haslbeck, Thiemann) [LLL_Factorization] title = A verified factorization algorithm for integer polynomials with polynomial complexity author = Jose Divasón , Sebastiaan Joosten , René Thiemann , Akihisa Yamada topic = Mathematics/Algebra date = 2018-02-06 notify = jose.divason@unirioja.es, s.j.c.joosten@utwente.nl, rene.thiemann@uibk.ac.at, ayamada@trs.cm.is.nagoya-u.ac.jp abstract = Short vectors in lattices and factors of integer polynomials are related. Each factor of an integer polynomial belongs to a certain lattice. When factoring polynomials, the condition that we are looking for an irreducible polynomial means that we must look for a small element in a lattice, which can be done by a basis reduction algorithm. 
In this development we formalize this connection and thereby one main application of the LLL basis reduction algorithm: an algorithm to factor square-free integer polynomials which runs in polynomial time. The work is based on our previous Berlekamp–Zassenhaus development, where the exponential reconstruction phase has been replaced by the polynomial-time basis reduction algorithm. Thanks to this formalization we found a serious flaw in a textbook. [Treaps] title = Treaps author = Maximilian Haslbeck , Manuel Eberl , Tobias Nipkow topic = Computer science/Data structures date = 2018-02-06 notify = eberlm@in.tum.de abstract =

A Treap is a binary tree whose nodes contain pairs consisting of some payload and an associated priority. It must have the search-tree property w.r.t. the payloads and the heap property w.r.t. the priorities. Treaps are an interesting data structure that is related to binary search trees (BSTs) in the following way: if one forgets all the priorities of a treap, the resulting BST is exactly the same as if one had inserted the elements into an empty BST in order of ascending priority. This means that a treap behaves like a BST where we can pretend the elements were inserted in a different order from the one in which they were actually inserted.

In particular, by choosing these priorities at random upon insertion of an element, we can pretend that we inserted the elements in random order, so that the shape of the resulting tree is that of a random BST no matter in what order we insert the elements. This is the main result of this formalisation.
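
As a point of reference (stated here informally and not necessarily in the exact form proved in the entry), the expected depth of a node in a random BST on n distinct keys is of order $$2 \ln n = O(\log n),$$ so a treap with independently drawn random priorities offers logarithmic expected search, insertion and deletion cost regardless of the insertion order.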

[Skip_Lists] title = Skip Lists author = Max W. Haslbeck , Manuel Eberl topic = Computer science/Data structures date = 2020-01-09 notify = max.haslbeck@gmx.de abstract =

Skip lists are sorted linked lists enhanced with shortcuts and are an alternative to binary search trees. A skip list consists of multiple levels of sorted linked lists where a list on level n is a subsequence of the list on level n − 1. In the ideal case, elements are skipped in such a way that a lookup in a skip list takes O(log n) time. In a randomised skip list the skipped elements are chosen randomly.

This entry contains formalized proofs of the textbook results about the expected height and the expected length of a search path in a randomised skip list.
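
For intuition (an informal statement, not necessarily the exact form of the formalised theorems): if each element is independently promoted to the next level with probability p, then the expected height of a skip list with n elements is roughly $$\log_{1/p} n + O(1),$$ and the expected length of a search path is likewise O(log n).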

[Mersenne_Primes] title = Mersenne primes and the Lucas–Lehmer test author = Manuel Eberl topic = Mathematics/Number theory date = 2020-01-17 notify = eberlm@in.tum.de abstract =

This article provides formal proofs of basic properties of Mersenne numbers, i. e. numbers of the form 2^n - 1, and especially of Mersenne primes.

In particular, an efficient, verified, and executable version of the Lucas–Lehmer test is developed. This test decides primality for Mersenne numbers in time polynomial in n.
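
For reference, the Lucas–Lehmer test in its usual textbook form: for an odd prime p, define s_0 = 4 and s_{k+1} = s_k^2 - 2; then $$M_p = 2^p - 1 \text{ is prime} \iff s_{p-2} \equiv 0 \pmod{M_p}.$$ The executable version developed in this entry may organise the computation differently, e.g. by reducing modulo M_p after every squaring step.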

[Hoare_Time] title = Hoare Logics for Time Bounds author = Maximilian P. L. Haslbeck , Tobias Nipkow topic = Computer science/Programming languages/Logics date = 2018-02-26 notify = haslbema@in.tum.de abstract = We study three different Hoare logics for reasoning about time bounds of imperative programs and formalize them in Isabelle/HOL: a classical Hoare like logic due to Nielson, a logic with potentials due to Carbonneaux et al. and a separation logic following work by Atkey, Chaguérand and Pottier. These logics are formally shown to be sound and complete. Verification condition generators are developed and are shown sound and complete too. We also consider variants of the systems where we abstract from multiplicative constants in the running time bounds, thus supporting a big-O style of reasoning. Finally we compare the expressive power of the three systems. [Architectural_Design_Patterns] title = A Theory of Architectural Design Patterns author = Diego Marmsoler topic = Computer science/System description languages date = 2018-03-01 notify = diego.marmsoler@tum.de abstract = The following document formalizes and verifies several architectural design patterns. Each pattern specification is formalized in terms of a locale where the locale assumptions correspond to the assumptions which a pattern poses on an architecture. Thus, pattern specifications may build on top of each other by interpreting the corresponding locale. A pattern is verified using the framework provided by the AFP entry Dynamic Architectures. Currently, the document consists of formalizations of 4 different patterns: the singleton, the publisher subscriber, the blackboard pattern, and the blockchain pattern. Thereby, the publisher component of the publisher subscriber pattern is modeled as an instance of the singleton pattern and the blackboard pattern is modeled as an instance of the publisher subscriber pattern. In general, this entry provides the first steps towards an overall theory of architectural design patterns. extra-history = Change history: [2018-05-25]: changing the major assumption for blockchain architectures from alternative minings to relative mining frequencies (revision 5043c5c71685)
[2019-04-08]: adapting the terminology: honest instead of trusted, dishonest instead of untrusted (revision 7af3431a22ae) [Weight_Balanced_Trees] title = Weight-Balanced Trees author = Tobias Nipkow , Stefan Dirix<> topic = Computer science/Data structures date = 2018-03-13 notify = nipkow@in.tum.de abstract = This theory provides a verified implementation of weight-balanced trees following the work of Hirai and Yamamoto who proved that all parameters in a certain range are valid, i.e. guarantee that insertion and deletion preserve weight-balance. Instead of a general theorem we provide parameterized proofs of preservation of the invariant that work for many (all?) valid parameters. [Fishburn_Impossibility] title = The Incompatibility of Fishburn-Strategyproofness and Pareto-Efficiency author = Felix Brandt , Manuel Eberl , Christian Saile , Christian Stricker topic = Mathematics/Games and economics date = 2018-03-22 notify = eberlm@in.tum.de abstract =

This formalisation contains the proof that there is no anonymous Social Choice Function for at least three agents and alternatives that fulfils both Pareto-Efficiency and Fishburn-Strategyproofness. It was derived from a proof of Brandt et al., which relies on an unverified translation of a fixed finite instance of the original problem to SAT. This Isabelle proof contains a machine-checked version of both the statement for exactly three agents and alternatives and the lifting to the general case.

[BNF_CC] title = Bounded Natural Functors with Covariance and Contravariance author = Andreas Lochbihler , Joshua Schneider topic = Computer science/Functional programming, Tools date = 2018-04-24 notify = mail@andreas-lochbihler.de, joshua.schneider@inf.ethz.ch abstract = Bounded natural functors (BNFs) provide a modular framework for the construction of (co)datatypes in higher-order logic. Their functorial operations, the mapper and relator, are restricted to a subset of the parameters, namely those where recursion can take place. For certain applications, such as free theorems, data refinement, quotients, and generalised rewriting, it is desirable that these operations do not ignore the other parameters. In this article, we formalise the generalisation BNFCC that extends the mapper and relator to covariant and contravariant parameters. We show that
  1. BNFCCs are closed under functor composition and least and greatest fixpoints,
  2. subtypes inherit the BNFCC structure under conditions that generalise those for the BNF case, and
  3. BNFCCs preserve quotients under mild conditions.
These proofs are carried out for abstract BNFCCs similar to the AFP entry BNF Operations. In addition, we apply the BNFCC theory to several concrete functors. [Modular_Assembly_Kit_Security] title = An Isabelle/HOL Formalization of the Modular Assembly Kit for Security Properties author = Oliver Bračevac , Richard Gay , Sylvia Grewe , Heiko Mantel , Henning Sudbrock , Markus Tasch topic = Computer science/Security date = 2018-05-07 notify = tasch@mais.informatik.tu-darmstadt.de abstract = The "Modular Assembly Kit for Security Properties" (MAKS) is a framework for both the definition and verification of possibilistic information-flow security properties at the specification-level. MAKS supports the uniform representation of a wide range of possibilistic information-flow properties and provides support for the verification of such properties via unwinding results and compositionality results. We provide a formalization of this framework in Isabelle/HOL. [AxiomaticCategoryTheory] title = Axiom Systems for Category Theory in Free Logic author = Christoph Benzmüller , Dana Scott topic = Mathematics/Category theory date = 2018-05-23 notify = c.benzmueller@gmail.com abstract = This document provides a concise overview on the core results of our previous work on the exploration of axioms systems for category theory. Extending the previous studies (http://arxiv.org/abs/1609.01493) we include one further axiomatic theory in our experiments. This additional theory has been suggested by Mac Lane in 1948. We show that the axioms proposed by Mac Lane are equivalent to the ones we studied before, which includes an axioms set suggested by Scott in the 1970s and another axioms set proposed by Freyd and Scedrov in 1990, which we slightly modified to remedy a minor technical issue. [OpSets] title = OpSets: Sequential Specifications for Replicated Datatypes author = Martin Kleppmann , Victor B. F. Gomes , Dominic P. Mulligan , Alastair R. Beresford topic = Computer science/Algorithms/Distributed, Computer science/Data structures date = 2018-05-10 notify = vb358@cam.ac.uk abstract = We introduce OpSets, an executable framework for specifying and reasoning about the semantics of replicated datatypes that provide eventual consistency in a distributed system, and for mechanically verifying algorithms that implement these datatypes. Our approach is simple but expressive, allowing us to succinctly specify a variety of abstract datatypes, including maps, sets, lists, text, graphs, trees, and registers. Our datatypes are also composable, enabling the construction of complex data structures. To demonstrate the utility of OpSets for analysing replication algorithms, we highlight an important correctness property for collaborative text editing that has traditionally been overlooked; algorithms that do not satisfy this property can exhibit awkward interleaving of text. We use OpSets to specify this correctness property and prove that although one existing replication algorithm satisfies this property, several other published algorithms do not. [Irrationality_J_Hancl] title = Irrational Rapidly Convergent Series author = Angeliki Koutsoukou-Argyraki , Wenda Li topic = Mathematics/Number theory, Mathematics/Analysis date = 2018-05-23 notify = ak2110@cam.ac.uk, wl302@cam.ac.uk abstract = We formalize with Isabelle/HOL a proof of a theorem by J. Hancl asserting the irrationality of the sum of a series consisting of rational numbers, built up by sequences that fulfill certain properties. 
Even though the criterion is a number theoretic result, the proof makes use only of analytical arguments. We also formalize a corollary of the theorem for a specific series fulfilling the assumptions of the theorem. [Optimal_BST] title = Optimal Binary Search Trees author = Tobias Nipkow , Dániel Somogyi <> topic = Computer science/Algorithms, Computer science/Data structures date = 2018-05-27 notify = nipkow@in.tum.de abstract = This article formalizes recursive algorithms for the construction of optimal binary search trees given fixed access frequencies. We follow Knuth (1971), Yao (1980) and Mehlhorn (1984). The algorithms are memoized with the help of the AFP article Monadification, Memoization and Dynamic Programming, thus yielding dynamic programming algorithms. [Projective_Geometry] title = Projective Geometry author = Anthony Bordg topic = Mathematics/Geometry date = 2018-06-14 notify = apdb3@cam.ac.uk abstract = We formalize the basics of projective geometry. In particular, we give a proof of the so-called Hessenberg's theorem in projective plane geometry. We also provide a proof of the so-called Desargues's theorem based on an axiomatization of (higher) projective space geometry using the notion of rank of a matroid. This last approach allows to handle incidence relations in an homogeneous way dealing only with points and without the need of talking explicitly about lines, planes or any higher entity. [Localization_Ring] title = The Localization of a Commutative Ring author = Anthony Bordg topic = Mathematics/Algebra date = 2018-06-14 notify = apdb3@cam.ac.uk abstract = We formalize the localization of a commutative ring R with respect to a multiplicative subset (i.e. a submonoid of R seen as a multiplicative monoid). This localization is itself a commutative ring and we build the natural homomorphism of rings from R to its localization. [Minsky_Machines] title = Minsky Machines author = Bertram Felgenhauer<> topic = Logic/Computability date = 2018-08-14 notify = int-e@gmx.de abstract =

We formalize undecidability results for Minsky machines. To this end, we also formalize recursive inseparability.
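
For reference, the standard notion: two sets A, B ⊆ ℕ are recursively inseparable if there is no decidable (recursive) set C with $$A \subseteq C \quad\text{and}\quad B \cap C = \emptyset.$$ The formal definition used in this entry may be stated in an equivalent but syntactically different way.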

We start by proving that Minsky machines can compute arbitrary primitive recursive and recursive functions. We then show that there is a deterministic Minsky machine with one argument and two final states such that the set of inputs that are accepted in one state is recursively inseparable from the set of inputs that are accepted in the other state.

As a corollary, the set of Minsky configurations that reach the first state but not the second is recursively inseparable from the set of Minsky configurations that reach the second state but not the first. In particular, both these sets are undecidable.

We do not prove that recursive functions can simulate Minsky machines.

[Neumann_Morgenstern_Utility] title = Von-Neumann-Morgenstern Utility Theorem author = Julian Parsert, Cezary Kaliszyk topic = Mathematics/Games and economics license = LGPL date = 2018-07-04 notify = julian.parsert@uibk.ac.at, cezary.kaliszyk@uibk.ac.at abstract = Utility functions form an essential part of game theory and economics. In order to guarantee the existence of utility functions, most of the time sufficient properties are assumed in an axiomatic manner. One famous and very common set of such assumptions is that of expected utility theory. Here, the rationality, continuity, and independence of preferences are assumed. The von-Neumann-Morgenstern Utility theorem shows that these assumptions are necessary and sufficient for an expected utility function to exist. This theorem was proven by Neumann and Morgenstern in ``Theory of Games and Economic Behavior'' which is regarded as one of the most influential works in game theory. The formalization includes formal definitions of the underlying concepts including continuity and independence of preferences. [Simplex] title = An Incremental Simplex Algorithm with Unsatisfiable Core Generation author = Filip Marić , Mirko Spasić , René Thiemann topic = Computer science/Algorithms/Optimization date = 2018-08-24 notify = rene.thiemann@uibk.ac.at abstract = We present an Isabelle/HOL formalization and total correctness proof for the incremental version of the Simplex algorithm which is used in most state-of-the-art SMT solvers. It supports extraction of satisfying assignments, extraction of minimal unsatisfiable cores, incremental assertion of constraints and backtracking. The formalization relies on stepwise program refinement, starting from a simple specification, going through a number of refinement steps, and ending up in a fully executable functional implementation. Symmetries present in the algorithm are handled with special care. [Budan_Fourier] title = The Budan-Fourier Theorem and Counting Real Roots with Multiplicity author = Wenda Li topic = Mathematics/Analysis date = 2018-09-02 notify = wl302@cam.ac.uk, liwenda1990@hotmail.com abstract = This entry is mainly about counting and approximating real roots (of a polynomial) with multiplicity. We have first formalised the Budan-Fourier theorem: given a polynomial with real coefficients, we can calculate sign variations on Fourier sequences to over-approximate the number of real roots (counting multiplicity) within an interval. When all roots are known to be real, the over-approximation becomes tight: we can utilise this theorem to count real roots exactly. It is also worth noting that Descartes' rule of signs is a direct consequence of the Budan-Fourier theorem, and has been included in this entry. In addition, we have extended the previously formalised Sturm's theorem to count real roots with multiplicity, while the original Sturm's theorem only counts distinct real roots. Compared to the Budan-Fourier theorem, our extended Sturm's theorem always counts roots exactly but may suffer from greater computational cost. [Quaternions] title = Quaternions author = Lawrence C. Paulson topic = Mathematics/Algebra, Mathematics/Geometry date = 2018-09-05 notify = lp15@cam.ac.uk abstract = This theory is inspired by the HOL Light development of quaternions, but follows its own route. Quaternions are developed coinductively, as in the existing formalisation of the complex numbers. Quaternions are quickly shown to belong to the type classes of real normed division algebras and real inner product spaces.
And therefore they inherit a great body of facts involving algebraic laws, limits, continuity, etc., which must be proved explicitly in the HOL Light version. The development concludes with the geometric interpretation of the product of imaginary quaternions. [Octonions] title = Octonions author = Angeliki Koutsoukou-Argyraki topic = Mathematics/Algebra, Mathematics/Geometry date = 2018-09-14 notify = ak2110@cam.ac.uk abstract = We develop the basic theory of Octonions, including various identities and properties of the octonions and of the octonionic product, a description of 7D isometries and representations of orthogonal transformations. To this end we first develop the theory of the vector cross product in 7 dimensions. The development of the theory of Octonions is inspired by that of the theory of Quaternions by Lawrence Paulson. However, we do not work within the type class real_algebra_1 because the octonionic product is not associative. [Aggregation_Algebras] title = Aggregation Algebras author = Walter Guttmann topic = Mathematics/Algebra date = 2018-09-15 notify = walter.guttmann@canterbury.ac.nz abstract = We develop algebras for aggregation and minimisation for weight matrices and for edge weights in graphs. We verify the correctness of Prim's and Kruskal's minimum spanning tree algorithms based on these algebras. We also show numerous instances of these algebras based on linearly ordered commutative semigroups. [Prime_Number_Theorem] title = The Prime Number Theorem author = Manuel Eberl , Lawrence C. Paulson topic = Mathematics/Number theory date = 2018-09-19 notify = eberlm@in.tum.de abstract =

This article provides a short proof of the Prime Number Theorem in several equivalent forms, most notably π(x) ~ x/ln x, where π(x) is the number of primes no larger than x. It also defines other basic number-theoretic functions related to primes, like Chebyshev's functions ϑ and ψ and the “n-th prime number” function p_n. Various bounds and relationships between these functions are also shown. Lastly, we derive Mertens' First and Second Theorem, i.e. ∑p≤x ln p/p = ln x + O(1) and ∑p≤x 1/p = ln ln x + M + O(1/ln x). We also give explicit bounds for the remainder terms.
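For orientation, the Chebyshev functions named above have the following standard definitions (stated here as well-known number-theoretic facts, not quoted from the entry); the Prime Number Theorem is equivalent to each of ϑ(x) ~ x and ψ(x) ~ x:

    \vartheta(x) = \sum_{p \le x} \ln p, \qquad
    \psi(x) = \sum_{p^k \le x} \ln p = \sum_{n \le x} \Lambda(n).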

The proof of the Prime Number Theorem builds on a library of Dirichlet series and analytic combinatorics. We essentially follow the presentation by Newman. The core part of the proof is a Tauberian theorem for Dirichlet series, which is proven using complex analysis and then used to strengthen Mertens' First Theorem to ∑p≤x ln p/p = ln x + c + o(1).

A variant of this proof has been formalised before by Harrison in HOL Light, and formalisations of Selberg's elementary proof exist both by Avigad et al. in Isabelle and by Carneiro in Metamath. The advantage of the analytic proof is that, while it requires more powerful mathematical tools, it is considerably shorter and clearer. This article attempts to provide a short and clear formalisation of all components of that proof using the full range of mathematical machinery available in Isabelle, staying as close as possible to Newman's simple paper proof.

[Signature_Groebner] title = Signature-Based Gröbner Basis Algorithms author = Alexander Maletzky topic = Mathematics/Algebra, Computer science/Algorithms/Mathematical date = 2018-09-20 notify = alexander.maletzky@risc.jku.at abstract =

This article formalizes signature-based algorithms for computing Gröbner bases. Such algorithms are, in general, superior to other algorithms in terms of efficiency, and have not been formalized in any proof assistant so far. The present development is both generic, in the sense that most known variants of signature-based algorithms are covered by it, and effectively executable on concrete input thanks to Isabelle's code generator. Sample computations of benchmark problems show that the verified implementation of signature-based algorithms indeed outperforms the existing implementation of Buchberger's algorithm in Isabelle/HOL.
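To give a concrete impression of what such a computation produces (independently of the verified Isabelle/HOL implementation described here), a Gröbner basis of a small made-up polynomial system can be computed with SymPy, which uses a Buchberger-style algorithm rather than a signature-based one:

    # Illustration only: a Groebner basis computed with SymPy's built-in
    # Buchberger-style implementation, not the verified signature-based
    # algorithms formalised in this entry.
    from sympy import symbols, groebner

    x, y, z = symbols('x y z')
    G = groebner([x*y - z, y*z - x, x*z - y], x, y, z, order='lex')
    print(G)   # prints the reduced Groebner basis w.r.t. the lexicographic order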

Besides total correctness of the algorithms, the article also proves that under certain conditions they a priori detect and avoid all useless zero-reductions, and always return 'minimal' (in some sense) Gröbner bases if an input parameter is chosen in the right way.

The formalization follows the recent survey article by Eder and Faugère.

[Factored_Transition_System_Bounding] title = Upper Bounding Diameters of State Spaces of Factored Transition Systems author = Friedrich Kurz <>, Mohammad Abdulaziz topic = Computer science/Automata and formal languages, Mathematics/Graph theory date = 2018-10-12 notify = friedrich.kurz@tum.de, mohammad.abdulaziz@in.tum.de abstract = A completeness threshold is required to guarantee the completeness of planning as satisfiability, and bounded model checking of safety properties. One valid completeness threshold is the diameter of the underlying transition system. The diameter is the maximum element in the set of lengths of all shortest paths between pairs of states. The diameter is not calculated exactly in our setting, where the transition system is succinctly described using a (propositionally) factored representation. Rather, an upper bound on the diameter is calculated compositionally, by bounding the diameters of small abstract subsystems, and then composing those. We port a HOL4 formalisation of a compositional algorithm for computing a relatively tight upper bound on the system diameter. This compositional algorithm exploits acyclicity in the state space to achieve compositionality, and it was introduced by Abdulaziz et al. The formalisation that we port is described as a part of another paper by Abdulaziz et al. As a part of this porting we developed a library about transition systems, which shall be of use in future related mechanisation efforts. [Smooth_Manifolds] title = Smooth Manifolds author = Fabian Immler , Bohua Zhan topic = Mathematics/Analysis, Mathematics/Topology date = 2018-10-22 notify = immler@in.tum.de, bzhan@ios.ac.cn abstract = We formalize the definition and basic properties of smooth manifolds in Isabelle/HOL. Concepts covered include partition of unity, tangent and cotangent spaces, and the fundamental theorem of path integrals. We also examine some concrete manifolds such as spheres and projective spaces. The formalization makes extensive use of the analysis and linear algebra libraries in Isabelle/HOL, in particular its “types-to-sets” mechanism. [Matroids] title = Matroids author = Jonas Keinholz<> topic = Mathematics/Combinatorics date = 2018-11-16 notify = eberlm@in.tum.de abstract =

This article defines the combinatorial structures known as Independence Systems and Matroids and provides basic concepts and theorems related to them. These structures play an important role in combinatorial optimisation, e. g. greedy algorithms such as Kruskal's algorithm. The development is based on Oxley's `What is a Matroid?'.
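The connection to greedy algorithms mentioned above can be sketched as follows (an informal Python sketch with a hypothetical independence-oracle interface, not part of the formalised development):

    # Generic greedy algorithm on an independence system. On a matroid it
    # returns a maximum-weight basis; when `independent` accepts exactly the
    # acyclic edge sets of a graph, it specialises to Kruskal's algorithm.
    def greedy_basis(elements, weight, independent):
        """elements: ground set; weight: element -> number;
        independent: frozenset -> bool (hypothetical oracle)."""
        basis = set()
        for e in sorted(elements, key=weight, reverse=True):
            if independent(frozenset(basis | {e})):
                basis.add(e)
        return basis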

[Graph_Saturation] title = Graph Saturation author = Sebastiaan J. C. Joosten<> topic = Logic/Rewriting, Mathematics/Graph theory date = 2018-11-23 notify = sjcjoosten@gmail.com abstract = This is an Isabelle/HOL formalisation of graph saturation, closely following a paper by the author on graph saturation. Nine out of ten lemmas of the original paper are proven in this formalisation. The formalisation additionally includes two theorems that show the main premise of the paper: that consistency and entailment are decided through graph saturation. This formalisation does not give executable code, and it did not implement any of the optimisations suggested in the paper. [Functional_Ordered_Resolution_Prover] title = A Verified Functional Implementation of Bachmair and Ganzinger's Ordered Resolution Prover author = Anders Schlichtkrull , Jasmin Christian Blanchette , Dmitriy Traytel topic = Logic/General logic/Mechanization of proofs date = 2018-11-23 notify = andschl@dtu.dk,j.c.blanchette@vu.nl,traytel@inf.ethz.ch abstract = This Isabelle/HOL formalization refines the abstract ordered resolution prover presented in Section 4.3 of Bachmair and Ganzinger's "Resolution Theorem Proving" chapter in the Handbook of Automated Reasoning. The result is a functional implementation of a first-order prover. [Auto2_HOL] title = Auto2 Prover author = Bohua Zhan topic = Tools date = 2018-11-20 notify = bzhan@ios.ac.cn abstract = Auto2 is a saturation-based heuristic prover for higher-order logic, implemented as a tactic in Isabelle. This entry contains the instantiation of auto2 for Isabelle/HOL, along with two basic examples: solutions to some of the Pelletier’s problems, and elementary number theory of primes. [Order_Lattice_Props] title = Properties of Orderings and Lattices author = Georg Struth topic = Mathematics/Order date = 2018-12-11 notify = g.struth@sheffield.ac.uk abstract = These components add further fundamental order and lattice-theoretic concepts and properties to Isabelle's libraries. They follow by and large the introductory sections of the Compendium of Continuous Lattices, covering directed and filtered sets, down-closed and up-closed sets, ideals and filters, Galois connections, closure and co-closure operators. Some emphasis is on duality and morphisms between structures, as in the Compendium. To this end, three ad-hoc approaches to duality are compared. [Quantales] title = Quantales author = Georg Struth topic = Mathematics/Algebra date = 2018-12-11 notify = g.struth@sheffield.ac.uk abstract = These mathematical components formalise basic properties of quantales, together with some important models, constructions, and concepts, including quantic nuclei and conuclei. [Transformer_Semantics] title = Transformer Semantics author = Georg Struth topic = Mathematics/Algebra, Computer science/Semantics date = 2018-12-11 notify = g.struth@sheffield.ac.uk abstract = These mathematical components formalise predicate transformer semantics for programs, yet currently only for partial correctness and in the absence of faults. A first part for isotone (or monotone), Sup-preserving and Inf-preserving transformers follows Back and von Wright's approach, with additional emphasis on the quantalic structure of algebras of transformers. 
The second part develops Sup-preserving and Inf-preserving predicate transformers from the powerset monad, via its Kleisli category and Eilenberg-Moore algebras, with emphasis on adjunctions and dualities, as well as isomorphisms between relations, state transformers and predicate transformers. [Concurrent_Revisions] title = Formalization of Concurrent Revisions author = Roy Overbeek topic = Computer science/Concurrency date = 2018-12-25 notify = Roy.Overbeek@cwi.nl abstract = Concurrent revisions is a concurrency control model developed by Microsoft Research. It has many interesting properties that distinguish it from other well-known models such as transactional memory. One of these properties is determinacy: programs written within the model always produce the same outcome, independent of scheduling activity. The concurrent revisions model has an operational semantics, with an informal proof of determinacy. This document contains an Isabelle/HOL formalization of this semantics and the proof of determinacy. [Core_DOM] title = A Formal Model of the Document Object Model author = Achim D. Brucker , Michael Herzberg topic = Computer science/Data structures date = 2018-12-26 notify = adbrucker@0x5f.org abstract = In this AFP entry, we formalize the core of the Document Object Model (DOM). At its core, the DOM defines a tree-like data structure for representing documents in general and HTML documents in particular. It is the heart of any modern web browser. Formalizing the key concepts of the DOM is a prerequisite for the formal reasoning over client-side JavaScript programs and for the analysis of security concepts in modern web browsers. We present a formalization of the core DOM, with focus on the node-tree and the operations defined on node-trees, in Isabelle/HOL. We use the formalization to verify the functional correctness of the most important functions defined in the DOM standard. Moreover, our formalization is 1) extensible, i.e., can be extended without the need of re-proving already proven properties and 2) executable, i.e., we can generate executable code from our specification. [Store_Buffer_Reduction] title = A Reduction Theorem for Store Buffers author = Ernie Cohen , Norbert Schirmer topic = Computer science/Concurrency date = 2019-01-07 notify = norbert.schirmer@web.de abstract = When verifying a concurrent program, it is usual to assume that memory is sequentially consistent. However, most modern multiprocessors depend on store buffering for efficiency, and provide native sequential consistency only at a substantial performance penalty. To regain sequential consistency, a programmer has to follow an appropriate programming discipline. However, naïve disciplines, such as protecting all shared accesses with locks, are not flexible enough for building high-performance multiprocessor software. We present a new discipline for concurrent programming under TSO (total store order, with store buffer forwarding). It does not depend on concurrency primitives, such as locks. Instead, threads use ghost operations to acquire and release ownership of memory addresses. A thread can write to an address only if no other thread owns it, and can read from an address only if it owns it or it is shared and the thread has flushed its store buffer since it last wrote to an address it did not own. This discipline covers both coarse-grained concurrency (where data is protected by locks) as well as fine-grained concurrency (where atomic operations race to memory). 
We formalize this discipline in Isabelle/HOL, and prove that if every execution of a program in a system without store buffers follows the discipline, then every execution of the program with store buffers is sequentially consistent. Thus, we can show sequential consistency under TSO by ordinary assertional reasoning about the program, without having to consider store buffers at all. [IMP2] title = IMP2 – Simple Program Verification in Isabelle/HOL author = Peter Lammich , Simon Wimmer topic = Computer science/Programming languages/Logics, Computer science/Algorithms date = 2019-01-15 notify = lammich@in.tum.de abstract = IMP2 is a simple imperative language together with Isabelle tooling to create a program verification environment in Isabelle/HOL. The tools include a C-like syntax, a verification condition generator, and Isabelle commands for the specification of programs. The framework is modular, i.e., it allows easy reuse of already proved programs within larger programs. This entry comes with a quickstart guide and a large collection of examples, spanning basic algorithms with simple proofs to more advanced algorithms and proof techniques like data refinement. Some highlights from the examples are:
  • Bisection Square Root,
  • Extended Euclid,
  • Exponentiation by Squaring,
  • Binary Search,
  • Insertion Sort,
  • Quicksort,
  • Depth First Search.
The abstract syntax and semantics are very simple and well-documented. They are suitable to be used in a course, as extension to the IMP language which comes with the Isabelle distribution. While this entry is limited to a simple imperative language, the ideas could be extended to more sophisticated languages. [Farkas] title = Farkas' Lemma and Motzkin's Transposition Theorem author = Ralph Bottesch , Max W. Haslbeck , René Thiemann topic = Mathematics/Algebra date = 2019-01-17 notify = rene.thiemann@uibk.ac.at abstract = We formalize a proof of Motzkin's transposition theorem and Farkas' lemma in Isabelle/HOL. Our proof is based on the formalization of the simplex algorithm which, given a set of linear constraints, either returns a satisfying assignment to the problem or detects unsatisfiability. By reusing facts about the simplex algorithm we show that a set of linear constraints is unsatisfiable if and only if there is a linear combination of the constraints which evaluates to a trivially unsatisfiable inequality. [Auto2_Imperative_HOL] title = Verifying Imperative Programs using Auto2 author = Bohua Zhan topic = Computer science/Algorithms, Computer science/Data structures date = 2018-12-21 notify = bzhan@ios.ac.cn abstract = This entry contains the application of auto2 to verifying functional and imperative programs. Algorithms and data structures that are verified include linked lists, binary search trees, red-black trees, interval trees, priority queue, quicksort, union-find, Dijkstra's algorithm, and a sweep-line algorithm for detecting rectangle intersection. The imperative verification is based on Imperative HOL and its separation logic framework. A major goal of this work is to set up automation in order to reduce the length of proof that the user needs to provide, both for verifying functional programs and for working with separation logic. [UTP] title = Isabelle/UTP: Mechanised Theory Engineering for Unifying Theories of Programming author = Simon Foster , Frank Zeyda<>, Yakoub Nemouchi , Pedro Ribeiro<>, Burkhart Wolff topic = Computer science/Programming languages/Logics date = 2019-02-01 notify = simon.foster@york.ac.uk abstract = Isabelle/UTP is a mechanised theory engineering toolkit based on Hoare and He’s Unifying Theories of Programming (UTP). UTP enables the creation of denotational, algebraic, and operational semantics for different programming languages using an alphabetised relational calculus. We provide a semantic embedding of the alphabetised relational calculus in Isabelle/HOL, including new type definitions, relational constructors, automated proof tactics, and accompanying algebraic laws. Isabelle/UTP can be used to both capture laws of programming for different languages, and put these fundamental theorems to work in the creation of associated verification tools, using calculi like Hoare logics. This document describes the relational core of the UTP in Isabelle/HOL. [HOL-CSP] title = HOL-CSP Version 2.0 author = Safouan Taha , Lina Ye , Burkhart Wolff topic = Computer science/Concurrency/Process calculi, Computer science/Semantics date = 2019-04-26 notify = wolff@lri.fr abstract = This is a complete formalization of the work of Hoare and Roscoe on the denotational semantics of the Failure/Divergence Model of CSP. It follows essentially the presentation of CSP in Roscoe’s Book ”Theory and Practice of Concurrency” [8] and the semantic details in a joint Paper of Roscoe and Brooks ”An improved failures model for communicating processes". 
The present work is based on a prior formalization attempt, called HOL-CSP 1.0, done in 1997 by H. Tej and B. Wolff with the Isabelle proof technology available at that time. This work revealed minor, but omnipresent foundational errors in key concepts like the process invariant. The present version HOL-CSP profits from substantially improved libraries (notably HOLCF), improved automated proof techniques, and structured proof techniques in Isar and is substantially shorter but more complete. [Probabilistic_Prime_Tests] title = Probabilistic Primality Testing author = Daniel Stüwe<>, Manuel Eberl topic = Mathematics/Number theory date = 2019-02-11 notify = eberlm@in.tum.de abstract =

The most efficient known primality tests are probabilistic in the sense that they use randomness and may, with some probability, mistakenly classify a composite number as prime – but never a prime number as composite. Examples of this are the Miller–Rabin test, the Solovay–Strassen test, and (in most cases) Fermat's test.

This entry defines these three tests and proves their correctness. It also develops some of the number-theoretic foundations, such as Carmichael numbers and the Jacobi symbol with an efficient executable algorithm to compute it.
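To give an informal flavour of the kind of algorithm being verified, here is a textbook Miller-Rabin sketch in Python (illustration only; it is not the formalised definition from the entry):

    # Textbook Miller-Rabin sketch (illustration only, not the verified definition).
    import random

    def miller_rabin(n, rounds=20):
        """Return True if n is probably prime; an odd composite passes a
        single round with probability at most 1/4."""
        if n < 2:
            return False
        for p in (2, 3, 5, 7, 11, 13):
            if n % p == 0:
                return n == p
        d, s = n - 1, 0
        while d % 2 == 0:           # write n - 1 = 2^s * d with d odd
            d //= 2
            s += 1
        for _ in range(rounds):
            a = random.randrange(2, n - 1)
            x = pow(a, d, n)
            if x in (1, n - 1):
                continue
            for _ in range(s - 1):
                x = pow(x, 2, n)
                if x == n - 1:
                    break
            else:
                return False        # a is a witness of compositeness
        return True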

[Kruskal] title = Kruskal's Algorithm for Minimum Spanning Forest author = Maximilian P.L. Haslbeck , Peter Lammich , Julian Biendarra<> topic = Computer science/Algorithms/Graph date = 2019-02-14 notify = haslbema@in.tum.de, lammich@in.tum.de abstract = This Isabelle/HOL formalization defines a greedy algorithm for finding a minimum weight basis on a weighted matroid and proves its correctness. This algorithm is an abstract version of Kruskal's algorithm. We interpret the abstract algorithm for the cycle matroid (i.e. forests in a graph) and refine it to imperative executable code using an efficient union-find data structure. Our formalization can be instantiated for different graph representations. We provide instantiations for undirected graphs and symmetric directed graphs. [List_Inversions] title = The Inversions of a List author = Manuel Eberl topic = Computer science/Algorithms date = 2019-02-01 notify = eberlm@in.tum.de abstract =

This entry defines the set of inversions of a list, i.e. the pairs of indices that violate sortedness. It also proves the correctness of the well-known O(n log n) divide-and-conquer algorithm to compute the number of inversions.
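The divide-and-conquer idea can be sketched in Python as follows (an informal illustration, not the formalised version): count inversions within each half recursively and add the cross-inversions discovered while merging the two sorted halves.

    # Informal sketch of the O(n log n) inversion count via merge sort.
    def count_inversions(xs):
        def sort_count(a):
            if len(a) <= 1:
                return a, 0
            mid = len(a) // 2
            left, cl = sort_count(a[:mid])
            right, cr = sort_count(a[mid:])
            merged, i, j, cross = [], 0, 0, 0
            while i < len(left) and j < len(right):
                if left[i] <= right[j]:
                    merged.append(left[i]); i += 1
                else:
                    merged.append(right[j]); j += 1
                    cross += len(left) - i   # all remaining left elements exceed right[j]
            merged += left[i:] + right[j:]
            return merged, cl + cr + cross
        return sort_count(list(xs))[1]

    assert count_inversions([3, 1, 2]) == 2   # the pairs (0,1) and (0,2)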

[Prime_Distribution_Elementary] title = Elementary Facts About the Distribution of Primes author = Manuel Eberl topic = Mathematics/Number theory date = 2019-02-21 notify = eberlm@in.tum.de abstract =

This entry is a formalisation of Chapter 4 (and parts of Chapter 3) of Apostol's Introduction to Analytic Number Theory. The main topics that are addressed are properties of the distribution of prime numbers that can be shown in an elementary way (i. e. without the Prime Number Theorem), the various equivalent forms of the PNT (which imply each other in elementary ways), and consequences that follow from the PNT in elementary ways. The latter include, most notably, asymptotic bounds for the number of distinct prime factors of n, the divisor function d(n), Euler's totient function φ(n), and lcm(1,…,n).

[Safe_OCL] title = Safe OCL author = Denis Nikiforov <> topic = Computer science/Programming languages/Language definitions license = LGPL date = 2019-03-09 notify = denis.nikif@gmail.com abstract =

The theory is a formalization of the OCL type system, its abstract syntax, and its expression typing rules. It does not define a concrete syntax or a semantics. In contrast to Featherweight OCL, it is based on a deep embedding approach. The type system is defined from scratch; it is not based on the Isabelle/HOL type system.

Safe OCL distinguishes nullable and non-nullable types. The theory also gives a formal definition of safe navigation operations. The Safe OCL typing rules are much stricter than the rules given in the OCL specification, which allows one to catch more errors during the type-checking phase.

The type theory presented is four-layered: classes, basic types, generic types, errorable types. We introduce the following new types: non-nullable types (T[1]), nullable types (T[?]), OclSuper. OclSuper is a supertype of all other types (basic types, collections, tuples). This type allows us to define a total supremum function, so types form an upper semilattice. It allows us to define rich expression typing rules in an elegant manner.
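The role of a common supertype such as OclSuper in making the supremum total can be illustrated with a toy least-upper-bound computation; the hierarchy and all names below are hypothetical and serve only as an illustration:

    # Toy illustration (made-up hierarchy): with a top element, any two types
    # have a least upper bound, so the supremum is a total function.
    PARENT = {
        'Boolean[1]': 'OclAny', 'Integer[1]': 'Real[1]', 'Real[1]': 'OclAny',
        'Boolean[?]': 'OclSuper', 'OclAny': 'OclSuper',
    }

    def ancestors(t):
        chain = [t]
        while t in PARENT:
            t = PARENT[t]
            chain.append(t)
        return chain

    def sup(t1, t2):
        """Lowest common ancestor; total because every chain ends in OclSuper."""
        anc1 = ancestors(t1)
        return next(a for a in ancestors(t2) if a in anc1)

    print(sup('Integer[1]', 'Boolean[1]'))   # -> OclAny
    print(sup('Integer[1]', 'Boolean[?]'))   # -> OclSuper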

The Preliminaries chapter of the theory defines a number of helper lemmas for transitive closures and tuples. It also defines a generic object model independent of OCL, which allows one to use the theory as a reference for the formalization of analogous languages.

[QHLProver] title = Quantum Hoare Logic author = Junyi Liu<>, Bohua Zhan , Shuling Wang<>, Shenggang Ying<>, Tao Liu<>, Yangjia Li<>, Mingsheng Ying<>, Naijun Zhan<> topic = Computer science/Programming languages/Logics, Computer science/Semantics date = 2019-03-24 notify = bzhan@ios.ac.cn abstract = We formalize quantum Hoare logic as given in [1]. In particular, we specify the syntax and denotational semantics of a simple model of quantum programs. Then, we write down the rules of quantum Hoare logic for partial correctness, and show the soundness and completeness of the resulting proof system. As an application, we verify the correctness of Grover’s algorithm. [Transcendence_Series_Hancl_Rucki] title = The Transcendence of Certain Infinite Series author = Angeliki Koutsoukou-Argyraki , Wenda Li topic = Mathematics/Analysis, Mathematics/Number theory date = 2019-03-27 notify = wl302@cam.ac.uk, ak2110@cam.ac.uk abstract = We formalize the proofs of two transcendence criteria by J. Hančl and P. Rucki that assert the transcendence of the sums of certain infinite series built up by sequences that fulfil certain properties. Both proofs make use of Roth's celebrated theorem on diophantine approximations to algebraic numbers from 1955 which we implement as an assumption without having formalised its proof. [Binding_Syntax_Theory] title = A General Theory of Syntax with Bindings author = Lorenzo Gheri , Andrei Popescu topic = Computer science/Programming languages/Lambda calculi, Computer science/Functional programming, Logic/General logic/Mechanization of proofs date = 2019-04-06 notify = a.popescu@mdx.ac.uk, lor.gheri@gmail.com abstract = We formalize a theory of syntax with bindings that has been developed and refined over the last decade to support several large formalization efforts. Terms are defined for an arbitrary number of constructors of varying numbers of inputs, quotiented to alpha-equivalence and sorted according to a binding signature. The theory includes many properties of the standard operators on terms: substitution, swapping and freshness. It also includes bindings-aware induction and recursion principles and support for semantic interpretation. This work has been presented in the ITP 2017 paper “A Formalized General Theory of Syntax with Bindings”. [LTL_Master_Theorem] title = A Compositional and Unified Translation of LTL into ω-Automata author = Benedikt Seidl , Salomon Sickert topic = Computer science/Automata and formal languages date = 2019-04-16 notify = benedikt.seidl@tum.de, s.sickert@tum.de abstract = We present a formalisation of the unified translation approach of linear temporal logic (LTL) into ω-automata from [1]. This approach decomposes LTL formulas into ``simple'' languages and allows a clear separation of concerns: first, we formalise the purely logical result yielding this decomposition; second, we instantiate this generic theory to obtain a construction for deterministic (state-based) Rabin automata (DRA). We extract from this particular instantiation an executable tool translating LTL to DRAs. To the best of our knowledge this is the first verified translation from LTL to DRAs that is proven to be double exponential in the worst case which asymptotically matches the known lower bound.

[1] Javier Esparza, Jan Kretínský, Salomon Sickert. One Theorem to Rule Them All: A Unified Translation of LTL into ω-Automata. LICS 2018 [LambdaAuth] title = Formalization of Generic Authenticated Data Structures author = Matthias Brun<>, Dmitriy Traytel topic = Computer science/Security, Computer science/Programming languages/Lambda calculi date = 2019-05-14 notify = traytel@inf.ethz.ch abstract = Authenticated data structures are a technique for outsourcing data storage and maintenance to an untrusted server. The server is required to produce an efficiently checkable and cryptographically secure proof that it carried out precisely the requested computation. Miller et al. introduced λ• (pronounced lambda auth)—a functional programming language with a built-in primitive authentication construct, which supports a wide range of user-specified authenticated data structures while guaranteeing certain correctness and security properties for all well-typed programs. We formalize λ• and prove its correctness and security properties. With Isabelle's help, we uncover and repair several mistakes in the informal proofs and lemma statements. Our findings are summarized in a paper draft. [IMP2_Binary_Heap] title = Binary Heaps for IMP2 author = Simon Griebel<> topic = Computer science/Data structures, Computer science/Algorithms date = 2019-06-13 notify = s.griebel@tum.de abstract = In this submission, array-based binary minimum heaps are formalized. The correctness of the following heap operations is proved: insert, get-min, delete-min and make-heap. These are then used to verify an in-place heapsort. The formalization is based on IMP2, an imperative program verification framework implemented in Isabelle/HOL. The verified heap functions are iterative versions of the partly recursive functions found in "Algorithms and Data Structures – The Basic Toolbox" by K. Mehlhorn and P. Sanders and "Introduction to Algorithms" by T. H. Cormen, C. E. Leiserson, R. L. Rivest and C. Stein. [Groebner_Macaulay] title = Gröbner Bases, Macaulay Matrices and Dubé's Degree Bounds author = Alexander Maletzky topic = Mathematics/Algebra date = 2019-06-15 notify = alexander.maletzky@risc.jku.at abstract = This entry formalizes the connection between Gröbner bases and Macaulay matrices (sometimes also referred to as `generalized Sylvester matrices'). In particular, it contains a method for computing Gröbner bases, which proceeds by first constructing some Macaulay matrix of the initial set of polynomials, then row-reducing this matrix, and finally converting the result back into a set of polynomials. The output is shown to be a Gröbner basis if the Macaulay matrix constructed in the first step is sufficiently large. In order to obtain concrete upper bounds on the size of the matrix (and hence turn the method into an effectively executable algorithm), Dubé's degree bounds on Gröbner bases are utilized; consequently, they are also part of the formalization. [Linear_Inequalities] title = Linear Inequalities author = Ralph Bottesch , Alban Reynaud <>, René Thiemann topic = Mathematics/Algebra date = 2019-06-21 notify = rene.thiemann@uibk.ac.at abstract = We formalize results about linear inequalities, mainly from Schrijver's book. The main results are the proof of the fundamental theorem on linear inequalities, Farkas' lemma, Carathéodory's theorem, the Farkas-Minkowsky-Weyl theorem, the decomposition theorem of polyhedra, and Meyer's result that the integer hull of a polyhedron is a polyhedron itself.
Several theorems include bounds on the appearing numbers, and in particular we provide an a-priori bound on mixed-integer solutions of linear inequalities. [Linear_Programming] title = Linear Programming author = Julian Parsert , Cezary Kaliszyk topic = Mathematics/Algebra date = 2019-08-06 notify = julian.parsert@gmail.com, cezary.kaliszyk@uibk.ac.at abstract = We use the previous formalization of the general simplex algorithm to formulate an algorithm for solving linear programs. We encode the linear programs using only linear constraints. Solving these constraints also solves the original linear program. This algorithm is proven to be sound by applying the weak duality theorem which is also part of this formalization. [Differential_Game_Logic] title = Differential Game Logic author = André Platzer topic = Computer science/Programming languages/Logics date = 2019-06-03 notify = aplatzer@cs.cmu.edu abstract = This formalization provides differential game logic (dGL), a logic for proving properties of hybrid game. In addition to the syntax and semantics, it formalizes a uniform substitution calculus for dGL. Church's uniform substitutions substitute a term or formula for a function or predicate symbol everywhere. The uniform substitutions for dGL also substitute hybrid games for a game symbol everywhere. We prove soundness of one-pass uniform substitutions and the axioms of differential game logic with respect to their denotational semantics. One-pass uniform substitutions are faster by postponing soundness-critical admissibility checks with a linear pass homomorphic application and regain soundness by a variable condition at the replacements. The formalization is based on prior non-mechanized soundness proofs for dGL. [Complete_Non_Orders] title = Complete Non-Orders and Fixed Points author = Akihisa Yamada , Jérémy Dubut topic = Mathematics/Order date = 2019-06-27 notify = akihisayamada@nii.ac.jp, dubut@nii.ac.jp abstract = We develop an Isabelle/HOL library of order-theoretic concepts, such as various completeness conditions and fixed-point theorems. We keep our formalization as general as possible: we reprove several well-known results about complete orders, often without any properties of ordering, thus complete non-orders. In particular, we generalize the Knaster–Tarski theorem so that we ensure the existence of a quasi-fixed point of monotone maps over complete non-orders, and show that the set of quasi-fixed points is complete under a mild condition—attractivity—which is implied by either antisymmetry or transitivity. This result generalizes and strengthens a result by Stauti and Maaden. Finally, we recover Kleene’s fixed-point theorem for omega-complete non-orders, again using attractivity to prove that Kleene’s fixed points are least quasi-fixed points. [Priority_Search_Trees] title = Priority Search Trees author = Peter Lammich , Tobias Nipkow topic = Computer science/Data structures date = 2019-06-25 notify = lammich@in.tum.de abstract = We present a new, purely functional, simple and efficient data structure combining a search tree and a priority queue, which we call a priority search tree. The salient feature of priority search trees is that they offer a decrease-key operation, something that is missing from other simple, purely functional priority queue implementations. Priority search trees can be implemented on top of any search tree. This entry does the implementation for red-black trees. 
This entry formalizes the first part of our ITP-2019 proof pearl Purely Functional, Simple and Efficient Priority Search Trees and Applications to Prim and Dijkstra. [Prim_Dijkstra_Simple] title = Purely Functional, Simple, and Efficient Implementation of Prim and Dijkstra author = Peter Lammich , Tobias Nipkow topic = Computer science/Algorithms/Graph date = 2019-06-25 notify = lammich@in.tum.de abstract = We verify purely functional, simple and efficient implementations of Prim's and Dijkstra's algorithms. This constitutes the first verification of an executable and even efficient version of Prim's algorithm. This entry formalizes the second part of our ITP-2019 proof pearl Purely Functional, Simple and Efficient Priority Search Trees and Applications to Prim and Dijkstra. [MFOTL_Monitor] title = Formalization of a Monitoring Algorithm for Metric First-Order Temporal Logic author = Joshua Schneider , Dmitriy Traytel topic = Computer science/Algorithms, Logic/General logic/Temporal logic, Computer science/Automata and formal languages date = 2019-07-04 notify = joshua.schneider@inf.ethz.ch, traytel@inf.ethz.ch abstract = A monitor is a runtime verification tool that solves the following problem: Given a stream of time-stamped events and a policy formulated in a specification language, decide whether the policy is satisfied at every point in the stream. We verify the correctness of an executable monitor for specifications given as formulas in metric first-order temporal logic (MFOTL), an expressive extension of linear temporal logic with real-time constraints and first-order quantification. The verified monitor implements a simplified variant of the algorithm used in the efficient MonPoly monitoring tool. The formalization is presented in a forthcoming RV 2019 paper, which also compares the output of the verified monitor to that of other monitoring tools on randomly generated inputs. This case study revealed several errors in the optimized but unverified tools. [FOL_Seq_Calc1] title = A Sequent Calculus for First-Order Logic author = Asta Halkjær From contributors = Alexander Birch Jensen , Anders Schlichtkrull , Jørgen Villadsen topic = Logic/Proof theory date = 2019-07-18 notify = ahfrom@dtu.dk abstract = This work formalizes soundness and completeness of a one-sided sequent calculus for first-order logic. The completeness is shown via a translation from a complete semantic tableau calculus, the proof of which is based on the First-Order Logic According to Fitting theory. The calculi and proof techniques are taken from Ben-Ari's Mathematical Logic for Computer Science. [Szpilrajn] title = Szpilrajn Extension Theorem author = Peter Zeller topic = Mathematics/Order date = 2019-07-27 notify = p_zeller@cs.uni-kl.de abstract = We formalize the Szpilrajn extension theorem, also known as order-extension principal: Every strict partial order can be extended to a strict linear order. [TESL_Language] title = A Formal Development of a Polychronous Polytimed Coordination Language author = Hai Nguyen Van , Frédéric Boulanger , Burkhart Wolff topic = Computer science/System description languages, Computer science/Semantics, Computer science/Concurrency date = 2019-07-30 notify = frederic.boulanger@centralesupelec.fr, burkhart.wolff@lri.fr abstract = The design of complex systems involves different formalisms for modeling their different parts or aspects. The global model of a system may therefore consist of a coordination of concurrent sub-models that use different paradigms. 
We develop here a theory for a language used to specify the timed coordination of such heterogeneous subsystems by addressing the following issues:

  • the behavior of the sub-systems is observed only at a series of discrete instants,
  • events may occur in different sub-systems at unrelated times, leading to polychronous systems, which do not necessarily have a common base clock,
  • coordination between subsystems involves causality, so the occurrence of an event may enforce the occurrence of other events, possibly after a certain duration has elapsed or an event has occurred a given number of times,
  • the domain of time (discrete, rational, continuous...) may be different in the subsystems, leading to polytimed systems,
  • the time frames of different sub-systems may be related (for instance, time in a GPS satellite and in a GPS receiver on Earth are related although they are not the same).
Firstly, a denotational semantics of the language is defined. Then, in order to be able to incrementally check the behavior of systems, an operational semantics is given, with proofs of progress, soundness and completeness with regard to the denotational semantics. These proofs are made according to a setup that can scale up when new operators are added to the language. In order for specifications to be composed in a clean way, the language should be invariant by stuttering (i.e., adding observation instants at which nothing happens). The proof of this invariance is also given. [Stellar_Quorums] title = Stellar Quorum Systems author = Giuliano Losa topic = Computer science/Algorithms/Distributed date = 2019-08-01 notify = giuliano@galois.com abstract = We formalize the static properties of personal Byzantine quorum systems (PBQSs) and Stellar quorum systems, as described in the paper ``Stellar Consensus by Reduction'' (to appear at DISC 2019). [IMO2019] title = Selected Problems from the International Mathematical Olympiad 2019 author = Manuel Eberl topic = Mathematics/Misc date = 2019-08-05 notify = eberlm@in.tum.de abstract =

This entry contains formalisations of the answers to three of the six problems of the International Mathematical Olympiad 2019, namely Q1, Q4, and Q5.

These problems were chosen because they are particularly amenable to formalisation: they can be solved with minimal use of libraries. The remaining three concern geometry and graph theory, which, in the author's opinion, are more difficult to formalise or require a more complex library, respectively.

[Adaptive_State_Counting] title = Formalisation of an Adaptive State Counting Algorithm author = Robert Sachtleben topic = Computer science/Automata and formal languages, Computer science/Algorithms date = 2019-08-16 notify = rob_sac@uni-bremen.de abstract = This entry provides a formalisation of a refinement of an adaptive state counting algorithm, used to test for reduction between finite state machines. The algorithm has been originally presented by Hierons in the paper Testing from a Non-Deterministic Finite State Machine Using Adaptive State Counting. Definitions for finite state machines and adaptive test cases are given and many useful theorems are derived from these. The algorithm is formalised using mutually recursive functions, for which it is proven that the generated test suite is sufficient to test for reduction against finite state machines of a certain fault domain. Additionally, the algorithm is specified in a simple WHILE-language and its correctness is shown using Hoare-logic. [Jacobson_Basic_Algebra] title = A Case Study in Basic Algebra author = Clemens Ballarin topic = Mathematics/Algebra date = 2019-08-30 notify = ballarin@in.tum.de abstract = The focus of this case study is re-use in abstract algebra. It contains locale-based formalisations of selected parts of set, group and ring theory from Jacobson's Basic Algebra leading to the respective fundamental homomorphism theorems. The study is not intended as a library base for abstract algebra. It rather explores an approach towards abstract algebra in Isabelle. [Hybrid_Systems_VCs] title = Verification Components for Hybrid Systems author = Jonathan Julian Huerta y Munive <> topic = Mathematics/Algebra, Mathematics/Analysis date = 2019-09-10 notify = jjhuertaymunive1@sheffield.ac.uk, jonjulian23@gmail.com abstract = These components formalise a semantic framework for the deductive verification of hybrid systems. They support reasoning about continuous evolutions of hybrid programs in the style of differential dynamics logic. Vector fields or flows model these evolutions, and their verification is done with invariants for the former or orbits for the latter. Laws of modal Kleene algebra or categorical predicate transformers implement the verification condition generation. Examples show the approach at work. [Generic_Join] title = Formalization of Multiway-Join Algorithms author = Thibault Dardinier<> topic = Computer science/Algorithms date = 2019-09-16 notify = tdardini@student.ethz.ch, traytel@inf.ethz.ch abstract = Worst-case optimal multiway-join algorithms are recent seminal achievement of the database community. These algorithms compute the natural join of multiple relational databases and improve in the worst case over traditional query plan optimizations of nested binary joins. In 2014, Ngo, Ré, and Rudra gave a unified presentation of different multi-way join algorithms. We formalized and proved correct their "Generic Join" algorithm and extended it to support negative joins. [Aristotles_Assertoric_Syllogistic] title = Aristotle's Assertoric Syllogistic author = Angeliki Koutsoukou-Argyraki topic = Logic/Philosophical aspects date = 2019-10-08 notify = ak2110@cam.ac.uk abstract = We formalise with Isabelle/HOL some basic elements of Aristotle's assertoric syllogistic following the article from the Stanford Encyclopedia of Philosophy by Robin Smith. To this end, we use a set theoretic formulation (covering both individual and general predication). 
In particular, we formalise the deductions in the Figures and after that we present Aristotle's metatheoretical observation that all deductions in the Figures can in fact be reduced to either Barbara or Celarent. As the formal proofs prove to be straightforward, the interest of this entry lies in illustrating the functionality of Isabelle and high efficiency of Sledgehammer for simple exercises in philosophy. [VerifyThis2019] title = VerifyThis 2019 -- Polished Isabelle Solutions author = Peter Lammich<>, Simon Wimmer topic = Computer science/Algorithms date = 2019-10-16 notify = lammich@in.tum.de, wimmers@in.tum.de abstract = VerifyThis 2019 (http://www.pm.inf.ethz.ch/research/verifythis.html) was a program verification competition associated with ETAPS 2019. It was the 8th event in the VerifyThis competition series. In this entry, we present polished and completed versions of our solutions that we created during the competition. [ZFC_in_HOL] title = Zermelo Fraenkel Set Theory in Higher-Order Logic author = Lawrence C. Paulson topic = Logic/Set theory date = 2019-10-24 notify = lp15@cam.ac.uk abstract =

This entry is a new formalisation of ZFC set theory in Isabelle/HOL. It is logically equivalent to Obua's HOLZF; the point is to have the closest possible integration with the rest of Isabelle/HOL, minimising the amount of new notation and exploiting type classes.

There is a type V of sets and a function elts :: V => V set mapping a set to its elements. Classes simply have type V set, and a predicate identifies the small classes: those that correspond to actual sets. Type classes connected with orders and lattices are used to minimise the amount of new notation for concepts such as the subset relation, union and intersection. Basic concepts — Cartesian products, disjoint sums, natural numbers, functions, etc. — are formalised.

More advanced set-theoretic concepts, such as transfinite induction, ordinals, cardinals and the transitive closure of a set, are also provided. The definition of addition and multiplication for general sets (not just ordinals) follows Kirby.

The theory provides two type classes with the aim of facilitating developments that combine V with other Isabelle/HOL types: embeddable, the class of types that can be injected into V (including V itself as well as V*V, etc.), and small, the class of types that correspond to some ZF set.

extra-history = Change history: [2020-01-28]: Generalisation of the "small" predicate and order types to arbitrary sets; ordinal exponentiation; introduction of the coercion ord_of_nat :: "nat => V"; numerous new lemmas. (revision 6081d5be8d08) [Interval_Arithmetic_Word32] title = Interval Arithmetic on 32-bit Words author = Brandon Bohrer topic = Computer science/Data structures date = 2019-11-27 notify = bjbohrer@gmail.com, bbohrer@cs.cmu.edu abstract = Interval_Arithmetic implements conservative interval arithmetic computations, then uses this interval arithmetic to implement a simple programming language where all terms have 32-bit signed word values, with explicit infinities for terms outside the representable bounds. Our target use case is interpreters for languages that must have a well-understood low-level behavior. We include a formalization of bounded-length strings which are used for the identifiers of our language. Bounded-length identifiers are useful in some applications, for example the Differential_Dynamic_Logic article, where a Euclidean space indexed by identifiers demands that identifiers are finitely many. [Generalized_Counting_Sort] title = An Efficient Generalization of Counting Sort for Large, possibly Infinite Key Ranges author = Pasquale Noce topic = Computer science/Algorithms, Computer science/Functional programming date = 2019-12-04 notify = pasquale.noce.lavoro@gmail.com abstract = Counting sort is a well-known algorithm that sorts objects of any kind mapped to integer keys, or else to keys in one-to-one correspondence with some subset of the integers (e.g. alphabet letters). However, it is suitable for direct use, viz. not just as a subroutine of another sorting algorithm (e.g. radix sort), only if the key range is not significantly larger than the number of the objects to be sorted. This paper describes a tail-recursive generalization of counting sort making use of a bounded number of counters, suitable for direct use in case of a large, or even infinite key range of any kind, subject to the only constraint of being a subset of an arbitrary linear order. After performing a pen-and-paper analysis of how such algorithm has to be designed to maximize its efficiency, this paper formalizes the resulting generalized counting sort (GCsort) algorithm and then formally proves its correctness properties, namely that (a) the counters' number is maximized never exceeding the fixed upper bound, (b) objects are conserved, (c) objects get sorted, and (d) the algorithm is stable. [Poincare_Bendixson] title = The Poincaré-Bendixson Theorem author = Fabian Immler , Yong Kiam Tan topic = Mathematics/Analysis date = 2019-12-18 notify = fimmler@cs.cmu.edu, yongkiat@cs.cmu.edu abstract = The Poincaré-Bendixson theorem is a classical result in the study of (continuous) dynamical systems. Colloquially, it restricts the possible behaviors of planar dynamical systems: such systems cannot be chaotic. In practice, it is a useful tool for proving the existence of (limiting) periodic behavior in planar systems. The theorem is an interesting and challenging benchmark for formalized mathematics because proofs in the literature rely on geometric sketches and only hint at symmetric cases. It also requires a substantial background of mathematical theories, e.g., the Jordan curve theorem, real analysis, ordinary differential equations, and limiting (long-term) behavior of dynamical systems. 
[Isabelle_C] title = Isabelle/C author = Frédéric Tuong , Burkhart Wolff topic = Computer science/Programming languages/Language definitions, Computer science/Semantics, Tools date = 2019-10-22 notify = tuong@users.gforge.inria.fr, wolff@lri.fr abstract = We present a framework for C code in C11 syntax deeply integrated into the Isabelle/PIDE development environment. Our framework provides an abstract interface for verification back-ends to be plugged-in independently. Thus, various techniques such as deductive program verification or white-box testing can be applied to the same source, which is part of an integrated PIDE document model. Semantic back-ends are free to choose the supported C fragment and its semantics. In particular, they can differ on the chosen memory model or the specification mechanism for framing conditions. Our framework supports semantic annotations of C sources in the form of comments. Annotations serve to locally control back-end settings, and can express the term focus to which an annotation refers. Both the logical and the syntactic context are available when semantic annotations are evaluated. As a consequence, a formula in an annotation can refer both to HOL or C variables. Our approach demonstrates the degree of maturity and expressive power the Isabelle/PIDE sub-system has achieved in recent years. Our integration technique employs Lex and Yacc style grammars to ensure efficient deterministic parsing. This is the core-module of Isabelle/C; the AFP package for Clean and Clean_wrapper as well as AutoCorres and AutoCorres_wrapper (available via git) are applications of this front-end. [Zeta_3_Irrational] title = The Irrationality of ζ(3) author = Manuel Eberl topic = Mathematics/Number theory date = 2019-12-27 notify = manuel.eberl@tum.de abstract =

This article provides a formalisation of Beukers's straightforward analytic proof that ζ(3) is irrational. This was first proven by Apéry (which is why this result is also often called ‘Apéry's Theorem’) using a more algebraic approach. This formalisation follows Filaseta's presentation of Beukers's proof.

[Hybrid_Logic] title = Formalizing a Seligman-Style Tableau System for Hybrid Logic author = Asta Halkjær From topic = Logic/General logic/Modal logic date = 2019-12-20 notify = ahfrom@dtu.dk abstract = This work is a formalization of soundness and completeness proofs for a Seligman-style tableau system for hybrid logic. The completeness result is obtained via a synthetic approach using maximally consistent sets of tableau blocks. The formalization differs from the cited work in a few ways. First, to avoid the need to backtrack in the construction of a tableau, the formalized system has no unnamed initial segment, and therefore no Name rule. Second, I show that the full Bridge rule is admissible in the system. Third, I start from rules restricted to only extend the branch with new formulas, including only witnessing diamonds that are not already witnessed, and show that the unrestricted rules are admissible. Similarly, I start from simpler versions of the @-rules and show the general ones admissible. Finally, the GoTo rule is restricted using a notion of coins such that each application consumes a coin and coins are earned through applications of the remaining rules. I show that if a branch can be closed then it can be closed starting from a single coin. These restrictions are imposed to rule out some means of nontermination. [Bicategory] title = Bicategories author = Eugene W. Stark topic = Mathematics/Category theory date = 2020-01-06 notify = stark@cs.stonybrook.edu abstract = Taking as a starting point the author's previous work on developing aspects of category theory in Isabelle/HOL, this article gives a compatible formalization of the notion of "bicategory" and develops a framework within which formal proofs of facts about bicategories can be given. The framework includes a number of basic results, including the Coherence Theorem, the Strictness Theorem, pseudofunctors and biequivalence, and facts about internal equivalences and adjunctions in a bicategory. As a driving application and demonstration of the utility of the framework, it is used to give a formal proof of a theorem, due to Carboni, Kasangian, and Street, that characterizes up to biequivalence the bicategories of spans in a category with pullbacks. The formalization effort necessitated the filling-in of many details that were not evident from the brief presentation in the original paper, as well as identifying a few minor corrections along the way. extra-history = Change history: [2020-02-15]: Move ConcreteCategory.thy from Bicategory to Category3 and use it systematically. Make other minor improvements throughout. (revision a51840d36867)
[Subset_Boolean_Algebras] title = A Hierarchy of Algebras for Boolean Subsets author = Walter Guttmann , Bernhard Möller topic = Mathematics/Algebra date = 2020-01-31 notify = walter.guttmann@canterbury.ac.nz abstract = We present a collection of axiom systems for the construction of Boolean subalgebras of larger overall algebras. The subalgebras are defined as the range of a complement-like operation on a semilattice. This technique has been used, for example, with the antidomain operation, dynamic negation and Stone algebras. We present a common ground for these constructions based on a new equational axiomatisation of Boolean algebras. [Goodstein_Lambda] title = Implementing the Goodstein Function in λ-Calculus author = Bertram Felgenhauer topic = Logic/Rewriting date = 2020-02-21 notify = int-e@gmx.de abstract = In this formalization, we develop an implementation of the Goodstein function G in plain λ-calculus, linked to a concise, self-contained specification. The implementation works on a Church-encoded representation of countable ordinals. The initial conversion to hereditary base 2 is not covered, but the material is sufficient to compute the particular value G(16), and easily extends to other fixed arguments. [VeriComp] title = A Generic Framework for Verified Compilers author = Martin Desharnais topic = Computer science/Programming languages/Compiling date = 2020-02-10 notify = martin.desharnais@unibw.de abstract = This is a generic framework for formalizing compiler transformations. It leverages Isabelle/HOL’s locales to abstract over concrete languages and transformations. It states common definitions for language semantics, program behaviours, forward and backward simulations, and compilers. We provide generic operations, such as simulation and compiler composition, and prove general (partial) correctness theorems, resulting in reusable proof components. [Hello_World] title = Hello World author = Cornelius Diekmann , Lars Hupel topic = Computer science/Functional programming date = 2020-03-07 notify = diekmann@net.in.tum.de abstract = In this article, we present a formalization of the well-known "Hello, World!" code, including a formal framework for reasoning about IO. Our model is inspired by the handling of IO in Haskell. We start by formalizing the 🌍 and embrace the IO monad afterwards. Then we present a sample main :: IO (), followed by its proof of correctness. [WOOT_Strong_Eventual_Consistency] title = Strong Eventual Consistency of the Collaborative Editing Framework WOOT author = Emin Karayel , Edgar Gonzàlez topic = Computer science/Algorithms/Distributed date = 2020-03-25 notify = eminkarayel@google.com, edgargip@google.com, me@eminkarayel.de abstract = Commutative Replicated Data Types (CRDTs) are a promising new class of data structures for large-scale shared mutable content in applications that only require eventual consistency. The WithOut Operational Transforms (WOOT) framework is a CRDT for collaborative text editing introduced by Oster et al. (CSCW 2006) for which the eventual consistency property was verified only for a bounded model to date. We contribute a formal proof for WOOTs strong eventual consistency. [Furstenberg_Topology] title = Furstenberg's topology and his proof of the infinitude of primes author = Manuel Eberl topic = Mathematics/Number theory date = 2020-03-22 notify = manuel.eberl@tum.de abstract =

This article gives a formal version of Furstenberg's topological proof of the infinitude of primes. He defines a topology on the integers based on arithmetic progressions (or, equivalently, residue classes). Using some fairly obvious properties of this topology, one then easily obtains the infinitude of primes.
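For orientation, the key step of that proof can be condensed into a single identity (sketched here only as an illustration): \[ \mathbb{Z} \setminus \{-1, +1\} \;=\; \bigcup_{p\ \text{prime}} p\,\mathbb{Z}. \] Each set p ℤ is closed, since its complement is a finite union of (open) residue classes modulo p. If there were only finitely many primes, the right-hand side would therefore be closed, making {-1, +1} open, which is impossible because every non-empty open set in this topology contains an arithmetic progression and is hence infinite.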

Beyond its use in that proof, this topology is also fairly ‘nice’ in general: it is second countable, metrizable, and perfect. All of these (well-known) facts are formally proven, including an explicit metric for the topology, due to Zulfeqarr.

[Saturation_Framework] title = A Comprehensive Framework for Saturation Theorem Proving author = Sophie Tourret topic = Logic/General logic/Mechanization of proofs date = 2020-04-09 notify = stourret@mpi-inf.mpg.de abstract = This Isabelle/HOL formalization is the companion of the technical report “A comprehensive framework for saturation theorem proving”, itself companion of the eponym IJCAR 2020 paper, written by Uwe Waldmann, Sophie Tourret, Simon Robillard and Jasmin Blanchette. It verifies a framework for formal refutational completeness proofs of abstract provers that implement saturation calculi, such as ordered resolution or superposition, and allows to model entire prover architectures in such a way that the static refutational completeness of a calculus immediately implies the dynamic refutational completeness of a prover implementing the calculus using a variant of the given clause loop. The technical report “A comprehensive framework for saturation theorem proving” is available on the Matryoshka website. The names of the Isabelle lemmas and theorems corresponding to the results in the report are indicated in the margin of the report. [MFODL_Monitor_Optimized] title = Formalization of an Optimized Monitoring Algorithm for Metric First-Order Dynamic Logic with Aggregations author = Thibault Dardinier<>, Lukas Heimes<>, Martin Raszyk , Joshua Schneider , Dmitriy Traytel topic = Computer science/Algorithms, Logic/General logic/Modal logic, Computer science/Automata and formal languages date = 2020-04-09 notify = martin.raszyk@inf.ethz.ch, joshua.schneider@inf.ethz.ch, traytel@inf.ethz.ch abstract = A monitor is a runtime verification tool that solves the following problem: Given a stream of time-stamped events and a policy formulated in a specification language, decide whether the policy is satisfied at every point in the stream. We verify the correctness of an executable monitor for specifications given as formulas in metric first-order dynamic logic (MFODL), which combines the features of metric first-order temporal logic (MFOTL) and metric dynamic logic. Thus, MFODL supports real-time constraints, first-order parameters, and regular expressions. Additionally, the monitor supports aggregation operations such as count and sum. This formalization, which is described in a forthcoming paper at IJCAR 2020, significantly extends previous work on a verified monitor for MFOTL. Apart from the addition of regular expressions and aggregations, we implemented multi-way joins and a specialized sliding window algorithm to further optimize the monitor. [Sliding_Window_Algorithm] title = Formalization of an Algorithm for Greedily Computing Associative Aggregations on Sliding Windows author = Lukas Heimes<>, Dmitriy Traytel , Joshua Schneider<> topic = Computer science/Algorithms date = 2020-04-10 notify = heimesl@student.ethz.ch, traytel@inf.ethz.ch, joshua.schneider@inf.ethz.ch abstract = Basin et al.'s sliding window algorithm (SWA) is an algorithm for combining the elements of subsequences of a sequence with an associative operator. It is greedy and minimizes the number of operator applications. We formalize the algorithm and verify its functional correctness. We extend the algorithm with additional operations and provide an alternative interface to the slide operation that does not require the entire input sequence. 
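As a small worked example of the saving involved (an illustration added for concreteness, not taken from the entry): for a sequence a, b, c, d, an associative operator ⊕, and windows of length three, computing the two window values a ⊕ b ⊕ c and b ⊕ c ⊕ d independently takes four applications of ⊕, whereas computing the shared part b ⊕ c once and reusing it takes only three; this is the kind of recomputation that the SWA avoids by minimising the number of operator applications.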
[Lucas_Theorem] title = Lucas's Theorem author = Chelsea Edmonds topic = Mathematics/Number theory date = 2020-04-07 notify = cle47@cam.ac.uk abstract = This work presents a formalisation of a generating function proof for Lucas's theorem. We first outline extensions to the existing Formal Power Series (FPS) library, including an equivalence relation for coefficients modulo n, an alternate binomial theorem statement, and a formalised proof of the Freshman's dream (mod p) lemma. The second part of the work presents the formal proof of Lucas's Theorem. Working backwards, the formalisation first proves a well known corollary of the theorem which is easier to formalise, and then applies induction to prove the original theorem statement. The proof of the corollary aims to provide a good example of a formalised generating function equivalence proof using the FPS library. The final theorem statement is intended to be integrated into the formalised proof of Hilbert's 10th Problem. [ADS_Functor] title = Authenticated Data Structures As Functors author = Andreas Lochbihler , Ognjen Marić topic = Computer science/Data structures date = 2020-04-16 notify = andreas.lochbihler@digitalasset.com, mail@andreas-lochbihler.de abstract = Authenticated data structures allow several systems to convince each other that they are referring to the same data structure, even if each of them knows only a part of the data structure. Using inclusion proofs, knowledgeable systems can selectively share their knowledge with other systems and the latter can verify the authenticity of what is being shared. In this article, we show how to modularly define authenticated data structures, their inclusion proofs, and operations thereon as datatypes in Isabelle/HOL, using a shallow embedding. Modularity allows us to construct complicated trees from reusable building blocks, which we call Merkle functors. Merkle functors include sums, products, and function spaces and are closed under composition and least fixpoints. As a practical application, we model the hierarchical transactions of Canton, a practical interoperability protocol for distributed ledgers, as authenticated data structures. This is a first step towards formalizing the Canton protocol and verifying its integrity and security guarantees. +[Power_Sum_Polynomials] +title = Power Sum Polynomials +author = Manuel Eberl +topic = Mathematics/Algebra +date = 2020-04-24 +notify = eberlm@in.tum.de +abstract = +

This article provides a formalisation of the symmetric + multivariate polynomials known as power sum + polynomials. These are of the form + p_n(X_1, …, X_k) = X_1^n + … + X_k^n. + A formal proof of the Girard–Newton Theorem is also given. This + theorem relates the power sum polynomials to the elementary symmetric + polynomials s_k in the form + of a recurrence relation + (-1)^k k s_k = ∑_{i∈[0,k)} (-1)^i s_i p_{k-i}.
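For small k the recurrence specialises, up to the sign conventions used, to the familiar Newton identities s_1 = p_1 and 2 s_2 = s_1 p_1 - p_2 (equivalently p_2 = p_1^2 - 2 s_2); these small cases are spelled out here only as an illustration.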

+

As an application, this is then used to solve a generalised + form of a puzzle given as an exercise in Dummit and Foote's + Abstract Algebra: For k complex unknowns x_1, …, x_k, + define p_j := x_1^j + … + x_k^j. + Then for each vector a ∈ ℂ^k, show that + there is exactly one solution to the system p_1 = a_1, …, p_k = a_k + up to permutation of the x_i + and determine the value of p_i for i > k.
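The last part of the puzzle is where the Girard–Newton relation enters: since s_j = 0 for j > k, the higher power sums are determined recursively from the first k, and in the classical sign convention this reads p_i = s_1 p_{i-1} - s_2 p_{i-2} + … + (-1)^{k+1} s_k p_{i-k} for i > k (a standard consequence, noted here only as a pointer, not quoted from the entry).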

+ +[Gaussian_Integers] +title = Gaussian Integers +author = Manuel Eberl +topic = Mathematics/Number theory +date = 2020-04-24 +notify = eberlm@in.tum.de +abstract = +

The Gaussian integers are the subring ℤ[i] of the + complex numbers, i.e. the ring of all complex numbers with integral + real and imaginary part. This article provides a definition of this + ring as well as proofs of various basic properties, such as the fact that they + form a Euclidean ring, together with a full classification of their primes. An + executable (albeit not very efficient) factorisation algorithm is also + provided.

Lastly, this Gaussian integer + formalisation is used in two short applications:

    +
  1. The characterisation of all positive integers that can be + written as sums of two squares
  2. Euclid's + formula for primitive Pythagorean triples
+

While elementary proofs for both of these are already + available in the AFP, the theory of Gaussian integers provides more + concise proofs and a more high-level view.
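As a concrete illustration of that link (an example added here, not part of the entry's statement): in ℤ[i] one has 5 = (2 + i)(2 - i), and an ordinary prime p remains prime in ℤ[i] exactly when p ≡ 3 (mod 4); the remaining primes, namely 2 and those with p ≡ 1 (mod 4), factor non-trivially as p = (a + b i)(a - b i) = a^2 + b^2 and are precisely the primes expressible as a sum of two squares.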

+ +[Forcing] +title = Formalization of Forcing in Isabelle/ZF +author = Emmanuel Gunther , Miguel Pagano , Pedro Sánchez Terraf +topic = Logic/Set theory +date = 2020-05-06 +notify = gunther@famaf.unc.edu.ar, pagano@famaf.unc.edu.ar, sterraf@famaf.unc.edu.ar +abstract = + We formalize the theory of forcing in the set theory framework of + Isabelle/ZF. Under the assumption of the existence of a countable + transitive model of ZFC, we construct a proper generic extension and + show that the latter also satisfies ZFC. + +[Recursion-Addition] +title = Recursion Theorem in ZF +author = Georgy Dunaev +topic = Logic/Set theory +date = 2020-05-11 +notify = georgedunaev@gmail.com +abstract = + This document contains a proof of the recursion theorem. This is a + mechanization of the proof of the recursion theorem from the text Introduction to + Set Theory, by Karel Hrbacek and Thomas Jech. This + implementation may be used as the basis for a model of Peano arithmetic in + ZF. While recursion and the natural numbers are already available in Isabelle/ZF, this clean development + is much easier to follow. + +[LTL_Normal_Form] +title = An Efficient Normalisation Procedure for Linear Temporal Logic: Isabelle/HOL Formalisation +author = Salomon Sickert +topic = Computer science/Automata and formal languages, Logic/General logic/Temporal logic +date = 2020-05-08 +notify = s.sickert@tum.de +abstract = + In the mid 80s, Lichtenstein, Pnueli, and Zuck proved a classical + theorem stating that every formula of Past LTL (the extension of LTL + with past operators) is equivalent to a formula of the form + $\bigwedge_{i=1}^n \mathbf{G}\mathbf{F} \varphi_i \vee + \mathbf{F}\mathbf{G} \psi_i$, where $\varphi_i$ and $\psi_i$ contain + only past operators. Some years later, Chang, Manna, and Pnueli built + on this result to derive a similar normal form for LTL. Both + normalisation procedures have a non-elementary worst-case blow-up, and + follow an involved path from formulas to counter-free automata to + star-free regular expressions and back to formulas. We improve on both + points. We present an executable formalisation of a direct and purely + syntactic normalisation procedure for LTL yielding a normal form, + comparable to the one by Chang, Manna, and Pnueli, that has only a + single exponential blow-up. + +[Matrices_for_ODEs] +title = Matrices for ODEs +author = Jonathan Julian Huerta y Munive +topic = Mathematics/Analysis, Mathematics/Algebra +date = 2020-04-19 +notify = jonjulian23@gmail.com +abstract = + Our theories formalise various matrix properties that serve to + establish existence, uniqueness and characterisation of the solution + to affine systems of ordinary differential equations (ODEs). In + particular, we formalise the operator and maximum norm of matrices. + Then we use them to prove that square matrices form a Banach space, + and in this setting, we show an instance of Picard-Lindelöf’s + theorem for affine systems of ODEs. Finally, we use this formalisation + to verify three simple hybrid programs. 
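For orientation (an illustrative sketch, assuming the special case of a constant matrix A and constant vector b, not quoted from the entry): such affine systems have the form \[ \dot{x}(t) = A\,x(t) + b, \qquad x(t_0) = x_0, \] and the unique solution guaranteed by Picard–Lindelöf can be written with the matrix exponential as \[ x(t) = e^{A(t-t_0)}\,x_0 + \int_{t_0}^{t} e^{A(t-s)}\,b\,\mathrm{d}s. \]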
+ +[Irrational_Series_Erdos_Straus] +title = Irrationality Criteria for Series by Erdős and Straus +author = Angeliki Koutsoukou-Argyraki , Wenda Li +topic = Mathematics/Number theory, Mathematics/Analysis +date = 2020-05-12 +notify = ak2110@cam.ac.uk, wl302@cam.ac.uk, liwenda1990@hotmail.com +abstract = + We formalise certain irrationality criteria for infinite series of the form: + \[\sum_{n=1}^\infty \frac{b_n}{\prod_{i=1}^n a_i} \] + where $\{b_n\}$ is a sequence of integers and $\{a_n\}$ a sequence of positive integers + with $a_n >1$ for all large n. The results are due to P. Erdős and E. G. Straus + [1]. + In particular, we formalise Theorem 2.1, Corollary 2.10 and Theorem 3.1. + The latter is an application of Theorem 2.1 involving the prime numbers. + [Knuth_Bendix_Order] title = A Formalization of Knuth–Bendix Orders author = Christian Sternagel , René Thiemann topic = Logic/Rewriting date = 2020-05-13 notify = c.sternagel@gmail.com, rene.thiemann@uibk.ac.at -abstract = +abstract = We define a generalized version of Knuth–Bendix orders, including subterm coefficient functions. For these orders we formalize several properties such as strong normalization, the subterm property, closure properties under substitutions and contexts, as well as ground totality. diff --git a/thys/Banach_Steinhaus/Banach_Steinhaus.thy b/thys/Banach_Steinhaus/Banach_Steinhaus.thy new file mode 100644 --- /dev/null +++ b/thys/Banach_Steinhaus/Banach_Steinhaus.thy @@ -0,0 +1,485 @@ +(* + File: Banach_Steinhaus.thy + Author: Dominique Unruh, University of Tartu + Author: Jose Manuel Rodriguez Caballero, University of Tartu +*) +section \Banach-Steinhaus theorem\ + +theory Banach_Steinhaus + imports Banach_Steinhaus_Missing +begin + +text \ + We formalize Banach-Steinhaus theorem as theorem @{text banach_steinhaus}. This theorem was + originally proved in Banach-Steinhaus's paper~\cite{banach1927principe}. For the proof, we follow + Sokal's approach~\cite{sokal2011really}. Furthermore, we prove as a corollary a result about + pointwise convergent sequences of bounded operators whose domain is a Banach space. +\ + +subsection \Preliminaries for Sokal's proof of Banach-Steinhaus theorem\ + +lemma linear_plus_norm: + includes notation_norm + assumes \linear f\ + shows \\f \\ \ max \f (x + \)\ \f (x - \)\\ + text \ + Explanation: For arbitrary \<^term>\x\ and a linear operator \<^term>\f\, + \<^term>\norm (f \)\ is upper bounded by the maximum of the norms + of the shifts of \<^term>\f\ (i.e., \<^term>\f (x + \)\ and \<^term>\f (x - \)\). +\ +proof- + have \norm (f \) = norm ( (inverse (of_nat 2)) *\<^sub>R (f (x + \) - f (x - \)) )\ + by (smt add_diff_cancel_left' assms diff_add_cancel diff_diff_add linear_diff midpoint_def + midpoint_plus_self of_nat_1 of_nat_add one_add_one scaleR_half_double) + also have \\ = inverse (of_nat 2) * norm (f (x + \) - f (x - \))\ + using Real_Vector_Spaces.real_normed_vector_class.norm_scaleR by simp + also have \\ \ inverse (of_nat 2) * (norm (f (x + \)) + norm (f (x - \)))\ + by (simp add: norm_triangle_ineq4) + also have \\ \ max (norm (f (x + \))) (norm (f (x - \)))\ + by auto + finally show ?thesis by blast +qed + +lemma onorm_Sup_on_ball: + includes notation_norm + assumes \r > 0\ + shows "\f\ \ Sup ( (\x. \f *\<^sub>v x\) ` (ball x r) ) / r" + text \ + Explanation: Let \<^term>\f\ be a bounded operator and let \<^term>\x\ be a point. 
For any \<^term>\r > 0\, + the operator norm of \<^term>\f\ is bounded above by the supremum of $f$ applied to the open ball of + radius \<^term>\r\ around \<^term>\x\, divided by \<^term>\r\. +\ +proof- + have bdd_above_3: \bdd_above ((\x. \f *\<^sub>v x\) ` (ball 0 r))\ + proof - + obtain M where \\ \. \f *\<^sub>v \\ \ M * norm \\ and \M \ 0\ + using norm_blinfun norm_ge_zero by blast + hence \\ \. \ \ ball 0 r \ \f *\<^sub>v \\ \ M * r\ + using \r > 0\ by (smt mem_ball_0 mult_left_mono) + thus ?thesis by (meson bdd_aboveI2) + qed + have bdd_above_2: \bdd_above ((\ \. \f *\<^sub>v (x + \)\) ` (ball 0 r))\ + proof- + have \bdd_above ((\ \. \f *\<^sub>v x\) ` (ball 0 r))\ + by auto + moreover have \bdd_above ((\ \. \f *\<^sub>v \\) ` (ball 0 r))\ + using bdd_above_3 by blast + ultimately have \bdd_above ((\ \. \f *\<^sub>v x\ + \f *\<^sub>v \\) ` (ball 0 r))\ + by (rule bdd_above_plus) + then obtain M where \\ \. \ \ ball 0 r \ \f *\<^sub>v x\ + \f *\<^sub>v \\ \ M\ + unfolding bdd_above_def by (meson image_eqI) + moreover have \\f *\<^sub>v (x + \)\ \ \f *\<^sub>v x\ + \f *\<^sub>v \\\ for \ + by (simp add: blinfun.add_right norm_triangle_ineq) + ultimately have \\ \. \ \ ball 0 r \ \f *\<^sub>v (x + \)\ \ M\ + by (simp add: blinfun.add_right norm_triangle_le) + thus ?thesis by (meson bdd_aboveI2) + qed + have bdd_above_4: \bdd_above ((\ \. \f *\<^sub>v (x - \)\) ` (ball 0 r))\ + proof- + obtain K where K_def: \\ \. \ \ ball 0 r \ \f *\<^sub>v (x + \)\ \ K\ + using \bdd_above ((\ \. norm (f (x + \))) ` (ball 0 r))\ unfolding bdd_above_def + by (meson image_eqI) + have \\ \ ball (0::'a) r \ -\ \ ball 0 r\ for \ + by auto + thus ?thesis by (metis K_def ab_group_add_class.ab_diff_conv_add_uminus bdd_aboveI2) + qed + have bdd_above_1: \bdd_above ((\ \. max \f *\<^sub>v (x + \)\ \f *\<^sub>v (x - \)\) ` (ball 0 r))\ + proof- + have \bdd_above ((\ \. \f *\<^sub>v (x + \)\) ` (ball 0 r))\ + using bdd_above_2 by blast + moreover have \bdd_above ((\ \. \f *\<^sub>v (x - \)\) ` (ball 0 r))\ + using bdd_above_4 by blast + ultimately show ?thesis + unfolding max_def apply auto apply (meson bdd_above_Int1 bdd_above_mono image_Int_subset) + by (meson bdd_above_Int1 bdd_above_mono image_Int_subset) + qed + have bdd_above_6: \bdd_above ((\t. \f *\<^sub>v t\) ` ball x r)\ + proof- + have \bounded (ball x r)\ + by simp + hence \bounded ((\t. \f *\<^sub>v t\) ` ball x r)\ + by (metis (no_types) add.left_neutral bdd_above_2 bdd_above_norm bounded_norm_comp + image_add_ball image_image) + thus ?thesis + by (simp add: bounded_imp_bdd_above) + qed + have norm_1: \(\\. \f *\<^sub>v (x + \)\) ` ball 0 r = (\t. \f *\<^sub>v t\) ` ball x r\ + by (metis add.right_neutral ball_translation image_image) + have bdd_above_5: \bdd_above ((\\. norm (f (x + \))) ` ball 0 r)\ + by (simp add: bdd_above_2) + have norm_2: \\\\ < r \ \f *\<^sub>v (x - \)\ \ (\\. \f *\<^sub>v (x + \)\) ` ball 0 r\ for \ + proof- + assume \\\\ < r\ + hence \\ \ ball (0::'a) r\ + by auto + hence \-\ \ ball (0::'a) r\ + by auto + thus ?thesis + by (metis (no_types, lifting) ab_group_add_class.ab_diff_conv_add_uminus image_iff) + qed + have norm_2': \\\\ < r \ \f *\<^sub>v (x + \)\ \ (\\. \f *\<^sub>v (x - \)\) ` ball 0 r\ for \ + proof- + assume \norm \ < r\ + hence \\ \ ball (0::'a) r\ + by auto + hence \-\ \ ball (0::'a) r\ + by auto + thus ?thesis + by (metis (no_types, lifting) diff_minus_eq_add image_iff) + qed + have bdd_above_6: \bdd_above ((\\. \f *\<^sub>v (x - \)\) ` ball 0 r)\ + by (simp add: bdd_above_4) + have Sup_2: \(SUP \\ball 0 r. 
max \f *\<^sub>v (x + \)\ \f *\<^sub>v (x - \)\) = + max (SUP \\ball 0 r. \f *\<^sub>v (x + \)\) (SUP \\ball 0 r. \f *\<^sub>v (x - \)\)\ + proof- + have \ball (0::'a) r \ {}\ + using \r > 0\ by auto + moreover have \bdd_above ((\\. \f *\<^sub>v (x + \)\) ` ball 0 r)\ + using bdd_above_5 by blast + moreover have \bdd_above ((\\. \f *\<^sub>v (x - \)\) ` ball 0 r)\ + using bdd_above_6 by blast + ultimately show ?thesis + using max_Sup + by (metis (mono_tags, lifting) Banach_Steinhaus_Missing.pointwise_max_def image_cong) + qed + have Sup_3': \\\\ < r \ \f *\<^sub>v (x + \)\ \ (\\. \f *\<^sub>v (x - \)\) ` ball 0 r\ for \::'a + by (simp add: norm_2') + have Sup_3'': \\\\ < r \ \f *\<^sub>v (x - \)\ \ (\\. \f *\<^sub>v (x + \)\) ` ball 0 r\ for \::'a + by (simp add: norm_2) + have Sup_3: \max (SUP \\ball 0 r. \f *\<^sub>v (x + \)\) (SUP \\ball 0 r. \f *\<^sub>v (x - \)\) = + (SUP \\ball 0 r. \f *\<^sub>v (x + \)\)\ + proof- + have \(\\. \f *\<^sub>v (x + \)\) ` (ball 0 r) = (\\. \f *\<^sub>v (x - \)\) ` (ball 0 r)\ + apply auto using Sup_3' apply auto using Sup_3'' by blast + hence \Sup ((\\. \f *\<^sub>v (x + \)\) ` (ball 0 r))=Sup ((\\. \f *\<^sub>v (x - \)\) ` (ball 0 r))\ + by simp + thus ?thesis by simp + qed + have Sup_1: \Sup ((\t. \f *\<^sub>v t\) ` (ball 0 r)) \ Sup ( (\\. \f *\<^sub>v \\) ` (ball x r) )\ + proof- + have \(\t. \f *\<^sub>v t\) \ \ max \f *\<^sub>v (x + \)\ \f *\<^sub>v (x - \)\\ for \ + apply(rule linear_plus_norm) apply (rule bounded_linear.linear) + by (simp add: blinfun.bounded_linear_right) + moreover have \bdd_above ((\ \. max \f *\<^sub>v (x + \)\ \f *\<^sub>v (x - \)\) ` (ball 0 r))\ + using bdd_above_1 by blast + moreover have \ball (0::'a) r \ {}\ + using \r > 0\ by auto + ultimately have \Sup ((\t. \f *\<^sub>v t\) ` (ball 0 r)) \ + Sup ((\\. max \f *\<^sub>v (x + \)\ \f *\<^sub>v (x - \)\) ` (ball 0 r))\ + using cSUP_mono by smt + also have \\ = max (Sup ((\\. \f *\<^sub>v (x + \)\) ` (ball 0 r))) + (Sup ((\\. \f *\<^sub>v (x - \)\) ` (ball 0 r)))\ + using Sup_2 by blast + also have \\ = Sup ((\\. \f *\<^sub>v (x + \)\) ` (ball 0 r))\ + using Sup_3 by blast + also have \\ = Sup ((\\. \f *\<^sub>v \\) ` (ball x r))\ + by (metis add.right_neutral ball_translation image_image) + finally show ?thesis by blast + qed + have \\f\ = (SUP x\ball 0 r. \f *\<^sub>v x\) / r\ + using \0 < r\ onorm_r by blast + moreover have \Sup ((\t. \f *\<^sub>v t\) ` (ball 0 r)) / r \ Sup ((\\. \f *\<^sub>v \\) ` (ball x r)) / r\ + using Sup_1 \0 < r\ divide_right_mono by fastforce + ultimately have \\f\ \ Sup ((\t. \f *\<^sub>v t\) ` ball x r) / r\ + by simp + thus ?thesis by simp +qed + +lemma onorm_Sup_on_ball': + includes notation_norm + assumes \r > 0\ and \\ < 1\ + shows \\\\ball x r. \ * r * \f\ \ \f *\<^sub>v \\\ + text \ + In the proof of Banach-Steinhaus theorem, we will use this variation of the + lemma @{text onorm_Sup_on_ball}. + + Explanation: Let \<^term>\f\ be a bounded operator, let \<^term>\x\ be a point and let \<^term>\r\ be a + positive real number. For any real number \<^term>\\ < 1\, there is a point \<^term>\\\ in the open ball + of radius \<^term>\r\ around \<^term>\x\ such that \<^term>\\ * r * \f\ \ \f *\<^sub>v \\\. +\ +proof(cases \f = 0\) + case True + thus ?thesis by (metis assms(1) centre_in_ball mult_zero_right norm_zero order_refl + zero_blinfun.rep_eq) +next + case False + have bdd_above_1: \bdd_above ((\t. 
\(*\<^sub>v) f t\) ` ball x r)\ for f::\'a \\<^sub>L 'b\ + using assms(1) bounded_linear_image by (simp add: bounded_linear_image + blinfun.bounded_linear_right bounded_imp_bdd_above bounded_norm_comp) + have \norm f > 0\ + using \f \ 0\ by auto + have \norm f \ Sup ( (\\. \(*\<^sub>v) f \\) ` (ball x r) ) / r\ + using \r > 0\ by (simp add: onorm_Sup_on_ball) + hence \r * norm f \ Sup ( (\\. \(*\<^sub>v) f \\) ` (ball x r) )\ + using \0 < r\ by (smt divide_strict_right_mono nonzero_mult_div_cancel_left) + moreover have \\ * r * norm f < r * norm f\ + using \\ < 1\ using \0 < norm f\ \0 < r\ by auto + ultimately have \\ * r * norm f < Sup ( (norm \ ((*\<^sub>v) f)) ` (ball x r) )\ + by simp + moreover have \(norm \ ( (*\<^sub>v) f)) ` (ball x r) \ {}\ + using \0 < r\ by auto + moreover have \bdd_above ((norm \ ( (*\<^sub>v) f)) ` (ball x r))\ + using bdd_above_1 apply transfer by simp + ultimately have \\t \ (norm \ ( (*\<^sub>v) f)) ` (ball x r). \ * r * norm f < t\ + by (simp add: less_cSup_iff) + thus ?thesis by (smt comp_def image_iff) +qed + +subsection \Banach-Steinhaus theorem\ + +theorem banach_steinhaus: + fixes f::\'c \ ('a::banach \\<^sub>L 'b::real_normed_vector)\ + assumes \\x. bounded (range (\n. (f n) *\<^sub>v x))\ + shows \bounded (range f)\ + text\ + This is Banach-Steinhaus Theorem. + + Explanation: If a family of bounded operators on a Banach space + is pointwise bounded, then it is uniformly bounded. +\ +proof(rule classical) + assume \\(bounded (range f))\ + have sum_1: \\K. \n. sum (\k. inverse (real_of_nat 3^k)) {0..n} \ K\ + proof- + have \summable (\n. (inverse (real_of_nat 3))^n)\ + using Series.summable_geometric_iff [where c = "inverse (real_of_nat 3)"] by auto + moreover have \(inverse (real_of_nat 3))^n = inverse (real_of_nat 3^n)\ for n::nat + using power_inverse by blast + ultimately have \summable (\n. inverse (real_of_nat 3^n))\ + by auto + hence \bounded (range (\n. sum (\ k. inverse (real 3 ^ k)) {0.. + using summable_imp_sums_bounded[where f = "(\n. inverse (real_of_nat 3^n))"] + lessThan_atLeast0 by auto + hence \\M. \h\(range (\n. sum (\ k. inverse (real 3 ^ k)) {0.. M\ + using bounded_iff by blast + then obtain M where \h\range (\n. sum (\ k. inverse (real 3 ^ k)) {0.. norm h \ M\ + for h + by blast + have sum_2: \sum (\k. inverse (real_of_nat 3^k)) {0..n} \ M\ for n + proof- + have \norm (sum (\ k. inverse (real 3 ^ k)) {0..< Suc n}) \ M\ + using \\h. h\(range (\n. sum (\ k. inverse (real 3 ^ k)) {0.. norm h \ M\ + by blast + hence \norm (sum (\ k. inverse (real 3 ^ k)) {0..n}) \ M\ + by (simp add: atLeastLessThanSuc_atLeastAtMost) + hence \(sum (\ k. inverse (real 3 ^ k)) {0..n}) \ M\ + by auto + thus ?thesis by blast + qed + have \sum (\k. inverse (real_of_nat 3^k)) {0..n} \ M\ for n + using sum_2 by blast + thus ?thesis by blast + qed + have \of_rat 2/3 < (1::real)\ + by auto + hence \\g::'a \\<^sub>L 'b. \x. \r. \\. g \ 0 \ r > 0 + \ (\\ball x r \ (of_rat 2/3) * r * norm g \ norm ((*\<^sub>v) g \))\ + using onorm_Sup_on_ball' by blast + hence \\\. \g::'a \\<^sub>L 'b. \x. \r. g \ 0 \ r > 0 + \ ((\ g x r)\ball x r \ (of_rat 2/3) * r * norm g \ norm ((*\<^sub>v) g (\ g x r)))\ + by metis + then obtain \ where f1: \\g \ 0; r > 0\ \ + \ g x r \ ball x r \ (of_rat 2/3) * r * norm g \ norm ((*\<^sub>v) g (\ g x r))\ + for g::\'a \\<^sub>L 'b\ and x and r + by blast + have \\n. \k. norm (f k) \ 4^n\ + using \\(bounded (range f))\ by (metis (mono_tags, hide_lams) boundedI image_iff linear) + hence \\k. \n. norm (f (k n)) \ 4^n\ + by metis + hence \\k. 
\n. norm ((f \ k) n) \ 4^n\ + by simp + then obtain k where \norm ((f \ k) n) \ 4^n\ for n + by blast + define T where \T = f \ k\ + have \T n \ range f\ for n + unfolding T_def by simp + have \norm (T n) \ of_nat (4^n)\ for n + unfolding T_def using \\ n. norm ((f \ k) n) \ 4^n\ by auto + hence \T n \ 0\ for n + by (smt T_def \\n. 4 ^ n \ norm ((f \ k) n)\ norm_zero power_not_zero zero_le_power) + have \inverse (of_nat 3^n) > (0::real)\ for n + by auto + define y::\nat \ 'a\ where \y = rec_nat 0 (\n x. \ (T n) x (inverse (of_nat 3^n)))\ + have \y (Suc n) \ ball (y n) (inverse (of_nat 3^n))\ for n + using f1 \\ n. T n \ 0\ \\ n. inverse (of_nat 3^n) > 0\ unfolding y_def by auto + hence \norm (y (Suc n) - y n) \ inverse (of_nat 3^n)\ for n + unfolding ball_def apply auto using dist_norm by (smt norm_minus_commute) + moreover have \\K. \n. sum (\k. inverse (real_of_nat 3^k)) {0..n} \ K\ + using sum_1 by blast + moreover have \Cauchy y\ + using convergent_series_Cauchy[where a = "\n. inverse (of_nat 3^n)" and \ = y] dist_norm + by (metis calculation(1) calculation(2)) + hence \\ x. y \ x\ + by (simp add: convergent_eq_Cauchy) + then obtain x where \y \ x\ + by blast + have norm_2: \norm (x - y (Suc n)) \ (inverse (of_nat 2))*(inverse (of_nat 3^n))\ for n + proof- + have \inverse (real_of_nat 3) < 1\ + by simp + moreover have \y 0 = 0\ + using y_def by auto + ultimately have \norm (x - y (Suc n)) + \ (inverse (of_nat 3)) * inverse (1 - (inverse (of_nat 3))) * ((inverse (of_nat 3)) ^ n)\ + using bound_Cauchy_to_lim[where c = "inverse (of_nat 3)" and y = y and x = x] + power_inverse semiring_norm(77) \y \ x\ + \\ n. norm (y (Suc n) - y n) \ inverse (of_nat 3^n)\ by (metis divide_inverse) + moreover have \inverse (real_of_nat 3) * inverse (1 - (inverse (of_nat 3))) + = inverse (of_nat 2)\ + by auto + ultimately show ?thesis + by (metis power_inverse) + qed + have \norm (x - y (Suc n)) \ (inverse (of_nat 2))*(inverse (of_nat 3^n))\ for n + using norm_2 by blast + have \\ M. \ n. norm ((*\<^sub>v) (T n) x) \ M\ + unfolding T_def apply auto + by (metis \\x. bounded (range (\n. (*\<^sub>v) (f n) x))\ bounded_iff rangeI) + then obtain M where \norm ((*\<^sub>v) (T n) x) \ M\ for n + by blast + have norm_1: \norm (T n) * norm (y (Suc n) - x) + norm ((*\<^sub>v) (T n) x) + \ inverse (real 2) * inverse (real 3 ^ n) * norm (T n) + norm ((*\<^sub>v) (T n) x)\ for n + proof- + have \norm (y (Suc n) - x) \ (inverse (of_nat 2))*(inverse (of_nat 3^n))\ + using \norm (x - y (Suc n)) \ (inverse (of_nat 2))*(inverse (of_nat 3^n))\ + by (simp add: norm_minus_commute) + moreover have \norm (T n) \ 0\ + by auto + ultimately have \norm (T n) * norm (y (Suc n) - x) + \ (inverse (of_nat 2))*(inverse (of_nat 3^n))*norm (T n)\ + by (simp add: \\n. T n \ 0\) + thus ?thesis by simp + qed + have inverse_2: \(inverse (of_nat 6)) * inverse (real 3 ^ n) * norm (T n) + \ norm ((*\<^sub>v) (T n) x)\ for n + proof- + have \(of_rat 2/3)*(inverse (of_nat 3^n))*norm (T n) \ norm ((*\<^sub>v) (T n) (y (Suc n)))\ + using f1 \\ n. T n \ 0\ \\ n. 
inverse (of_nat 3^n) > 0\ unfolding y_def by auto + also have \\ = norm ((*\<^sub>v) (T n) ((y (Suc n) - x) + x))\ + by auto + also have \\ = norm ((*\<^sub>v) (T n) (y (Suc n) - x) + (*\<^sub>v) (T n) x)\ + apply transfer apply auto by (metis diff_add_cancel linear_simps(1)) + also have \\ \ norm ((*\<^sub>v) (T n) (y (Suc n) - x)) + norm ((*\<^sub>v) (T n) x)\ + by (simp add: norm_triangle_ineq) + also have \\ \ norm (T n) * norm (y (Suc n) - x) + norm ((*\<^sub>v) (T n) x)\ + apply transfer apply auto using onorm by auto + also have \\ \ (inverse (of_nat 2))*(inverse (of_nat 3^n))*norm (T n) + + norm ((*\<^sub>v) (T n) x)\ + using norm_1 by blast + finally have \(of_rat 2/3) * inverse (real 3 ^ n) * norm (T n) + \ inverse (real 2) * inverse (real 3 ^ n) * norm (T n) + + norm ((*\<^sub>v) (T n) x)\ + by blast + hence \(of_rat 2/3) * inverse (real 3 ^ n) * norm (T n) + - inverse (real 2) * inverse (real 3 ^ n) * norm (T n) \ norm ((*\<^sub>v) (T n) x)\ + by linarith + moreover have \(of_rat 2/3) * inverse (real 3 ^ n) * norm (T n) + - inverse (real 2) * inverse (real 3 ^ n) * norm (T n) + = (inverse (of_nat 6)) * inverse (real 3 ^ n) * norm (T n)\ + by fastforce + ultimately show \(inverse (of_nat 6)) * inverse (real 3 ^ n) * norm (T n) \ norm ((*\<^sub>v) (T n) x)\ + by linarith + qed + have inverse_3: \(inverse (of_nat 6)) * (of_rat (4/3)^n) + \ (inverse (of_nat 6)) * inverse (real 3 ^ n) * norm (T n)\ for n + proof- + have \of_rat (4/3)^n = inverse (real 3 ^ n) * (of_nat 4^n)\ + apply auto by (metis divide_inverse_commute of_rat_divide power_divide of_rat_numeral_eq) + also have \\ \ inverse (real 3 ^ n) * norm (T n)\ + using \\n. norm (T n) \ of_nat (4^n)\ by simp + finally have \of_rat (4/3)^n \ inverse (real 3 ^ n) * norm (T n)\ + by blast + moreover have \inverse (of_nat 6) > (0::real)\ + by auto + ultimately show ?thesis by auto + qed + have inverse_1: \(inverse (of_nat 6)) * (of_rat (4/3)^n) \ M\ for n + proof- + have \(inverse (of_nat 6)) * (of_rat (4/3)^n) + \ (inverse (of_nat 6)) * inverse (real 3 ^ n) * norm (T n)\ + using inverse_3 by blast + also have \\ \ norm ((*\<^sub>v) (T n) x)\ + using inverse_2 by blast + finally have \(inverse (of_nat 6)) * (of_rat (4/3)^n) \ norm ((*\<^sub>v) (T n) x)\ + by auto + thus ?thesis using \\ n. norm ((*\<^sub>v) (T n) x) \ M\ by smt + qed + have \\n. M < (inverse (of_nat 6)) * (of_rat (4/3)^n)\ + using Real.real_arch_pow by auto + moreover have \(inverse (of_nat 6)) * (of_rat (4/3)^n) \ M\ for n + using inverse_1 by blast + ultimately show ?thesis by smt +qed + +subsection \A consequence of Banach-Steinhaus theorem\ + +corollary bounded_linear_limit_bounded_linear: + fixes f::\nat \ ('a::banach \\<^sub>L 'b::real_normed_vector)\ + assumes \\x. convergent (\n. (f n) *\<^sub>v x)\ + shows \\g. (\n. (*\<^sub>v) (f n)) \pointwise\ (*\<^sub>v) g\ + text\ + Explanation: If a sequence of bounded operators on a Banach space converges + pointwise, then the limit is also a bounded operator. +\ +proof- + have \\l. (\n. (*\<^sub>v) (f n) x) \ l\ for x + by (simp add: \\x. convergent (\n. (*\<^sub>v) (f n) x)\ convergentD) + hence \\F. (\n. (*\<^sub>v) (f n)) \pointwise\ F\ + unfolding pointwise_convergent_to_def by metis + obtain F where \(\n. (*\<^sub>v) (f n)) \pointwise\ F\ + using \\F. (\n. (*\<^sub>v) (f n)) \pointwise\ F\ by auto + have \\x. (\ n. (*\<^sub>v) (f n) x) \ F x\ + using \(\n. (*\<^sub>v) (f n)) \pointwise\ F\ apply transfer + by (simp add: pointwise_convergent_to_def) + have \bounded (range f)\ + using \\x. convergent (\n. 
(*\<^sub>v) (f n) x)\ banach_steinhaus + \\x. \l. (\n. (*\<^sub>v) (f n) x) \ l\ convergent_imp_bounded by blast + have norm_f_n: \\ M. \ n. norm (f n) \ M\ + unfolding bounded_def + by (meson UNIV_I \bounded (range f)\ bounded_iff image_eqI) + have \isCont (\ t::'b. norm t) y\ for y::'b + using Limits.isCont_norm by simp + hence \(\ n. norm ((*\<^sub>v) (f n) x)) \ (norm (F x))\ for x + using \\ x::'a. (\ n. (*\<^sub>v) (f n) x) \ F x\ by (simp add: tendsto_norm) + hence norm_f_n_x: \\ M. \ n. norm ((*\<^sub>v) (f n) x) \ M\ for x + using Elementary_Metric_Spaces.convergent_imp_bounded + by (metis UNIV_I \\ x::'a. (\ n. (*\<^sub>v) (f n) x) \ F x\ bounded_iff image_eqI) + have norm_f: \\K. \n. \x. norm ((*\<^sub>v) (f n) x) \ norm x * K\ + proof- + have \\ M. \ n. norm ((*\<^sub>v) (f n) x) \ M\ for x + using norm_f_n_x \\x. (\n. (*\<^sub>v) (f n) x) \ F x\ by blast + hence \\ M. \ n. norm (f n) \ M\ + using norm_f_n by simp + then obtain M::real where \\ M. \ n. norm (f n) \ M\ + by blast + have \\ n. \x. norm ((*\<^sub>v) (f n) x) \ norm x * norm (f n)\ + apply transfer apply auto by (metis mult.commute onorm) + thus ?thesis using \\ M. \ n. norm (f n) \ M\ + by (metis (no_types, hide_lams) dual_order.trans norm_eq_zero order_refl + real_mult_le_cancel_iff2 vector_space_over_itself.scale_zero_left zero_less_norm_iff) + qed + have norm_F_x: \\K. \x. norm (F x) \ norm x * K\ + proof- + have "\K. \n. \x. norm ((*\<^sub>v) (f n) x) \ norm x * K" + using norm_f \\x. (\n. (*\<^sub>v) (f n) x) \ F x\ by auto + thus ?thesis + using \\ x::'a. (\ n. (*\<^sub>v) (f n) x) \ F x\ apply transfer + by (metis Lim_bounded tendsto_norm) + qed + have \linear F\ + proof(rule linear_limit_linear) + show \linear ((*\<^sub>v) (f n))\ for n + apply transfer apply auto by (simp add: bounded_linear.linear) + show \f \pointwise\ F\ + using \(\n. (*\<^sub>v) (f n)) \pointwise\ F\ by auto + qed + moreover have \bounded_linear_axioms F\ + using norm_F_x by (simp add: \\x. (\n. (*\<^sub>v) (f n) x) \ F x\ bounded_linear_axioms_def) + ultimately have \bounded_linear F\ + unfolding bounded_linear_def by blast + hence \\g. (*\<^sub>v) g = F\ + using bounded_linear_Blinfun_apply by auto + thus ?thesis using \(\n. (*\<^sub>v) (f n)) \pointwise\ F\ apply transfer by auto +qed + +end diff --git a/thys/Banach_Steinhaus/Banach_Steinhaus_Missing.thy b/thys/Banach_Steinhaus/Banach_Steinhaus_Missing.thy new file mode 100644 --- /dev/null +++ b/thys/Banach_Steinhaus/Banach_Steinhaus_Missing.thy @@ -0,0 +1,898 @@ +(* + File: Banach_Steinhaus_Missing.thy + Author: Dominique Unruh, University of Tartu + Author: Jose Manuel Rodriguez Caballero, University of Tartu +*) +section \Missing results for the proof of Banach-Steinhaus theorem\ + +theory Banach_Steinhaus_Missing + imports + "HOL-Analysis.Infinite_Set_Sum" + +begin +subsection \Results missing for the proof of Banach-Steinhaus theorem\ +text \ + The results proved here are preliminaries for the proof of Banach-Steinhaus theorem using Sokal's + approach, but they do not explicitly appear in Sokal's paper ~\cite{sokal2011reall}. 
+\ + +text\Notation for the norm\ +bundle notation_norm begin +notation norm ("\_\") +end + +bundle no_notation_norm begin +no_notation norm ("\_\") +end + +unbundle notation_norm + +text\Notation for apply bilinear function\ +bundle notation_blinfun_apply begin +notation blinfun_apply (infixr "*\<^sub>v" 70) +end + +bundle no_notation_blinfun_apply begin +no_notation blinfun_apply (infixr "*\<^sub>v" 70) +end + +unbundle notation_blinfun_apply + +lemma bdd_above_plus: + fixes f::\'a \ real\ + assumes \bdd_above (f ` S)\ and \bdd_above (g ` S)\ + shows \bdd_above ((\ x. f x + g x) ` S)\ + text \ + Explanation: If the images of two real-valued functions \<^term>\f\,\<^term>\g\ are bounded above on a + set \<^term>\S\, then the image of their sum is bounded on \<^term>\S\. +\ +proof- + obtain M where \\ x. x\S \ f x \ M\ + using \bdd_above (f ` S)\ unfolding bdd_above_def by blast + obtain N where \\ x. x\S \ g x \ N\ + using \bdd_above (g ` S)\ unfolding bdd_above_def by blast + have \\ x. x\S \ f x + g x \ M + N\ + using \\x. x \ S \ f x \ M\ \\x. x \ S \ g x \ N\ by fastforce + thus ?thesis unfolding bdd_above_def by blast +qed + +text\The maximum of two functions\ +definition pointwise_max:: "('a \ 'b::ord) \ ('a \ 'b) \ ('a \ 'b)" where + \pointwise_max f g = (\x. max (f x) (g x))\ + +lemma max_Sup_absorb_left: + fixes f g::\'a \ real\ + assumes \X \ {}\ and \bdd_above (f ` X)\ and \bdd_above (g ` X)\ and \Sup (f ` X) \ Sup (g ` X)\ + shows \Sup ((pointwise_max f g) ` X) = Sup (f ` X)\ + + text \Explanation: For real-valued functions \<^term>\f\ and \<^term>\g\, if the supremum of \<^term>\f\ is + greater-equal the supremum of \<^term>\g\, then the supremum of \<^term>\max f g\ equals the supremum of + \<^term>\f\. (Under some technical conditions.)\ + +proof- + have y_Sup: \y \ ((\ x. max (f x) (g x)) ` X) \ y \ Sup (f ` X)\ for y + proof- + assume \y \ ((\ x. max (f x) (g x)) ` X)\ + then obtain x where \y = max (f x) (g x)\ and \x \ X\ + by blast + have \f x \ Sup (f ` X)\ + by (simp add: \x \ X\ \bdd_above (f ` X)\ cSUP_upper) + moreover have \g x \ Sup (g ` X)\ + by (simp add: \x \ X\ \bdd_above (g ` X)\ cSUP_upper) + ultimately have \max (f x) (g x) \ Sup (f ` X)\ + using \Sup (f ` X) \ Sup (g ` X)\ by auto + thus ?thesis by (simp add: \y = max (f x) (g x)\) + qed + have y_f_X: \y \ f ` X \ y \ Sup ((\ x. max (f x) (g x)) ` X)\ for y + proof- + assume \y \ f ` X\ + then obtain x where \x \ X\ and \y = f x\ + by blast + have \bdd_above ((\ \. max (f \) (g \)) ` X)\ + by (metis (no_types) \bdd_above (f ` X)\ \bdd_above (g ` X)\ bdd_above_image_sup sup_max) + moreover have \e > 0 \ \ k \ (\ \. max (f \) (g \)) ` X. y \ k + e\ + for e::real + using \Sup (f ` X) \ Sup (g ` X)\ by (smt \x \ X\ \y = f x\ image_eqI) + ultimately show ?thesis + using \x \ X\ \y = f x\ cSUP_upper by fastforce + qed + have \Sup ((\ x. max (f x) (g x)) ` X) \ Sup (f ` X)\ + using y_Sup by (simp add: \X \ {}\ cSup_least) + moreover have \Sup ((\ x. 
max (f x) (g x)) ` X) \ Sup (f ` X)\ + using y_f_X by (metis (mono_tags) cSup_least calculation empty_is_image) + ultimately show ?thesis unfolding pointwise_max_def by simp +qed + +lemma max_Sup_absorb_right: + fixes f g::\'a \ real\ + assumes \X \ {}\ and \bdd_above (f ` X)\ and \bdd_above (g ` X)\ and \Sup (f ` X) \ Sup (g ` X)\ + shows \Sup ((pointwise_max f g) ` X) = Sup (g ` X)\ + text \ + Explanation: For real-valued functions \<^term>\f\ and \<^term>\g\ and a nonempty set \<^term>\X\, such that + the \<^term>\f\ and \<^term>\g\ are bounded above on \<^term>\X\, if the supremum of \<^term>\f\ on \<^term>\X\ is + lower-equal the supremum of \<^term>\g\ on \<^term>\X\, then the supremum of \<^term>\pointwise_max f g\ on \<^term>\X\ + equals the supremum of \<^term>\g\. This is the right analog of @{text max_Sup_absorb_left}. +\ +proof- + have \Sup ((pointwise_max g f) ` X) = Sup (g ` X)\ + using assms by (simp add: max_Sup_absorb_left) + moreover have \pointwise_max g f = pointwise_max f g\ + unfolding pointwise_max_def by auto + ultimately show ?thesis by simp +qed + +lemma max_Sup: + fixes f g::\'a \ real\ + assumes \X \ {}\ and \bdd_above (f ` X)\ and \bdd_above (g ` X)\ + shows \Sup ((pointwise_max f g) ` X) = max (Sup (f ` X)) (Sup (g ` X))\ + text \ + Explanation: Let \<^term>\X\ be a nonempty set. Two supremum over \<^term>\X\ of the maximum of two + real-value functions is equal to the maximum of their suprema over \<^term>\X\, provided that the + functions are bounded above on \<^term>\X\. +\ +proof(cases \Sup (f ` X) \ Sup (g ` X)\) + case True thus ?thesis by (simp add: assms(1) assms(2) assms(3) max_Sup_absorb_left) +next + case False + have f1: "\ 0 \ Sup (f ` X) + - 1 * Sup (g ` X)" + using False by linarith + hence "Sup (Banach_Steinhaus_Missing.pointwise_max f g ` X) = Sup (g ` X)" + by (simp add: assms(1) assms(2) assms(3) max_Sup_absorb_right) + thus ?thesis + using f1 by linarith +qed + + +lemma identity_telescopic: + fixes x :: \_ \ 'a::real_normed_vector\ + assumes \x \ l\ + shows \(\ N. sum (\ k. x (Suc k) - x k) {n..N}) \ l - x n\ + text\ + Expression of a limit as a telescopic series. + Explanation: If \<^term>\x\ converges to \<^term>\l\ then the sum \<^term>\sum (\ k. x (Suc k) - x k) {n..N}\ + converges to \<^term>\l - x n\ as \<^term>\N\ goes to infinity. +\ +proof- + have \(\ p. x (p + Suc n)) \ l\ + using \x \ l\ by (rule LIMSEQ_ignore_initial_segment) + hence \(\ p. x (Suc n + p)) \ l\ + by (simp add: add.commute) + hence \(\ p. x (Suc (n + p))) \ l\ + by simp + hence \(\ t. (- (x n)) + (\ p. x (Suc (n + p))) t ) \ (- (x n)) + l\ + using tendsto_add_const_iff by metis + hence f1: \(\ p. x (Suc (n + p)) - x n)\ l - x n\ + by simp + have \sum (\ k. x (Suc k) - x k) {n..n+p} = x (Suc (n+p)) - x n\ for p + by (simp add: sum_Suc_diff) + moreover have \(\ N. sum (\ k. x (Suc k) - x k) {n..N}) (n + t) + = (\ p. sum (\ k. x (Suc k) - x k) {n..n+p}) t\ for t + by blast + ultimately have \(\ p. (\ N. sum (\ k. x (Suc k) - x k) {n..N}) (n + p)) \ l - x n\ + using f1 by simp + hence \(\ p. (\ N. sum (\ k. x (Suc k) - x k) {n..N}) (p + n)) \ l - x n\ + by (simp add: add.commute) + hence \(\ p. (\ N. sum (\ k. x (Suc k) - x k) {n..N}) p) \ l - x n\ + using Topological_Spaces.LIMSEQ_offset[where f = "(\ N. sum (\ k. x (Suc k) - x k) {n..N})" + and a = "l - x n" and k = n] by blast + hence \(\ M. (\ N. sum (\ k. x (Suc k) - x k) {n..N}) M) \ l - x n\ + by simp + thus ?thesis by blast +qed + +lemma bound_Cauchy_to_lim: + assumes \y \ x\ and \\n. 
\y (Suc n) - y n\ \ c^n\ and \y 0 = 0\ and \c < 1\ + shows \\x - y (Suc n)\ \ (c / (1 - c)) * c ^ n\ + text\ + Inequality about a sequence of approximations assuming that the sequence of differences is bounded + by a geometric progression. + Explanation: Let \<^term>\y\ be a sequence converging to \<^term>\x\. + If \<^term>\y\ satisfies the inequality \\y (Suc n) - y n\ \ c ^ n\ for some \<^term>\c < 1\ and + assuming \<^term>\y 0 = 0\ then the inequality \\x - y (Suc n)\ \ (c / (1 - c)) * c ^ n\ holds. +\ +proof- + have \c \ 0\ + using \\ n. \y (Suc n) - y n\ \ c^n\ by (smt norm_imp_pos_and_ge power_Suc0_right) + have norm_1: \norm (\k = Suc n..N. y (Suc k) - y k) \ (c ^ Suc n)/(1 - c)\ for N + proof(cases \N < Suc n\) + case True + hence \\sum (\k. y (Suc k) - y k) {Suc n .. N}\ = 0\ + by auto + thus ?thesis using \c \ 0\ \c < 1\ by auto + next + case False + hence \N \ Suc n\ + by auto + have \c^(Suc N) \ 0\ + using \c \ 0\ by auto + have \1 - c > 0\ + by (simp add: \c < 1\) + hence \(1 - c)/(1 - c) = 1\ + by auto + have \\sum (\k. y (Suc k) - y k) {Suc n .. N}\ \ (sum (\k. \y (Suc k) - y k\) {Suc n .. N})\ + by (simp add: sum_norm_le) + hence \\sum (\k. y (Suc k) - y k) {Suc n .. N}\ \ (sum (power c) {Suc n .. N})\ + by (simp add: assms(2) sum_norm_le) + hence \(1 - c) * \sum (\k. y (Suc k) - y k) {Suc n .. N}\ + \ (1 - c) * (sum (power c) {Suc n .. N})\ + using \0 < 1 - c\ real_mult_le_cancel_iff2 by blast + also have \\ = c^(Suc n) - c^(Suc N)\ + using Set_Interval.sum_gp_multiplied \Suc n \ N\ by blast + also have \\ \ c^(Suc n)\ + using \c^(Suc N) \ 0\ by auto + finally have \(1 - c) * \\k = Suc n..N. y (Suc k) - y k\ \ c ^ Suc n\ + by blast + hence \((1 - c) * \\k = Suc n..N. y (Suc k) - y k\)/(1 - c) + \ (c ^ Suc n)/(1 - c)\ + using \0 < 1 - c\ by (smt divide_right_mono) + thus \\\k = Suc n..N. y (Suc k) - y k\ \ (c ^ Suc n)/(1 - c)\ + using \0 < 1 - c\ by auto + qed + have \(\ N. (sum (\k. y (Suc k) - y k) {Suc n .. N})) \ x - y (Suc n)\ + by (metis (no_types) \y \ x\ identity_telescopic) + hence \(\ N. \sum (\k. y (Suc k) - y k) {Suc n .. N}\) \ \x - y (Suc n)\\ + using tendsto_norm by blast + hence \\x - y (Suc n)\ \ (c ^ Suc n)/(1 - c)\ + using norm_1 Lim_bounded by blast + hence \\x - y (Suc n)\ \ (c ^ Suc n)/(1 - c)\ + by auto + moreover have \(c ^ Suc n)/(1 - c) = (c / (1 - c)) * (c ^ n)\ + by (simp add: divide_inverse_commute) + ultimately show \\x - y (Suc n)\ \ (c / (1 - c)) * (c ^ n)\ by linarith +qed + +lemma onorm_open_ball: + includes notation_norm + shows \\f\ = Sup { \f *\<^sub>v x\ | x. \x\ < 1 }\ + text \ + Explanation: Let \<^term>\f\ be a bounded linear operator. The operator norm of \<^term>\f\ is the + supremum of \<^term>\norm (f x)\ for \<^term>\x\ such that \<^term>\norm x < 1\. +\ +proof(cases \(UNIV::'a set) = 0\) + case True + hence \x = 0\ for x::'a + by auto + hence \f *\<^sub>v x = 0\ for x + by (metis (full_types) blinfun.zero_right) + hence \\f\ = 0\ + by (simp add: blinfun_eqI zero_blinfun.rep_eq) + have \{ \f *\<^sub>v x\ | x. \x\ < 1} = {0}\ + by (smt Collect_cong \\x. f *\<^sub>v x = 0\ norm_zero singleton_conv) + hence \Sup { \f *\<^sub>v x\ | x. \x\ < 1} = 0\ + by simp + thus ?thesis using \\f\ = 0\ by auto +next + case False + hence \(UNIV::'a set) \ 0\ + by simp + have nonnegative: \\f *\<^sub>v x\ \ 0\ for x + by simp + have \\ x::'a. 
x \ 0\ + using \UNIV \ 0\ by auto + then obtain x::'a where \x \ 0\ + by blast + hence \\x\ \ 0\ + by auto + define y where \y = x /\<^sub>R \x\\ + have \norm y = \ x /\<^sub>R \x\ \\ + unfolding y_def by auto + also have \\ = \x\ /\<^sub>R \x\\ + by auto + also have \\ = 1\ + using \\x\ \ 0\ by auto + finally have \\y\ = 1\ + by blast + hence norm_1_non_empty: \{ \f *\<^sub>v x\ | x. \x\ = 1} \ {}\ + by blast + have norm_1_bounded: \bdd_above { \f *\<^sub>v x\ | x. \x\ = 1}\ + unfolding bdd_above_def apply auto + by (metis norm_blinfun) + have norm_less_1_non_empty: \{\f *\<^sub>v x\ | x. \x\ < 1} \ {}\ + by (metis (mono_tags, lifting) Collect_empty_eq_bot bot_empty_eq empty_iff norm_zero + zero_less_one) + have norm_less_1_bounded: \bdd_above {\f *\<^sub>v x\ | x. \x\ < 1}\ + proof- + have \\r. \a r\ < 1 \ \f *\<^sub>v (a r)\ \ r\ for a :: "real \ 'a" + proof- + obtain r :: "('a \\<^sub>L 'b) \ real" where + "\f x. 0 \ r f \ (bounded_linear f \ \f *\<^sub>v x\ \ \x\ * r f)" + using bounded_linear.nonneg_bounded by moura + have \\ \f\ < 0\ + by simp + hence "(\r. \f\ * \a r\ \ r) \ (\r. \a r\ < 1 \ \f *\<^sub>v a r\ \ r)" + by (meson less_eq_real_def mult_le_cancel_left2) + thus ?thesis using dual_order.trans norm_blinfun by blast + qed + hence \\ M. \ x. \x\ < 1 \ \f *\<^sub>v x\ \ M\ + by metis + thus ?thesis by auto + qed + have Sup_non_neg: \Sup {\f *\<^sub>v x\ |x. \x\ = 1} \ 0\ + by (smt Collect_empty_eq cSup_upper mem_Collect_eq nonnegative norm_1_bounded norm_1_non_empty) + have \{0::real} \ {}\ + by simp + have \bdd_above {0::real}\ + by simp + show \\f\ = Sup {\f *\<^sub>v x\ | x. \x\ < 1}\ + proof(cases \\x. f *\<^sub>v x = 0\) + case True + have \\f *\<^sub>v x\ = 0\ for x + by (simp add: True) + hence \{\f *\<^sub>v x\ | x. \x\ < 1 } \ {0}\ + by blast + moreover have \{\f *\<^sub>v x\ | x. \x\ < 1 } \ {0}\ + using calculation norm_less_1_non_empty by fastforce + ultimately have \{\f *\<^sub>v x\ | x. \x\ < 1 } = {0}\ + by blast + hence Sup1: \Sup {\f *\<^sub>v x\ | x. \x\ < 1 } = 0\ + by simp + have \\f\ = 0\ + by (simp add: True blinfun_eqI) + moreover have \Sup {\f *\<^sub>v x\ | x. \x\ < 1} = 0\ + using Sup1 by blast + ultimately show ?thesis by simp + next + case False + have norm_f_eq_leq: \y \ {\f *\<^sub>v x\ | x. \x\ = 1} \ + y \ Sup {\f *\<^sub>v x\ | x. \x\ < 1}\ for y + proof- + assume \y \ {\f *\<^sub>v x\ | x. \x\ = 1}\ + hence \\ x. y = \f *\<^sub>v x\ \ \x\ = 1\ + by blast + then obtain x where \y = \f *\<^sub>v x\\ and \\x\ = 1\ + by auto + define y' where \y' n = (1 - (inverse (real (Suc n)))) *\<^sub>R y\ for n + have \y' n \ {\f *\<^sub>v x\ | x. \x\ < 1}\ for n + proof- + have \y' n = (1 - (inverse (real (Suc n)))) *\<^sub>R \f *\<^sub>v x\\ + using y'_def \y = \f *\<^sub>v x\\ by blast + also have \... = \(1 - (inverse (real (Suc n))))\ *\<^sub>R \f *\<^sub>v x\\ + by (metis (mono_tags, hide_lams) \y = \f *\<^sub>v x\\ abs_1 abs_le_self_iff abs_of_nat + abs_of_nonneg add_diff_cancel_left' add_eq_if cancel_comm_monoid_add_class.diff_cancel + diff_ge_0_iff_ge eq_iff_diff_eq_0 inverse_1 inverse_le_iff_le nat.distinct(1) of_nat_0 + of_nat_Suc of_nat_le_0_iff zero_less_abs_iff zero_neq_one) + also have \... 
= \f *\<^sub>v ((1 - (inverse (real (Suc n)))) *\<^sub>R x)\\ + by (simp add: blinfun.scaleR_right) + finally have y'_1: \y' n = \f *\<^sub>v ( (1 - (inverse (real (Suc n)))) *\<^sub>R x)\\ + by blast + have \\(1 - (inverse (Suc n))) *\<^sub>R x\ = (1 - (inverse (real (Suc n)))) * \x\\ + by (simp add: linordered_field_class.inverse_le_1_iff) + hence \\(1 - (inverse (Suc n))) *\<^sub>R x\ < 1\ + by (simp add: \\x\ = 1\) + thus ?thesis using y'_1 by blast + qed + have \(\n. (1 - (inverse (real (Suc n)))) ) \ 1\ + using Limits.LIMSEQ_inverse_real_of_nat_add_minus by simp + hence \(\n. (1 - (inverse (real (Suc n)))) *\<^sub>R y) \ 1 *\<^sub>R y\ + using Limits.tendsto_scaleR by blast + hence \(\n. (1 - (inverse (real (Suc n)))) *\<^sub>R y) \ y\ + by simp + hence \(\n. y' n) \ y\ + using y'_def by simp + hence \y' \ y\ + by simp + have \y' n \ Sup {\f *\<^sub>v x\ | x. \x\ < 1}\ for n + using cSup_upper \\n. y' n \ {\f *\<^sub>v x\ |x. \x\ < 1}\ norm_less_1_bounded by blast + hence \y \ Sup {\f *\<^sub>v x\ | x. \x\ < 1}\ + using \y' \ y\ Topological_Spaces.Sup_lim by (meson LIMSEQ_le_const2) + thus ?thesis by blast + qed + hence \Sup {\f *\<^sub>v x\ | x. \x\ = 1} \ Sup {\f *\<^sub>v x\ | x. \x\ < 1}\ + by (metis (lifting) cSup_least norm_1_non_empty) + have \y \ {\f *\<^sub>v x\ | x. \x\ < 1} \ y \ Sup {\f *\<^sub>v x\ | x. \x\ = 1}\ for y + proof(cases \y = 0\) + case True thus ?thesis by (simp add: Sup_non_neg) + next + case False + hence \y \ 0\ by blast + assume \y \ {\f *\<^sub>v x\ | x. \x\ < 1}\ + hence \\ x. y = \f *\<^sub>v x\ \ \x\ < 1\ + by blast + then obtain x where \y = \f *\<^sub>v x\\ and \\x\ < 1\ + by blast + have \(1/\x\) * y = (1/\x\) * \f x\\ + by (simp add: \y = \f *\<^sub>v x\\) + also have \... = \1/\x\\ * \f *\<^sub>v x\\ + by simp + also have \... = \(1/\x\) *\<^sub>R (f *\<^sub>v x)\\ + by simp + also have \... = \f *\<^sub>v ((1/\x\) *\<^sub>R x)\\ + by (simp add: blinfun.scaleR_right) + finally have \(1/\x\) * y = \f *\<^sub>v ((1/\x\) *\<^sub>R x)\\ + by blast + have \x \ 0\ + using \y \ 0\ \y = \f *\<^sub>v x\\ blinfun.zero_right by auto + have \\ (1/\x\) *\<^sub>R x \ = \ (1/\x\) \ * \x\\ + by simp + also have \... = (1/\x\) * \x\\ + by simp + finally have \\(1/\x\) *\<^sub>R x\ = 1\ + using \x \ 0\ by simp + hence \(1/\x\) * y \ { \f *\<^sub>v x\ | x. \x\ = 1}\ + using \1 / \x\ * y = \f *\<^sub>v (1 / \x\) *\<^sub>R x\\ by blast + hence \(1/\x\) * y \ Sup { \f *\<^sub>v x\ | x. \x\ = 1}\ + by (simp add: cSup_upper norm_1_bounded) + moreover have \y \ (1/\x\) * y\ + by (metis \\x\ < 1\ \y = \f *\<^sub>v x\\ mult_le_cancel_right1 norm_not_less_zero + order.strict_implies_order \x \ 0\ less_divide_eq_1_pos zero_less_norm_iff) + ultimately show ?thesis by linarith + qed + hence \Sup { \f *\<^sub>v x\ | x. \x\ < 1} \ Sup { \f *\<^sub>v x\ | x. \x\ = 1}\ + by (smt cSup_least norm_less_1_non_empty) + hence \Sup { \f *\<^sub>v x\ | x. \x\ = 1} = Sup { \f *\<^sub>v x\ | x. \x\ < 1}\ + using \Sup {\f *\<^sub>v x\ |x. norm x = 1} \ Sup { \f *\<^sub>v x\ |x. \x\ < 1}\ by linarith + have f1: \(SUP x. \f *\<^sub>v x\ / \x\) = Sup { \f *\<^sub>v x\ / \x\ | x. True}\ + by (simp add: full_SetCompr_eq) + have \y \ { \f *\<^sub>v x\ / \x\ |x. True} \ y \ { \f *\<^sub>v x\ |x. \x\ = 1} \ {0}\ + for y + proof- + assume \y \ { \f *\<^sub>v x\ / \x\ |x. True}\ show ?thesis + proof(cases \y = 0\) + case True thus ?thesis by simp + next + case False + have \\ x. y = \f *\<^sub>v x\ / \x\\ + using \y \ { \f *\<^sub>v x\ / \x\ |x. 
True}\ by auto + then obtain x where \y = \f *\<^sub>v x\ / \x\\ + by blast + hence \y = \(1/\x\)\ * \ f *\<^sub>v x \\ + by simp + hence \y = \(1/\x\) *\<^sub>R (f *\<^sub>v x)\\ + by simp + hence \y = \f ((1/\x\) *\<^sub>R x)\\ + by (simp add: blinfun.scaleR_right) + moreover have \\ (1/\x\) *\<^sub>R x \ = 1\ + using False \y = \f *\<^sub>v x\ / \x\\ by auto + ultimately have \y \ {\f *\<^sub>v x\ |x. \x\ = 1}\ + by blast + thus ?thesis by blast + qed + qed + moreover have \y \ {\f x\ |x. \x\ = 1} \ {0} \ y \ {\f *\<^sub>v x\ / \x\ |x. True}\ + for y + proof(cases \y = 0\) + case True thus ?thesis by auto + next + case False + hence \y \ {0}\ + by simp + moreover assume \y \ {\f *\<^sub>v x\ |x. \x\ = 1} \ {0}\ + ultimately have \y \ {\f *\<^sub>v x\ |x. \x\ = 1}\ + by simp + then obtain x where \\x\ = 1\ and \y = \f *\<^sub>v x\\ + by auto + have \y = \f *\<^sub>v x\ / \x\\ using \\x\ = 1\ \y = \f *\<^sub>v x\\ + by simp + thus ?thesis by auto + qed + ultimately have \{\f *\<^sub>v x\ / \x\ |x. True} = {\f *\<^sub>v x\ |x. \x\ = 1} \ {0}\ + by blast + hence \Sup {\f *\<^sub>v x\ / \x\ |x. True} = Sup ({\f *\<^sub>v x\ |x. \x\ = 1} \ {0})\ + by simp + have "\r s. \ (r::real) \ s \ sup r s = s" + by (metis (lifting) sup.absorb_iff1 sup_commute) + hence \Sup ({\f *\<^sub>v x\ |x. \x\ = 1} \ {(0::real)}) + = max (Sup {\f *\<^sub>v x\ |x. \x\ = 1}) (Sup {0::real})\ + using \0 \ Sup {\f *\<^sub>v x\ |x. \x\ = 1}\ \bdd_above {0}\ \{0} \ {}\ cSup_singleton + cSup_union_distrib max.absorb_iff1 sup_commute norm_1_bounded norm_1_non_empty + by (metis (no_types, lifting) ) + moreover have \Sup {(0::real)} = (0::real)\ + by simp + ultimately have \Sup ({\f *\<^sub>v x\ |x. \x\ = 1} \ {0}) = Sup {\f *\<^sub>v x\ |x. \x\ = 1}\ + using Sup_non_neg by linarith + moreover have \Sup ( {\f *\<^sub>v x\ |x. \x\ = 1} \ {0}) + = max (Sup {\f *\<^sub>v x\ |x. \x\ = 1}) (Sup {0}) \ + using Sup_non_neg \Sup ({\f *\<^sub>v x\ |x. \x\ = 1} \ {0}) + = max (Sup {\f *\<^sub>v x\ |x. \x\ = 1}) (Sup {0})\ + by auto + ultimately have f2: \Sup {\f *\<^sub>v x\ / \x\ | x. True} = Sup {\f *\<^sub>v x\ | x. \x\ = 1}\ + using \Sup {\f *\<^sub>v x\ / \x\ |x. True} = Sup ({\f *\<^sub>v x\ |x. \x\ = 1} \ {0})\ by linarith + have \(SUP x. \f *\<^sub>v x\ / \x\) = Sup {\f *\<^sub>v x\ | x. \x\ = 1}\ + using f1 f2 by linarith + hence \(SUP x. \f *\<^sub>v x\ / \x\) = Sup {\f *\<^sub>v x\ | x. \x\ < 1 }\ + by (simp add: \Sup {\f *\<^sub>v x\ |x. \x\ = 1} = Sup {\f *\<^sub>v x\ |x. \x\ < 1}\) + thus ?thesis apply transfer by (simp add: onorm_def) + qed +qed + +lemma onorm_r: + includes notation_norm + assumes \r > 0\ + shows \\f\ = Sup ((\x. \f *\<^sub>v x\) ` (ball 0 r)) / r\ + text \ + Explanation: The norm of \<^term>\f\ is \<^term>\1/r\ of the supremum of the norm of \<^term>\f *\<^sub>v x\ for + \<^term>\x\ in the ball of radius \<^term>\r\ centered at the origin. +\ +proof- + have \\f\ = Sup {\f *\<^sub>v x\ |x. \x\ < 1}\ + using onorm_open_ball by blast + moreover have \{\f *\<^sub>v x\ |x. \x\ < 1} = (\x. \f *\<^sub>v x\) ` (ball 0 1)\ + unfolding ball_def by auto + ultimately have onorm_f: \\f\ = Sup ((\x. \f *\<^sub>v x\) ` (ball 0 1))\ + by simp + have s2: \x \ (\t. r *\<^sub>R \f *\<^sub>v t\) ` ball 0 1 \ x \ r * Sup ((\t. \f *\<^sub>v t\) ` ball 0 1)\ for x + proof- + assume \x \ (\t. r *\<^sub>R \f *\<^sub>v t\) ` ball 0 1\ + hence \\ t. 
x = r *\<^sub>R \f *\<^sub>v t\ \ \t\ < 1\ + by auto + then obtain t where \x = r *\<^sub>R \f *\<^sub>v t\\ and \\t\ < 1\ + by blast + define y where \y = x /\<^sub>R r\ + have \x = r * (inverse r * x)\ + using \x = r *\<^sub>R norm (f t)\ by auto + hence \x - (r * (inverse r * x)) \ 0\ + by linarith + hence \x \ r * (x /\<^sub>R r)\ + by auto + have \y \ (\k. \f *\<^sub>v k\) ` ball 0 1\ + unfolding y_def by (smt \x \ (\t. r *\<^sub>R \f *\<^sub>v t\) ` ball 0 1\ assms image_iff + inverse_inverse_eq pos_le_divideR_eq positive_imp_inverse_positive) + moreover have \x \ r * y\ + using \x \ r * (x /\<^sub>R r)\ y_def by blast + ultimately have y_norm_f: \y \ (\t. \f *\<^sub>v t\) ` ball 0 1 \ x \ r * y\ + by blast + have \(\t. \f *\<^sub>v t\) ` ball 0 1 \ {}\ + by simp + moreover have \bdd_above ((\t. \f *\<^sub>v t\) ` ball 0 1)\ + by (simp add: bounded_linear_image blinfun.bounded_linear_right bounded_imp_bdd_above + bounded_norm_comp) + moreover have \\ y. y \ (\t. \f *\<^sub>v t\) ` ball 0 1 \ x \ r * y\ + using y_norm_f by blast + ultimately show ?thesis + by (smt \0 < r\ cSup_upper ordered_comm_semiring_class.comm_mult_left_mono) + qed + have s3: \(\x. x \ (\t. r * \f *\<^sub>v t\) ` ball 0 1 \ x \ y) \ + r * Sup ((\t. \f *\<^sub>v t\) ` ball 0 1) \ y\ for y + proof- + assume \\x. x \ (\t. r * \f *\<^sub>v t\) ` ball 0 1 \ x \ y\ + have x_leq: \x \ (\t. \f *\<^sub>v t\) ` ball 0 1 \ x \ y / r\ for x + proof- + assume \x \ (\t. \f *\<^sub>v t\) ` ball 0 1\ + then obtain t where \t \ ball (0::'a) 1\ and \x = \f *\<^sub>v t\\ + by auto + define x' where \x' = r *\<^sub>R x\ + have \x' = r * \f *\<^sub>v t\\ + by (simp add: \x = \f *\<^sub>v t\\ x'_def) + hence \x' \ (\t. r * \f *\<^sub>v t\) ` ball 0 1\ + using \t \ ball (0::'a) 1\ by auto + hence \x' \ y\ + using \\x. x \ (\t. r * \f *\<^sub>v t\) ` ball 0 1 \ x \ y\ by blast + thus \x \ y / r\ + unfolding x'_def using \r > 0\ by (simp add: mult.commute pos_le_divide_eq) + qed + have \(\t. \f *\<^sub>v t\) ` ball 0 1 \ {}\ + by simp + moreover have \bdd_above ((\t. \f *\<^sub>v t\) ` ball 0 1)\ + by (simp add: bounded_linear_image blinfun.bounded_linear_right bounded_imp_bdd_above + bounded_norm_comp) + ultimately have \Sup ((\t. \f *\<^sub>v t\) ` ball 0 1) \ y/r\ + using x_leq by (simp add: \bdd_above ((\t. \f *\<^sub>v t\) ` ball 0 1)\ cSup_least) + thus ?thesis using \r > 0\ + by (smt divide_strict_right_mono nonzero_mult_div_cancel_left) + qed + have norm_scaleR: \norm \ ((*\<^sub>R) r) = ((*\<^sub>R) \r\) \ (norm::'a \ real)\ + by auto + have f_x1: \f (r *\<^sub>R x) = r *\<^sub>R f x\ for x + by (simp add: blinfun.scaleR_right) + have \ball (0::'a) r = ((*\<^sub>R) r) ` (ball 0 1)\ + by (smt assms ball_scale nonzero_mult_div_cancel_left right_inverse_eq scale_zero_right) + hence \Sup ((\t. \f *\<^sub>v t\) ` (ball 0 r)) = Sup ((\t. \f *\<^sub>v t\) ` (((*\<^sub>R) r) ` (ball 0 1)))\ + by simp + also have \\ = Sup (((\t. \f *\<^sub>v t\) \ ((*\<^sub>R) r)) ` (ball 0 1))\ + using Sup.SUP_image by auto + also have \\ = Sup ((\t. \f *\<^sub>v (r *\<^sub>R t)\) ` (ball 0 1))\ + using f_x1 by (simp add: comp_assoc) + also have \\ = Sup ((\t. \r\ *\<^sub>R \f *\<^sub>v t\) ` (ball 0 1))\ + using norm_scaleR f_x1 by auto + also have \\ = Sup ((\t. r *\<^sub>R \f *\<^sub>v t\) ` (ball 0 1))\ + using \r > 0\ by auto + also have \\ = r * Sup ((\t. \f *\<^sub>v t\) ` (ball 0 1))\ + apply (rule cSup_eq_non_empty) apply simp using s2 apply auto using s3 by auto + also have \\ = r * \f\\ + using onorm_f by auto + finally have \Sup ((\t. 
\f *\<^sub>v t\) ` ball 0 r) = r * \f\\ + by blast + thus \\f\ = Sup ((\x. \f *\<^sub>v x\) ` (ball 0 r)) / r\ using \r > 0\ by simp +qed + +text\Pointwise convergence\ +definition pointwise_convergent_to :: + \( nat \ ('a \ 'b::topological_space) ) \ ('a \ 'b) \ bool\ + (\((_)/ \pointwise\ (_))\ [60, 60] 60) where + \pointwise_convergent_to x l = (\ t::'a. (\ n. (x n) t) \ l t)\ + +lemma linear_limit_linear: + fixes f :: \_ \ ('a::real_vector \ 'b::real_normed_vector)\ + assumes \\n. linear (f n)\ and \f \pointwise\ F\ + shows \linear F\ + text\ + Explanation: If a family of linear operators converges pointwise, then the limit is also a linear + operator. +\ +proof + show "F (x + y) = F x + F y" for x y + proof- + have "\a. F a = lim (\n. f n a)" + using \f \pointwise\ F\ unfolding pointwise_convergent_to_def by (metis (full_types) limI) + moreover have "\f b c g. (lim (\n. g n + f n) = (b::'b) + c \ \ f \ c) \ \ g \ b" + by (metis (no_types) limI tendsto_add) + moreover have "\a. (\n. f n a) \ F a" + using assms(2) pointwise_convergent_to_def by force + ultimately have + lim_sum: \lim (\ n. (f n) x + (f n) y) = lim (\ n. (f n) x) + lim (\ n. (f n) y)\ + by metis + have \(f n) (x + y) = (f n) x + (f n) y\ for n + using \\ n. linear (f n)\ unfolding linear_def using Real_Vector_Spaces.linear_iff assms(1) + by auto + hence \lim (\ n. (f n) (x + y)) = lim (\ n. (f n) x + (f n) y)\ + by simp + hence \lim (\ n. (f n) (x + y)) = lim (\ n. (f n) x) + lim (\ n. (f n) y)\ + using lim_sum by simp + moreover have \(\ n. (f n) (x + y)) \ F (x + y)\ + using \f \pointwise\ F\ unfolding pointwise_convergent_to_def by blast + moreover have \(\ n. (f n) x) \ F x\ + using \f \pointwise\ F\ unfolding pointwise_convergent_to_def by blast + moreover have \(\ n. (f n) y) \ F y\ + using \f \pointwise\ F\ unfolding pointwise_convergent_to_def by blast + ultimately show ?thesis + by (metis limI) + qed + show "F (r *\<^sub>R x) = r *\<^sub>R F x" for r and x + proof- + have \(f n) (r *\<^sub>R x) = r *\<^sub>R (f n) x\ for n + using \\ n. linear (f n)\ + by (simp add: Real_Vector_Spaces.linear_def real_vector.linear_scale) + hence \lim (\ n. (f n) (r *\<^sub>R x)) = lim (\ n. r *\<^sub>R (f n) x)\ + by simp + have \convergent (\ n. (f n) x)\ + by (metis assms(2) convergentI pointwise_convergent_to_def) + moreover have \isCont (\ t::'b. r *\<^sub>R t) tt\ for tt + by (simp add: bounded_linear_scaleR_right) + ultimately have \lim (\ n. r *\<^sub>R ((f n) x)) = r *\<^sub>R lim (\ n. (f n) x)\ + using \f \pointwise\ F\ unfolding pointwise_convergent_to_def + by (metis (mono_tags) isCont_tendsto_compose limI) + hence \lim (\ n. (f n) (r *\<^sub>R x)) = r *\<^sub>R lim (\ n. (f n) x)\ + using \lim (\ n. (f n) (r *\<^sub>R x)) = lim (\ n. r *\<^sub>R (f n) x)\ by simp + moreover have \(\ n. (f n) x) \ F x\ + using \f \pointwise\ F\ unfolding pointwise_convergent_to_def by blast + moreover have \(\ n. (f n) (r *\<^sub>R x)) \ F (r *\<^sub>R x)\ + using \f \pointwise\ F\ unfolding pointwise_convergent_to_def by blast + ultimately show ?thesis + by (metis limI) + qed +qed + + +lemma non_Cauchy_unbounded: + fixes a ::\_ \ real\ + assumes \\n. a n \ 0\ and \e > 0\ + and \\M. \m. \n. m \ M \ n \ M \ m > n \ sum a {Suc n..m} \ e\ + shows \(\n. (sum a {0..n})) \ \\ + text\ + Explanation: If the sequence of partial sums of nonnegative terms is not Cauchy, then it converges + to infinite. +\ +proof- + define S::"ereal set" where \S = range (\n. sum a {0..n})\ + have \\s\S. 
k*e \ s\ for k::nat + proof(induction k) + case 0 + from \\M. \m. \n. m \ M \ n \ M \ m > n \ sum a {Suc n..m} \ e\ + obtain m n where \m \ 0\ and \n \ 0\ and \m > n\ and \sum a {Suc n..m} \ e\ by blast + have \n < Suc n\ + by simp + hence \{0..n} \ {Suc n..m} = {0..m}\ + using Set_Interval.ivl_disj_un(7) \n < m\ by auto + moreover have \finite {0..n}\ + by simp + moreover have \finite {Suc n..m}\ + by simp + moreover have \{0..n} \ {Suc n..m} = {}\ + by simp + ultimately have \sum a {0..n} + sum a {Suc n..m} = sum a {0..m}\ + by (metis sum.union_disjoint) + moreover have \sum a {Suc n..m} > 0\ + using \e > 0\ \sum a {Suc n..m} \ e\ by linarith + moreover have \sum a {0..n} \ 0\ + by (simp add: assms(1) sum_nonneg) + ultimately have \sum a {0..m} > 0\ + by linarith + moreover have \sum a {0..m} \ S\ + unfolding S_def by blast + ultimately have \\s\S. 0 \ s\ + using ereal_less_eq(5) by fastforce + thus ?case + by (simp add: zero_ereal_def) + next + case (Suc k) + assume \\s\S. k*e \ s\ + then obtain s where \s\S\ and \ereal (k * e) \ s\ + by blast + have \\N. s = sum a {0..N}\ + using \s\S\ unfolding S_def by blast + then obtain N where \s = sum a {0..N}\ + by blast + from \\M. \m. \n. m \ M \ n \ M \ m > n \ sum a {Suc n..m} \ e\ + obtain m n where \m \ Suc N\ and \n \ Suc N\ and \m > n\ and \sum a {Suc n..m} \ e\ + by blast + have \finite {Suc N..n}\ + by simp + moreover have \finite {Suc n..m}\ + by simp + moreover have \{Suc N..n} \ {Suc n..m} = {Suc N..m}\ + using Set_Interval.ivl_disj_un + by (smt \Suc N \ n\ \n < m\ atLeastSucAtMost_greaterThanAtMost less_imp_le_nat) + moreover have \{} = {Suc N..n} \ {Suc n..m}\ + by simp + ultimately have \sum a {Suc N..m} = sum a {Suc N..n} + sum a {Suc n..m}\ + by (metis sum.union_disjoint) + moreover have \sum a {Suc N..n} \ 0\ + using \\n. a n \ 0\ by (simp add: sum_nonneg) + ultimately have \sum a {Suc N..m} \ e\ + using \e \ sum a {Suc n..m}\ by linarith + have \finite {0..N}\ + by simp + have \finite {Suc N..m}\ + by simp + moreover have \{0..N} \ {Suc N..m} = {0..m}\ + using Set_Interval.ivl_disj_un(7) \Suc N \ m\ by auto + moreover have \{0..N} \ {Suc N..m} = {}\ + by simp + ultimately have \sum a {0..N} + sum a {Suc N..m} = sum a {0..m}\ + by (metis \finite {0..N}\ sum.union_disjoint) + hence \e + k * e \ sum a {0..m}\ + using \ereal (real k * e) \ s\ \s = ereal (sum a {0..N})\ \e \ sum a {Suc N..m}\ by auto + moreover have \e + k * e = (Suc k) * e\ + by (simp add: semiring_normalization_rules(3)) + ultimately have \(Suc k) * e \ sum a {0..m}\ + by linarith + hence \ereal ((Suc k) * e) \ sum a {0..m}\ + by auto + moreover have \sum a {0..m}\S\ + unfolding S_def by blast + ultimately show ?case by blast + qed + hence \\s\S. (real n) \ s\ for n + by (meson assms(2) ereal_le_le ex_less_of_nat_mult less_le_not_le) + hence \Sup S = \\ + using Sup_le_iff Sup_subset_mono dual_order.strict_trans1 leD less_PInf_Ex_of_nat subsetI + by metis + hence Sup: \Sup ((range (\ n. (sum a {0..n})))::ereal set) = \\ using S_def + by blast + have \incseq (\n. (sum a {.. + using \\n. a n \ 0\ using Extended_Real.incseq_sumI by auto + hence \incseq (\n. (sum a {..< Suc n}))\ + by (meson incseq_Suc_iff) + hence \incseq (\n. (sum a {0..n})::ereal)\ + using incseq_ereal by (simp add: atLeast0AtMost lessThan_Suc_atMost) + hence \(\n. sum a {0..n}) \ Sup (range (\n. (sum a {0..n})::ereal))\ + using LIMSEQ_SUP by auto + thus ?thesis using Sup PInfty_neq_ereal by auto +qed + +lemma sum_Cauchy_positive: + fixes a ::\_ \ real\ + assumes \\n. a n \ 0\ and \\K. \n. 
(sum a {0..n}) \ K\ + shows \Cauchy (\n. sum a {0..n})\ + text\ + Explanation: If a series of nonnegative reals is bounded, then the series is + Cauchy. +\ +proof (unfold Cauchy_altdef2, rule, rule) + fix e::real + assume \e>0\ + have \\M. \m\M. \n\M. m > n \ sum a {Suc n..m} < e\ + proof(rule classical) + assume \\(\M. \m\M. \n\M. m > n \ sum a {Suc n..m} < e)\ + hence \\M. \m. \n. m \ M \ n \ M \ m > n \ \(sum a {Suc n..m} < e)\ + by blast + hence \\M. \m. \n. m \ M \ n \ M \ m > n \ sum a {Suc n..m} \ e\ + by fastforce + hence \(\n. (sum a {0..n}) ) \ \\ + using non_Cauchy_unbounded \0 < e\ assms(1) by blast + from \\K. \n. sum a {0..n} \ K\ + obtain K where \\n. sum a {0..n} \ K\ + by blast + from \(\n. sum a {0..n}) \ \\ + have \\B. \N. \n\N. (\ n. (sum a {0..n}) ) n \ B\ + using Lim_PInfty by simp + hence \\n. (sum a {0..n}) \ K+1\ + using ereal_less_eq(3) by blast + thus ?thesis using \\n. (sum a {0..n}) \ K\ by smt + qed + have \sum a {Suc n..m} = sum a {0..m} - sum a {0..n}\ + if "m > n" for m n + apply (simp add: that atLeast0AtMost) using sum_up_index_split + by (smt less_imp_add_positive that) + hence \\M. \m\M. \n\M. m > n \ sum a {0..m} - sum a {0..n} < e\ + using \\M. \m\M. \n\M. m > n \ sum a {Suc n..m} < e\ by smt + from \\M. \m\M. \n\M. m > n \ sum a {0..m} - sum a {0..n} < e\ + obtain M where \\m\M. \n\M. m > n \ sum a {0..m} - sum a {0..n} < e\ + by blast + moreover have \m > n \ sum a {0..m} \ sum a {0..n}\ for m n + using \\ n. a n \ 0\ by (simp add: sum_mono2) + ultimately have \\M. \m\M. \n\M. m > n \ \sum a {0..m} - sum a {0..n}\ < e\ + by auto + hence \\M. \m\M. \n\M. m \ n \ \sum a {0..m} - sum a {0..n}\ < e\ + by (metis \0 < e\ abs_zero cancel_comm_monoid_add_class.diff_cancel diff_is_0_eq' + less_irrefl_nat linorder_neqE_nat zero_less_diff) + hence \\M. \m\M. \n\M. \sum a {0..m} - sum a {0..n}\ < e\ + by (metis abs_minus_commute nat_le_linear) + hence \\M. \m\M. \n\M. dist (sum a {0..m}) (sum a {0..n}) < e\ + by (simp add: dist_real_def) + hence \\M. \m\M. \n\M. dist (sum a {0..m}) (sum a {0..n}) < e\ by blast + thus \\N. \n\N. dist (sum a {0..n}) (sum a {0..N}) < e\ by auto +qed + +lemma convergent_series_Cauchy: + fixes a::\nat \ real\ and \::\nat \ 'a::metric_space\ + assumes \\M. \n. sum a {0..n} \ M\ and \\n. dist (\ (Suc n)) (\ n) \ a n\ + shows \Cauchy \\ + text\ + Explanation: Let \<^term>\a\ be a real-valued sequence and let \<^term>\\\ be sequence in a metric space. + If the partial sums of \<^term>\a\ are uniformly bounded and the distance between consecutive terms of \<^term>\\\ + are bounded by the sequence \<^term>\a\, then \<^term>\\\ is Cauchy.\ +proof (unfold Cauchy_altdef2, rule, rule) + fix e::real + assume \e > 0\ + have \\k. a k \ 0\ + using \\n. dist (\ (Suc n)) (\ n) \ a n\ dual_order.trans zero_le_dist by blast + hence \Cauchy (\k. sum a {0..k})\ + using \\M. \n. sum a {0..n} \ M\ sum_Cauchy_positive by blast + hence \\M. \m\M. \n\M. dist (sum a {0..m}) (sum a {0..n}) < e\ + unfolding Cauchy_def using \e > 0\ by blast + hence \\M. \m\M. \n\M. 
m > n \ dist (sum a {0..m}) (sum a {0..n}) < e\ + by blast + have \dist (sum a {0..m}) (sum a {0..n}) = sum a {Suc n..m}\ if \n for m n + proof - + have \n < Suc n\ + by simp + have \finite {0..n}\ + by simp + moreover have \finite {Suc n..m}\ + by simp + moreover have \{0..n} \ {Suc n..m} = {0..m}\ + using \n < Suc n\ \n < m\ by auto + moreover have \{0..n} \ {Suc n..m} = {}\ + by simp + ultimately have sum_plus: \(sum a {0..n}) + sum a {Suc n..m} = (sum a {0..m})\ + by (metis sum.union_disjoint) + have \dist (sum a {0..m}) (sum a {0..n}) = \(sum a {0..m}) - (sum a {0..n})\\ + using dist_real_def by blast + moreover have \(sum a {0..m}) - (sum a {0..n}) = sum a {Suc n..m}\ + using sum_plus by linarith + ultimately show ?thesis + by (simp add: \\k. 0 \ a k\ sum_nonneg) + qed + hence sum_a: \\M. \m\M. \n\M. m > n \ sum a {Suc n..m} < e\ + by (metis \\M. \m\M. \n\M. dist (sum a {0..m}) (sum a {0..n}) < e\) + obtain M where \\m\M. \n\M. m > n \ sum a {Suc n..m} < e\ + using sum_a \e > 0\ by blast + hence \\m. \n. Suc m \ Suc M \ Suc n \ Suc M \ Suc m > Suc n \ sum a {Suc n..Suc m - 1} < e\ + by simp + hence \\m\1. \n\1. m \ Suc M \ n \ Suc M \ m > n \ sum a {n..m - 1} < e\ + by (metis Suc_le_D) + hence sum_a2: \\M. \m\M. \n\M. m > n \ sum a {n..m-1} < e\ + by (meson add_leE) + have \dist (\ (n+p+1)) (\ n) \ sum a {n..n+p}\ for p n :: nat + proof(induction p) + case 0 thus ?case by (simp add: assms(2)) + next + case (Suc p) thus ?case + by (smt Suc_eq_plus1 add_Suc_right add_less_same_cancel1 assms(2) dist_self dist_triangle2 + gr_implies_not0 sum.cl_ivl_Suc) + qed + hence \m > n \ dist (\ m) (\ n) \ sum a {n..m-1}\ for m n :: nat + by (metis Suc_eq_plus1 Suc_le_D diff_Suc_1 gr0_implies_Suc less_eq_Suc_le less_imp_Suc_add + zero_less_Suc) + hence \\M. \m\M. \n\M. m > n \ dist (\ m) (\ n) < e\ + using sum_a2 \e > 0\ by smt + thus "\N. \n\N. dist (\ n) (\ N) < e" + using \0 < e\ by fastforce +qed + +unbundle notation_blinfun_apply + +unbundle no_notation_norm + +end diff --git a/thys/Banach_Steinhaus/ROOT b/thys/Banach_Steinhaus/ROOT new file mode 100644 --- /dev/null +++ b/thys/Banach_Steinhaus/ROOT @@ -0,0 +1,13 @@ +chapter AFP + +session Banach_Steinhaus (AFP) = "HOL-Analysis" + + options [timeout = 300] + sessions + "HOL-Types_To_Sets" + "HOL-ex" + theories + Banach_Steinhaus + Banach_Steinhaus_Missing + document_files + "root.tex" + "root.bib" diff --git a/thys/Banach_Steinhaus/document/root.bib b/thys/Banach_Steinhaus/document/root.bib new file mode 100644 --- /dev/null +++ b/thys/Banach_Steinhaus/document/root.bib @@ -0,0 +1,26 @@ +@article{banach1927principe, + title={Sur le principe de la condensation de singularit{\'e}s}, + author={Banach, Stefan and Steinhaus, Hugo}, + journal={Fundamenta Mathematicae}, + volume={1}, + number={9}, + pages={50--61}, + year={1927} +} + +@article{sokal2011really, + title={A really simple elementary proof of the uniform boundedness theorem}, + author={Sokal, Alan D}, + journal={The American Mathematical Monthly}, + volume={118}, + number={5}, + pages={450--452}, + year={2011}, + publisher={Taylor \& Francis} +} + +@article{Weisstein_UBP, + title={Uniform Boundedness Principle}, + author={Moslehian, Mohammad Sal and Weisstein, Eric W.}, + journal={From MathWorld--A Wolfram Web Resource. 
http://mathworld.wolfram.com/UniformBoundednessPrinciple.html} +} diff --git a/thys/Banach_Steinhaus/document/root.tex b/thys/Banach_Steinhaus/document/root.tex new file mode 100644 --- /dev/null +++ b/thys/Banach_Steinhaus/document/root.tex @@ -0,0 +1,30 @@ +\documentclass[11pt,a4paper]{article} +\usepackage{isabelle,isabellesym} +\usepackage{amsmath,amssymb} + +%this should be the last package used +\usepackage{pdfsetup} + +% urls in roman style, theory text in math-similar italics +\urlstyle{rm} +\isabellestyle{it} + +\begin{document} + +\title{Banach-Steinhaus theorem} +\author{Dominique Unruh \and Jos\'e Manuel Rodr\'iguez Caballero} +\maketitle + +\begin{abstract} +We formalize in Isabelle/HOL a result \cite{Weisstein_UBP} due to S. Banach and H. Steinhaus \cite{banach1927principe} known as Banach-Steinhaus theorem or Uniform boundedness principle: a pointwise-bounded family of continuous linear operators from a Banach space to a normed space is uniformly bounded. Our approach is an adaptation to Isabelle/HOL of a proof due to A. Sokal \cite{sokal2011really}. +\end{abstract} + +\tableofcontents + +\input{session} + + +\bibliographystyle{abbrv} +\bibliography{root} + +\end{document} diff --git a/thys/Forcing/Arities.thy b/thys/Forcing/Arities.thy new file mode 100644 --- /dev/null +++ b/thys/Forcing/Arities.thy @@ -0,0 +1,364 @@ +section\Arities of internalized formulas\ +theory Arities + imports FrecR +begin + +lemma arity_upair_fm : "\ t1\nat ; t2\nat ; up\nat \ \ + arity(upair_fm(t1,t2,up)) = \ {succ(t1),succ(t2),succ(up)}" + unfolding upair_fm_def + using nat_union_abs1 nat_union_abs2 pred_Un + by auto + + +lemma arity_pair_fm : "\ t1\nat ; t2\nat ; p\nat \ \ + arity(pair_fm(t1,t2,p)) = \ {succ(t1),succ(t2),succ(p)}" + unfolding pair_fm_def + using arity_upair_fm nat_union_abs1 nat_union_abs2 pred_Un + by auto + +lemma arity_composition_fm : + "\ r\nat ; s\nat ; t\nat \ \ arity(composition_fm(r,s,t)) = \ {succ(r), succ(s), succ(t)}" + unfolding composition_fm_def + using arity_pair_fm nat_union_abs1 nat_union_abs2 pred_Un_distrib + by auto + +lemma arity_domain_fm : + "\ r\nat ; z\nat \ \ arity(domain_fm(r,z)) = succ(r) \ succ(z)" + unfolding domain_fm_def + using arity_pair_fm nat_union_abs1 nat_union_abs2 pred_Un_distrib + by auto + +lemma arity_range_fm : + "\ r\nat ; z\nat \ \ arity(range_fm(r,z)) = succ(r) \ succ(z)" + unfolding range_fm_def + using arity_pair_fm nat_union_abs1 nat_union_abs2 pred_Un_distrib + by auto + +lemma arity_union_fm : + "\ x\nat ; y\nat ; z\nat \ \ arity(union_fm(x,y,z)) = \ {succ(x), succ(y), succ(z)}" + unfolding union_fm_def + using nat_union_abs1 nat_union_abs2 pred_Un_distrib + by auto + +lemma arity_image_fm : + "\ x\nat ; y\nat ; z\nat \ \ arity(image_fm(x,y,z)) = \ {succ(x), succ(y), succ(z)}" + unfolding image_fm_def + using arity_pair_fm nat_union_abs1 nat_union_abs2 pred_Un_distrib + by auto + +lemma arity_pre_image_fm : + "\ x\nat ; y\nat ; z\nat \ \ arity(pre_image_fm(x,y,z)) = \ {succ(x), succ(y), succ(z)}" + unfolding pre_image_fm_def + using arity_pair_fm nat_union_abs1 nat_union_abs2 pred_Un_distrib + by auto + + +lemma arity_big_union_fm : + "\ x\nat ; y\nat \ \ arity(big_union_fm(x,y)) = succ(x) \ succ(y)" + unfolding big_union_fm_def + using nat_union_abs1 nat_union_abs2 pred_Un_distrib + by auto + +lemma arity_fun_apply_fm : + "\ x\nat ; y\nat ; f\nat \ \ + arity(fun_apply_fm(f,x,y)) = succ(f) \ succ(x) \ succ(y)" + unfolding fun_apply_fm_def + using arity_upair_fm arity_image_fm arity_big_union_fm nat_union_abs2 pred_Un_distrib 
+ by auto + +lemma arity_field_fm : + "\ r\nat ; z\nat \ \ arity(field_fm(r,z)) = succ(r) \ succ(z)" + unfolding field_fm_def + using arity_pair_fm arity_domain_fm arity_range_fm arity_union_fm + nat_union_abs1 nat_union_abs2 pred_Un_distrib + by auto + +lemma arity_empty_fm : + "\ r\nat \ \ arity(empty_fm(r)) = succ(r)" + unfolding empty_fm_def + using nat_union_abs1 nat_union_abs2 pred_Un_distrib + by simp + +lemma arity_succ_fm : + "\x\nat;y\nat\ \ arity(succ_fm(x,y)) = succ(x) \ succ(y)" + unfolding succ_fm_def cons_fm_def + using arity_upair_fm arity_union_fm nat_union_abs2 pred_Un_distrib + by auto + + +lemma number1arity__fm : + "\ r\nat \ \ arity(number1_fm(r)) = succ(r)" + unfolding number1_fm_def + using arity_empty_fm arity_succ_fm nat_union_abs1 nat_union_abs2 pred_Un_distrib + by simp + + +lemma arity_function_fm : + "\ r\nat \ \ arity(function_fm(r)) = succ(r)" + unfolding function_fm_def + using arity_pair_fm nat_union_abs1 nat_union_abs2 pred_Un_distrib + by simp + +lemma arity_relation_fm : + "\ r\nat \ \ arity(relation_fm(r)) = succ(r)" + unfolding relation_fm_def + using arity_pair_fm nat_union_abs1 nat_union_abs2 pred_Un_distrib + by simp + +lemma arity_restriction_fm : + "\ r\nat ; z\nat ; A\nat \ \ arity(restriction_fm(A,z,r)) = succ(A) \ succ(r) \ succ(z)" + unfolding restriction_fm_def + using arity_pair_fm nat_union_abs2 pred_Un_distrib + by auto + +lemma arity_typed_function_fm : + "\ x\nat ; y\nat ; f\nat \ \ + arity(typed_function_fm(f,x,y)) = \ {succ(f), succ(x), succ(y)}" + unfolding typed_function_fm_def + using arity_pair_fm arity_relation_fm arity_function_fm arity_domain_fm + nat_union_abs2 pred_Un_distrib + by auto + + +lemma arity_subset_fm : + "\x\nat ; y\nat\ \ arity(subset_fm(x,y)) = succ(x) \ succ(y)" + unfolding subset_fm_def + using nat_union_abs2 pred_Un_distrib + by auto + +lemma arity_transset_fm : + "\x\nat\ \ arity(transset_fm(x)) = succ(x)" + unfolding transset_fm_def + using arity_subset_fm nat_union_abs2 pred_Un_distrib + by auto + +lemma arity_ordinal_fm : + "\x\nat\ \ arity(ordinal_fm(x)) = succ(x)" + unfolding ordinal_fm_def + using arity_transset_fm nat_union_abs2 pred_Un_distrib + by auto + +lemma arity_limit_ordinal_fm : + "\x\nat\ \ arity(limit_ordinal_fm(x)) = succ(x)" + unfolding limit_ordinal_fm_def + using arity_ordinal_fm arity_succ_fm arity_empty_fm nat_union_abs2 pred_Un_distrib + by auto + +lemma arity_finite_ordinal_fm : + "\x\nat\ \ arity(finite_ordinal_fm(x)) = succ(x)" + unfolding finite_ordinal_fm_def + using arity_ordinal_fm arity_limit_ordinal_fm arity_succ_fm arity_empty_fm + nat_union_abs2 pred_Un_distrib + by auto + +lemma arity_omega_fm : + "\x\nat\ \ arity(omega_fm(x)) = succ(x)" + unfolding omega_fm_def + using arity_limit_ordinal_fm nat_union_abs2 pred_Un_distrib + by auto + +lemma arity_cartprod_fm : + "\ A\nat ; B\nat ; z\nat \ \ arity(cartprod_fm(A,B,z)) = succ(A) \ succ(B) \ succ(z)" + unfolding cartprod_fm_def + using arity_pair_fm nat_union_abs2 pred_Un_distrib + by auto + +lemma arity_fst_fm : + "\x\nat ; t\nat\ \ arity(fst_fm(x,t)) = succ(x) \ succ(t)" + unfolding fst_fm_def + using arity_pair_fm arity_empty_fm nat_union_abs2 pred_Un_distrib + by auto + +lemma arity_snd_fm : + "\x\nat ; t\nat\ \ arity(snd_fm(x,t)) = succ(x) \ succ(t)" + unfolding snd_fm_def + using arity_pair_fm arity_empty_fm nat_union_abs2 pred_Un_distrib + by auto + +lemma arity_snd_snd_fm : + "\x\nat ; t\nat\ \ arity(snd_snd_fm(x,t)) = succ(x) \ succ(t)" + unfolding snd_snd_fm_def hcomp_fm_def + using arity_snd_fm arity_empty_fm 
nat_union_abs2 pred_Un_distrib + by auto + +lemma arity_ftype_fm : + "\x\nat ; t\nat\ \ arity(ftype_fm(x,t)) = succ(x) \ succ(t)" + unfolding ftype_fm_def + using arity_fst_fm + by auto + +lemma name1arity__fm : + "\x\nat ; t\nat\ \ arity(name1_fm(x,t)) = succ(x) \ succ(t)" + unfolding name1_fm_def hcomp_fm_def + using arity_fst_fm arity_snd_fm nat_union_abs2 pred_Un_distrib + by auto + +lemma name2arity__fm : + "\x\nat ; t\nat\ \ arity(name2_fm(x,t)) = succ(x) \ succ(t)" + unfolding name2_fm_def hcomp_fm_def + using arity_fst_fm arity_snd_snd_fm nat_union_abs2 pred_Un_distrib + by auto + +lemma arity_cond_of_fm : + "\x\nat ; t\nat\ \ arity(cond_of_fm(x,t)) = succ(x) \ succ(t)" + unfolding cond_of_fm_def hcomp_fm_def + using arity_snd_fm arity_snd_snd_fm nat_union_abs2 pred_Un_distrib + by auto + +lemma arity_singleton_fm : + "\x\nat ; t\nat\ \ arity(singleton_fm(x,t)) = succ(x) \ succ(t)" + unfolding singleton_fm_def cons_fm_def + using arity_union_fm arity_upair_fm arity_empty_fm nat_union_abs2 pred_Un_distrib + by auto + +lemma arity_Memrel_fm : + "\x\nat ; t\nat\ \ arity(Memrel_fm(x,t)) = succ(x) \ succ(t)" + unfolding Memrel_fm_def + using arity_pair_fm nat_union_abs2 pred_Un_distrib + by auto + +lemma arity_quasinat_fm : + "\x\nat\ \ arity(quasinat_fm(x)) = succ(x)" + unfolding quasinat_fm_def cons_fm_def + using arity_succ_fm arity_empty_fm + nat_union_abs2 pred_Un_distrib + by auto + +lemma arity_is_recfun_fm : + "\p\formula ; v\nat ; n\nat; Z\nat;i\nat\ \ arity(p) = i \ + arity(is_recfun_fm(p,v,n,Z)) = succ(v) \ succ(n) \ succ(Z) \ pred(pred(pred(pred(i))))" + unfolding is_recfun_fm_def + using arity_upair_fm arity_pair_fm arity_pre_image_fm arity_restriction_fm + nat_union_abs2 pred_Un_distrib + by auto + +lemma arity_is_wfrec_fm : + "\p\formula ; v\nat ; n\nat; Z\nat ; i\nat\ \ arity(p) = i \ + arity(is_wfrec_fm(p,v,n,Z)) = succ(v) \ succ(n) \ succ(Z) \ pred(pred(pred(pred(pred(i)))))" + unfolding is_wfrec_fm_def + using arity_succ_fm arity_is_recfun_fm + nat_union_abs2 pred_Un_distrib + by auto + +lemma arity_is_nat_case_fm : + "\p\formula ; v\nat ; n\nat; Z\nat; i\nat\ \ arity(p) = i \ + arity(is_nat_case_fm(v,p,n,Z)) = succ(v) \ succ(n) \ succ(Z) \ pred(pred(i))" + unfolding is_nat_case_fm_def + using arity_succ_fm arity_empty_fm arity_quasinat_fm + nat_union_abs2 pred_Un_distrib + by auto + +lemma arity_iterates_MH_fm : + assumes "isF\formula" "v\nat" "n\nat" "g\nat" "z\nat" "i\nat" + "arity(isF) = i" + shows "arity(iterates_MH_fm(isF,v,n,g,z)) = + succ(v) \ succ(n) \ succ(g) \ succ(z) \ pred(pred(pred(pred(i))))" +proof - + let ?\ = "Exists(And(fun_apply_fm(succ(succ(succ(g))), 2, 0), Forall(Implies(Equal(0, 2), isF))))" + let ?ar = "succ(succ(succ(g))) \ pred(pred(i))" + from assms + have "arity(?\) =?ar" "?\\formula" + using arity_fun_apply_fm + nat_union_abs1 nat_union_abs2 pred_Un_distrib succ_Un_distrib Un_assoc[symmetric] + by simp_all + then + show ?thesis + unfolding iterates_MH_fm_def + using arity_is_nat_case_fm[OF \?\\_\ _ _ _ _ \arity(?\) = _\] assms pred_succ_eq pred_Un_distrib + by auto +qed + +lemma arity_is_iterates_fm : + assumes "p\formula" "v\nat" "n\nat" "Z\nat" "i\nat" + "arity(p) = i" + shows "arity(is_iterates_fm(p,v,n,Z)) = succ(v) \ succ(n) \ succ(Z) \ + pred(pred(pred(pred(pred(pred(pred(pred(pred(pred(pred(i)))))))))))" +proof - + let ?\ = "iterates_MH_fm(p, 7#+v, 2, 1, 0)" + let ?\ = "is_wfrec_fm(?\, 0, succ(succ(n)),succ(succ(Z)))" + from \v\_\ + have "arity(?\) = (8#+v) \ pred(pred(pred(pred(i))))" "?\\formula" + using assms 
arity_iterates_MH_fm nat_union_abs2 + by simp_all + then + have "arity(?\) = succ(succ(succ(n))) \ succ(succ(succ(Z))) \ (3#+v) \ + pred(pred(pred(pred(pred(pred(pred(pred(pred(i)))))))))" + using assms arity_is_wfrec_fm[OF \?\\_\ _ _ _ _ \arity(?\) = _\] nat_union_abs1 pred_Un_distrib + by auto + then + show ?thesis + unfolding is_iterates_fm_def + using arity_Memrel_fm arity_succ_fm assms nat_union_abs1 pred_Un_distrib + by auto +qed + +lemma arity_eclose_n_fm : + assumes "A\nat" "x\nat" "t\nat" + shows "arity(eclose_n_fm(A,x,t)) = succ(A) \ succ(x) \ succ(t)" +proof - + let ?\ = "big_union_fm(1,0)" + have "arity(?\) = 2" "?\\formula" + using arity_big_union_fm nat_union_abs2 + by simp_all + with assms + show ?thesis + unfolding eclose_n_fm_def + using arity_is_iterates_fm[OF \?\\_\ _ _ _,of _ _ _ 2] + by auto +qed + +lemma arity_mem_eclose_fm : + assumes "x\nat" "t\nat" + shows "arity(mem_eclose_fm(x,t)) = succ(x) \ succ(t)" +proof - + let ?\="eclose_n_fm(x #+ 2, 1, 0)" + from \x\nat\ + have "arity(?\) = x#+3" + using arity_eclose_n_fm nat_union_abs2 + by simp + with assms + show ?thesis + unfolding mem_eclose_fm_def + using arity_finite_ordinal_fm nat_union_abs2 pred_Un_distrib + by simp +qed + +lemma arity_is_eclose_fm : + "\x\nat ; t\nat\ \ arity(is_eclose_fm(x,t)) = succ(x) \ succ(t)" + unfolding is_eclose_fm_def + using arity_mem_eclose_fm nat_union_abs2 pred_Un_distrib + by auto + +lemma eclose_n1arity__fm : + "\x\nat ; t\nat\ \ arity(eclose_n1_fm(x,t)) = succ(x) \ succ(t)" + unfolding eclose_n1_fm_def + using arity_is_eclose_fm arity_singleton_fm name1arity__fm nat_union_abs2 pred_Un_distrib + by auto + +lemma eclose_n2arity__fm : + "\x\nat ; t\nat\ \ arity(eclose_n2_fm(x,t)) = succ(x) \ succ(t)" + unfolding eclose_n2_fm_def + using arity_is_eclose_fm arity_singleton_fm name2arity__fm nat_union_abs2 pred_Un_distrib + by auto + +lemma arity_ecloseN_fm : + "\x\nat ; t\nat\ \ arity(ecloseN_fm(x,t)) = succ(x) \ succ(t)" + unfolding ecloseN_fm_def + using eclose_n1arity__fm eclose_n2arity__fm arity_union_fm nat_union_abs2 pred_Un_distrib + by auto + +lemma arity_frecR_fm : + "\a\nat;b\nat\ \ arity(frecR_fm(a,b)) = succ(a) \ succ(b)" + unfolding frecR_fm_def + using arity_ftype_fm name1arity__fm name2arity__fm arity_domain_fm + number1arity__fm arity_empty_fm nat_union_abs2 pred_Un_distrib + by auto + +lemma arity_Collect_fm : + assumes "x \ nat" "y \ nat" "p\formula" + shows "arity(Collect_fm(x,p,y)) = succ(x) \ succ(y) \ pred(arity(p))" + unfolding Collect_fm_def + using assms pred_Un_distrib + by auto + +end \ No newline at end of file diff --git a/thys/Forcing/Choice_Axiom.thy b/thys/Forcing/Choice_Axiom.thy new file mode 100644 --- /dev/null +++ b/thys/Forcing/Choice_Axiom.thy @@ -0,0 +1,385 @@ +section\The Axiom of Choice in $M[G]$\ +theory Choice_Axiom + imports Powerset_Axiom Pairing_Axiom Union_Axiom Extensionality_Axiom + Foundation_Axiom Powerset_Axiom Separation_Axiom + Replacement_Axiom Interface Infinity_Axiom +begin + +definition + induced_surj :: "i\i\i\i" where + "induced_surj(f,a,e) \ f-``(range(f)-a)\{e} \ restrict(f,f-``a)" + +lemma domain_induced_surj: "domain(induced_surj(f,a,e)) = domain(f)" + unfolding induced_surj_def using domain_restrict domain_of_prod by auto + +lemma range_restrict_vimage: + assumes "function(f)" + shows "range(restrict(f,f-``a)) \ a" +proof + from assms + have "function(restrict(f,f-``a))" + using function_restrictI by simp + fix y + assume "y \ range(restrict(f,f-``a))" + then + obtain x where "\x,y\ \ restrict(f,f-``a)" "x \ f-``a" 
"x\domain(f)" + using domain_restrict domainI[of _ _ "restrict(f,f-``a)"] by auto + moreover + note \function(restrict(f,f-``a))\ + ultimately + have "y = restrict(f,f-``a)`x" + using function_apply_equality by blast + also from \x \ f-``a\ + have "restrict(f,f-``a)`x = f`x" + by simp + finally + have "y=f`x" . + moreover from assms \x\domain(f)\ + have "\x,f`x\ \ f" + using function_apply_Pair by auto + moreover + note assms \x \ f-``a\ + ultimately + show "y\a" + using function_image_vimage[of f a] by auto +qed + +lemma induced_surj_type: + assumes + "function(f)" (* "relation(f)" (* a function can contain nonpairs *) *) + shows + "induced_surj(f,a,e): domain(f) \ {e} \ a" + and + "x \ f-``a \ induced_surj(f,a,e)`x = f`x" +proof - + let ?f1="f-``(range(f)-a) \ {e}" and ?f2="restrict(f, f-``a)" + have "domain(?f2) = domain(f) \ f-``a" + using domain_restrict by simp + moreover from assms + have 1: "domain(?f1) = f-``(range(f))-f-``a" + using domain_of_prod function_vimage_Diff by simp + ultimately + have "domain(?f1) \ domain(?f2) = 0" + by auto + moreover + have "function(?f1)" "relation(?f1)" "range(?f1) \ {e}" + unfolding function_def relation_def range_def by auto + moreover from this and assms + have "?f1: domain(?f1) \ range(?f1)" + using function_imp_Pi by simp + moreover from assms + have "?f2: domain(?f2) \ range(?f2)" + using function_imp_Pi[of "restrict(f, f -`` a)"] function_restrictI by simp + moreover from assms + have "range(?f2) \ a" + using range_restrict_vimage by simp + ultimately + have "induced_surj(f,a,e): domain(?f1) \ domain(?f2) \ {e} \ a" + unfolding induced_surj_def using fun_is_function fun_disjoint_Un fun_weaken_type by simp + moreover + have "domain(?f1) \ domain(?f2) = domain(f)" + using domain_restrict domain_of_prod by auto + ultimately + show "induced_surj(f,a,e): domain(f) \ {e} \ a" + by simp + assume "x \ f-``a" + then + have "?f2`x = f`x" + using restrict by simp + moreover from \x \ f-``a\ and 1 + have "x \ domain(?f1)" + by simp + ultimately + show "induced_surj(f,a,e)`x = f`x" + unfolding induced_surj_def using fun_disjoint_apply2[of x ?f1 ?f2] by simp +qed + +lemma induced_surj_is_surj : + assumes + "e\a" "function(f)" "domain(f) = \" "\y. y \ a \ \x\\. f ` x = y" + shows + "induced_surj(f,a,e) \ surj(\,a)" + unfolding surj_def +proof (intro CollectI ballI) + from assms + show "induced_surj(f,a,e): \ \ a" + using induced_surj_type[of f a e] cons_eq cons_absorb by simp + fix y + assume "y \ a" + with assms + have "\x\\. f ` x = y" + by simp + then + obtain x where "x\\" "f ` x = y" by auto + with \y\a\ assms + have "x\f-``a" + using vimage_iff function_apply_Pair[of f x] by auto + with \f ` x = y\ assms + have "induced_surj(f, a, e) ` x = y" + using induced_surj_type by simp + with \x\\\ show + "\x\\. induced_surj(f, a, e) ` x = y" by auto +qed + +context G_generic +begin + +definition + upair_name :: "i \ i \ i" where + "upair_name(\,\) \ {\\,one\,\\,one\}" + +definition + is_upair_name :: "[i,i,i] \ o" where + "is_upair_name(x,y,z) \ \xo\M. \yo\M. 
pair(##M,x,one,xo) \ pair(##M,y,one,yo) \ + upair(##M,xo,yo,z)" + +lemma upair_name_abs : + assumes "x\M" "y\M" "z\M" + shows "is_upair_name(x,y,z) \ z = upair_name(x,y)" + unfolding is_upair_name_def upair_name_def using assms one_in_M pair_in_M_iff by simp + +lemma upair_name_closed : + "\ x\M; y\M \ \ upair_name(x,y)\M" + unfolding upair_name_def using upair_in_M_iff pair_in_M_iff one_in_M by simp + +definition + upair_name_fm :: "[i,i,i,i] \ i" where + "upair_name_fm(x,y,o,z) \ Exists(Exists(And(pair_fm(x#+2,o#+2,1), + And(pair_fm(y#+2,o#+2,0),upair_fm(1,0,z#+2)))))" + +lemma upair_name_fm_type[TC] : + "\ s\nat;x\nat;y\nat;o\nat\ \ upair_name_fm(s,x,y,o)\formula" + unfolding upair_name_fm_def by simp + +lemma sats_upair_name_fm : + assumes "x\nat" "y\nat" "z\nat" "o\nat" "env\list(M)""nth(o,env)=one" + shows + "sats(M,upair_name_fm(x,y,o,z),env) \ is_upair_name(nth(x,env),nth(y,env),nth(z,env))" + unfolding upair_name_fm_def is_upair_name_def using assms by simp + +definition + opair_name :: "i \ i \ i" where + "opair_name(\,\) \ upair_name(upair_name(\,\),upair_name(\,\))" + +definition + is_opair_name :: "[i,i,i] \ o" where + "is_opair_name(x,y,z) \ \upxx\M. \upxy\M. is_upair_name(x,x,upxx) \ is_upair_name(x,y,upxy) + \ is_upair_name(upxx,upxy,z)" + +lemma opair_name_abs : + assumes "x\M" "y\M" "z\M" + shows "is_opair_name(x,y,z) \ z = opair_name(x,y)" + unfolding is_opair_name_def opair_name_def using assms upair_name_abs upair_name_closed by simp + +lemma opair_name_closed : + "\ x\M; y\M \ \ opair_name(x,y)\M" + unfolding opair_name_def using upair_name_closed by simp + +definition + opair_name_fm :: "[i,i,i,i] \ i" where + "opair_name_fm(x,y,o,z) \ Exists(Exists(And(upair_name_fm(x#+2,x#+2,o#+2,1), + And(upair_name_fm(x#+2,y#+2,o#+2,0),upair_name_fm(1,0,o#+2,z#+2)))))" + +lemma opair_name_fm_type[TC] : + "\ s\nat;x\nat;y\nat;o\nat\ \ opair_name_fm(s,x,y,o)\formula" + unfolding opair_name_fm_def by simp + +lemma sats_opair_name_fm : + assumes "x\nat" "y\nat" "z\nat" "o\nat" "env\list(M)""nth(o,env)=one" + shows + "sats(M,opair_name_fm(x,y,o,z),env) \ is_opair_name(nth(x,env),nth(y,env),nth(z,env))" + unfolding opair_name_fm_def is_opair_name_def using assms sats_upair_name_fm by simp + +lemma val_upair_name : "val(G,upair_name(\,\)) = {val(G,\),val(G,\)}" + unfolding upair_name_def using val_Upair generic one_in_G one_in_P by simp + +lemma val_opair_name : "val(G,opair_name(\,\)) = \val(G,\),val(G,\)\" + unfolding opair_name_def Pair_def using val_upair_name by simp + +lemma val_RepFun_one: "val(G,{\f(x),one\ . x\a}) = {val(G,f(x)) . x\a}" +proof - + let ?A = "{f(x) . x \ a}" + let ?Q = "\\x,p\ . p = one" + have "one \ P\G" using generic one_in_G one_in_P by simp + have "{\f(x),one\ . x \ a} = {t \ ?A \ P . ?Q(t)}" + using one_in_P by force + then + have "val(G,{\f(x),one\ . x \ a}) = val(G,{t \ ?A \ P . ?Q(t)})" + by simp + also + have "... = {val(G,t) .. t \ ?A , \p\P\G . ?Q(\t,p\)}" + using val_of_name_alt by simp + also + have "... = {val(G,t) . t \ ?A }" + using \one\P\G\ by force + also + have "... = {val(G,f(x)) . x \ a}" + by auto + finally show ?thesis by simp +qed + +subsection\$M[G]$ is a transitive model of ZF\ + +interpretation mgzf: M_ZF_trans "M[G]" + using Transset_MG generic pairing_in_MG Union_MG + extensionality_in_MG power_in_MG foundation_in_MG + strong_replacement_in_MG separation_in_MG infinity_in_MG + by unfold_locales simp_all + +(* y = opair_name(check(\),s`\) *) +definition + is_opname_check :: "[i,i,i] \ o" where + "is_opname_check(s,x,y) \ \chx\M. 
\sx\M. is_check(x,chx) \ fun_apply(##M,s,x,sx) \ + is_opair_name(chx,sx,y)" + +definition + opname_check_fm :: "[i,i,i,i] \ i" where + "opname_check_fm(s,x,y,o) \ Exists(Exists(And(check_fm(2#+x,2#+o,1), + And(fun_apply_fm(2#+s,2#+x,0),opair_name_fm(1,0,2#+o,2#+y)))))" + +lemma opname_check_fm_type[TC] : + "\ s\nat;x\nat;y\nat;o\nat\ \ opname_check_fm(s,x,y,o)\formula" + unfolding opname_check_fm_def by simp + +lemma sats_opname_check_fm: + assumes "x\nat" "y\nat" "z\nat" "o\nat" "env\list(M)" "nth(o,env)=one" + "y is_opname_check(nth(x,env),nth(y,env),nth(z,env))" + unfolding opname_check_fm_def is_opname_check_def + using assms sats_check_fm sats_opair_name_fm one_in_M by simp + + +lemma opname_check_abs : + assumes "s\M" "x\M" "y\M" + shows "is_opname_check(s,x,y) \ y = opair_name(check(x),s`x)" + unfolding is_opname_check_def + using assms check_abs check_in_M opair_name_abs apply_abs apply_closed by simp + +lemma repl_opname_check : + assumes + "A\M" "f\M" + shows + "{opair_name(check(x),f`x). x\A}\M" +proof - + have "arity(opname_check_fm(3,0,1,2))= 4" + unfolding opname_check_fm_def opair_name_fm_def upair_name_fm_def + check_fm_def rcheck_fm_def trans_closure_fm_def is_eclose_fm_def mem_eclose_fm_def + is_Hcheck_fm_def Replace_fm_def PHcheck_fm_def finite_ordinal_fm_def is_iterates_fm_def + is_wfrec_fm_def is_recfun_fm_def restriction_fm_def pre_image_fm_def eclose_n_fm_def + is_nat_case_fm_def quasinat_fm_def Memrel_fm_def singleton_fm_def fm_defs iterates_MH_fm_def + by (simp add:nat_simp_union) + moreover + have "x\A \ opair_name(check(x), f ` x)\M" for x + using assms opair_name_closed apply_closed transitivity check_in_M + by simp + ultimately + show ?thesis using assms opname_check_abs[of f] sats_opname_check_fm + one_in_M + Repl_in_M[of "opname_check_fm(3,0,1,2)" "[one,f]" "is_opname_check(f)" + "\x. opair_name(check(x),f`x)"] + by simp +qed + + + +theorem choice_in_MG: + assumes "choice_ax(##M)" + shows "choice_ax(##M[G])" +proof - + { + fix a + assume "a\M[G]" + then + obtain \ where "\\M" "val(G,\) = a" + using GenExt_def by auto + with \\\M\ + have "domain(\)\M" + using domain_closed by simp + then + obtain s \ where "s\surj(\,domain(\))" "Ord(\)" "s\M" "\\M" + using assms choice_ax_abs by auto + then + have "\\M[G]" + using M_subset_MG generic one_in_G subsetD by blast + let ?A="domain(\)\P" + let ?g = "{opair_name(check(\),s`\). \\\}" + have "?g \ M" using \s\M\ \\\M\ repl_opname_check by simp + let ?f_dot="{\opair_name(check(\),s`\),one\. \\\}" + have "?f_dot = ?g \ {one}" by blast + from one_in_M have "{one} \ M" using singletonM by simp + define f where + "f \ val(G,?f_dot)" + from \{one}\M\ \?g\M\ \?f_dot = ?g\{one}\ + have "?f_dot\M" + using cartprod_closed by simp + then + have "f \ M[G]" + unfolding f_def by (blast intro:GenExtI) + have "f = {val(G,opair_name(check(\),s`\)) . \\\}" + unfolding f_def using val_RepFun_one by simp + also + have "... = {\\,val(G,s`\)\ . \\\}" + using val_opair_name valcheck generic one_in_G one_in_P by simp + finally + have "f = {\\,val(G,s`\)\ . \\\}" . + then + have 1: "domain(f) = \" "function(f)" + unfolding function_def by auto + have 2: "y \ a \ \x\\. f ` x = y" for y + proof - + fix y + assume + "y \ a" + with \val(G,\) = a\ + obtain \ where "\\domain(\)" "val(G,\) = y" + using elem_of_val[of y _ \] by blast + with \s\surj(\,domain(\))\ + obtain \ where "\\\" "s`\ = \" + unfolding surj_def by auto + with \val(G,\) = y\ + have "val(G,s`\) = y" + by simp + with \f = {\\,val(G,s`\)\ . 
\\\}\ \\\\\ + have "\\,y\\f" + by auto + with \function(f)\ + have "f`\ = y" + using function_apply_equality by simp + with \\\\\ show + "\\\\. f ` \ = y" + by auto + qed + then + have "\\\(M[G]). \f'\(M[G]). Ord(\) \ f' \ surj(\,a)" + proof (cases "a=0") + case True + then + have "0\surj(0,a)" + unfolding surj_def by simp + then + show ?thesis using zero_in_MG by auto + next + case False + with \a\M[G]\ + obtain e where "e\a" "e\M[G]" + using transitivity_MG by blast + with 1 and 2 + have "induced_surj(f,a,e) \ surj(\,a)" + using induced_surj_is_surj by simp + moreover from \f\M[G]\ \a\M[G]\ \e\M[G]\ + have "induced_surj(f,a,e) \ M[G]" + unfolding induced_surj_def + by (simp flip: setclass_iff) + moreover note + \\\M[G]\ \Ord(\)\ + ultimately show ?thesis by auto + qed + } + then + show ?thesis using mgzf.choice_ax_abs by simp +qed + +end (* G_generic_extra_repl *) + +end \ No newline at end of file diff --git a/thys/Forcing/Extensionality_Axiom.thy b/thys/Forcing/Extensionality_Axiom.thy new file mode 100644 --- /dev/null +++ b/thys/Forcing/Extensionality_Axiom.thy @@ -0,0 +1,32 @@ +section\The Axiom of Extensionality in $M[G]$\ +theory Extensionality_Axiom +imports + Names +begin + +context forcing_data +begin + +lemma extensionality_in_MG : "extensionality(##(M[G]))" +proof - + { + fix x y z + assume + asms: "x\M[G]" "y\M[G]" "(\w\M[G] . w \ x \ w \ y)" + from \x\M[G]\ have + "z\x \ z\M[G] \ z\x" + using transitivity_MG by auto + also have + "... \ z\y" + using asms transitivity_MG by auto + finally have + "z\x \ z\y" . + } + then have + "\x\M[G] . \y\M[G] . (\z\M[G] . z \ x \ z \ y) \ x = y" + by blast + then show ?thesis unfolding extensionality_def by simp +qed + +end (* context forcing_data *) +end \ No newline at end of file diff --git a/thys/Forcing/Forces_Definition.thy b/thys/Forcing/Forces_Definition.thy new file mode 100644 --- /dev/null +++ b/thys/Forcing/Forces_Definition.thy @@ -0,0 +1,1793 @@ +section\The definition of \<^term>\forces\\ +theory Forces_Definition imports Arities FrecR Synthetic_Definition begin + +text\This is the core of our development.\ + +subsection\The relation \<^term>\frecrel\\ + +definition + frecrelP :: "[i\o,i] \ o" where + "frecrelP(M,xy) \ (\x[M]. \y[M]. pair(M,x,y,xy) \ is_frecR(M,x,y))" + +definition + frecrelP_fm :: "i \ i" where + "frecrelP_fm(a) \ Exists(Exists(And(pair_fm(1,0,a#+2),frecR_fm(1,0))))" + +lemma arity_frecrelP_fm : + "a\nat \ arity(frecrelP_fm(a)) = succ(a)" + unfolding frecrelP_fm_def + using arity_frecR_fm arity_pair_fm pred_Un_distrib + by simp + +lemma frecrelP_fm_type[TC] : + "a\nat \ frecrelP_fm(a)\formula" + unfolding frecrelP_fm_def by simp + +lemma sats_frecrelP_fm : + assumes "a\nat" "env\list(A)" + shows "sats(A,frecrelP_fm(a),env) \ frecrelP(##A,nth(a, env))" + unfolding frecrelP_def frecrelP_fm_def + using assms by (auto simp add:frecR_fm_iff_sats[symmetric]) + +lemma frecrelP_iff_sats: + assumes + "nth(a,env) = aa" "a\ nat" "env \ list(A)" + shows + "frecrelP(##A,aa) \ sats(A, frecrelP_fm(a), env)" + using assms + by (simp add:sats_frecrelP_fm) + +definition + is_frecrel :: "[i\o,i,i] \ o" where + "is_frecrel(M,A,r) \ \A2[M]. 
cartprod(M,A,A,A2) \ is_Collect(M,A2, frecrelP(M) ,r)" + +definition + frecrel_fm :: "[i,i] \ i" where + "frecrel_fm(a,r) \ Exists(And(cartprod_fm(a#+1,a#+1,0),Collect_fm(0,frecrelP_fm(0),r#+1)))" + +lemma frecrel_fm_type[TC] : + "\a\nat;b\nat\ \ frecrel_fm(a,b)\formula" + unfolding frecrel_fm_def by simp + +lemma arity_frecrel_fm : + assumes "a\nat" "b\nat" + shows "arity(frecrel_fm(a,b)) = succ(a) \ succ(b)" + unfolding frecrel_fm_def + using assms arity_Collect_fm arity_cartprod_fm arity_frecrelP_fm pred_Un_distrib + by auto + +lemma sats_frecrel_fm : + assumes + "a\nat" "r\nat" "env\list(A)" + shows + "sats(A,frecrel_fm(a,r),env) + \ is_frecrel(##A,nth(a, env),nth(r, env))" + unfolding is_frecrel_def frecrel_fm_def + using assms + by (simp add:sats_Collect_fm sats_frecrelP_fm) + +lemma is_frecrel_iff_sats: + assumes + "nth(a,env) = aa" "nth(r,env) = rr" "a\ nat" "r\ nat" "env \ list(A)" + shows + "is_frecrel(##A, aa,rr) \ sats(A, frecrel_fm(a,r), env)" + using assms + by (simp add:sats_frecrel_fm) + +definition + names_below :: "i \ i \ i" where + "names_below(P,x) \ 2\ecloseN(x)\ecloseN(x)\P" + +lemma names_belowsD: + assumes "x \ names_below(P,z)" + obtains f n1 n2 p where + "x = \f,n1,n2,p\" "f\2" "n1\ecloseN(z)" "n2\ecloseN(z)" "p\P" + using assms unfolding names_below_def by auto + + +definition + is_names_below :: "[i\o,i,i,i] \ o" where + "is_names_below(M,P,x,nb) \ \p1[M]. \p0[M]. \t[M]. \ec[M]. + is_ecloseN(M,ec,x) \ number2(M,t) \ cartprod(M,ec,P,p0) \ cartprod(M,ec,p0,p1) + \ cartprod(M,t,p1,nb)" + +definition + number2_fm :: "i\i" where + "number2_fm(a) \ Exists(And(number1_fm(0), succ_fm(0,succ(a))))" + +lemma number2_fm_type[TC] : + "a\nat \ number2_fm(a) \ formula" + unfolding number2_fm_def by simp + +lemma number2arity__fm : + "a\nat \ arity(number2_fm(a)) = succ(a)" + unfolding number2_fm_def + using number1arity__fm arity_succ_fm nat_union_abs2 pred_Un_distrib + by simp + +lemma sats_number2_fm [simp]: + "\ x \ nat; env \ list(A) \ + \ sats(A, number2_fm(x), env) \ number2(##A, nth(x,env))" + by (simp add: number2_fm_def number2_def) + +definition + is_names_below_fm :: "[i,i,i] \ i" where + "is_names_below_fm(P,x,nb) \ Exists(Exists(Exists(Exists( + And(ecloseN_fm(0,x #+ 4),And(number2_fm(1), + And(cartprod_fm(0,P #+ 4,2),And(cartprod_fm(0,2,3),cartprod_fm(1,3,nb#+4)))))))))" + +lemma arity_is_names_below_fm : + "\P\nat;x\nat;nb\nat\ \ arity(is_names_below_fm(P,x,nb)) = succ(P) \ succ(x) \ succ(nb)" + unfolding is_names_below_fm_def + using arity_cartprod_fm number2arity__fm arity_ecloseN_fm nat_union_abs2 pred_Un_distrib + by auto + + +lemma is_names_below_fm_type[TC]: + "\P\nat;x\nat;nb\nat\ \ is_names_below_fm(P,x,nb)\formula" + unfolding is_names_below_fm_def by simp + +lemma sats_is_names_below_fm : + assumes + "P\nat" "x\nat" "nb\nat" "env\list(A)" + shows + "sats(A,is_names_below_fm(P,x,nb),env) + \ is_names_below(##A,nth(P, env),nth(x, env),nth(nb, env))" + unfolding is_names_below_fm_def is_names_below_def using assms by simp + +definition + is_tuple :: "[i\o,i,i,i,i,i] \ o" where + "is_tuple(M,z,t1,t2,p,t) \ \t1t2p[M]. \t2p[M]. 
pair(M,t2,p,t2p) \ pair(M,t1,t2p,t1t2p) \ + pair(M,z,t1t2p,t)" + + +definition + is_tuple_fm :: "[i,i,i,i,i] \ i" where + "is_tuple_fm(z,t1,t2,p,tup) = Exists(Exists(And(pair_fm(t2 #+ 2,p #+ 2,0), + And(pair_fm(t1 #+ 2,0,1),pair_fm(z #+ 2,1,tup #+ 2)))))" + + +lemma arity_is_tuple_fm : "\ z\nat ; t1\nat ; t2\nat ; p\nat ; tup\nat \ \ + arity(is_tuple_fm(z,t1,t2,p,tup)) = \ {succ(z),succ(t1),succ(t2),succ(p),succ(tup)}" + unfolding is_tuple_fm_def + using arity_pair_fm nat_union_abs1 nat_union_abs2 pred_Un_distrib + by auto + +lemma is_tuple_fm_type[TC] : + "z\nat \ t1\nat \ t2\nat \ p\nat \ tup\nat \ is_tuple_fm(z,t1,t2,p,tup)\formula" + unfolding is_tuple_fm_def by simp + +lemma sats_is_tuple_fm : + assumes + "z\nat" "t1\nat" "t2\nat" "p\nat" "tup\nat" "env\list(A)" + shows + "sats(A,is_tuple_fm(z,t1,t2,p,tup),env) + \ is_tuple(##A,nth(z, env),nth(t1, env),nth(t2, env),nth(p, env),nth(tup, env))" + unfolding is_tuple_def is_tuple_fm_def using assms by simp + +lemma is_tuple_iff_sats: + assumes + "nth(a,env) = aa" "nth(b,env) = bb" "nth(c,env) = cc" "nth(d,env) = dd" "nth(e,env) = ee" + "a\nat" "b\nat" "c\nat" "d\nat" "e\nat" "env \ list(A)" + shows + "is_tuple(##A,aa,bb,cc,dd,ee) \ sats(A, is_tuple_fm(a,b,c,d,e), env)" + using assms by (simp add: sats_is_tuple_fm) + +subsection\Definition of \<^term>\forces\ for equality and membership\ + +(* p ||- \ = \ \ + \\. \\domain(\) \ domain(\) \ (\q\P. \q,p\\leq \ ((q ||- \\\) \ (q ||- \\\)) ) *) +definition + eq_case :: "[i,i,i,i,i,i] \ o" where + "eq_case(t1,t2,p,P,leq,f) \ \s. s\domain(t1) \ domain(t2) \ + (\q. q\P \ \q,p\\leq \ (f`\1,s,t1,q\=1 \ f`\1,s,t2,q\ =1))" + + +definition + is_eq_case :: "[i\o,i,i,i,i,i,i] \ o" where + "is_eq_case(M,t1,t2,p,P,leq,f) \ + \s[M]. (\d[M]. is_domain(M,t1,d) \ s\d) \ (\d[M]. is_domain(M,t2,d) \ s\d) + \ (\q[M]. q\P \ (\qp[M]. pair(M,q,p,qp) \ qp\leq) \ + (\ost1q[M]. \ost2q[M]. \o[M]. \vf1[M]. \vf2[M]. + is_tuple(M,o,s,t1,q,ost1q) \ + is_tuple(M,o,s,t2,q,ost2q) \ number1(M,o) \ + fun_apply(M,f,ost1q,vf1) \ fun_apply(M,f,ost2q,vf2) \ + (vf1 = o \ vf2 = o)))" + +(* p ||- + \ \ \ \ \v\P. \v,p\\leq \ (\q\P. \q,v\\leq \ (\\. \r\P. \\,r\\\ \ \q,r\\leq \ q ||- \ = \)) *) +definition + mem_case :: "[i,i,i,i,i,i] \ o" where + "mem_case(t1,t2,p,P,leq,f) \ \v\P. \v,p\\leq \ + (\q. \s. \r. r\P \ q\P \ \q,v\\leq \ \s,r\ \ t2 \ \q,r\\leq \ f`\0,t1,s,q\ = 1)" + +definition + is_mem_case :: "[i\o,i,i,i,i,i,i] \ o" where + "is_mem_case(M,t1,t2,p,P,leq,f) \ \v[M]. \vp[M]. v\P \ pair(M,v,p,vp) \ vp\leq \ + (\q[M]. \s[M]. \r[M]. \qv[M]. \sr[M]. \qr[M]. \z[M]. \zt1sq[M]. \o[M]. 
+ r\ P \ q\P \ pair(M,q,v,qv) \ pair(M,s,r,sr) \ pair(M,q,r,qr) \ + empty(M,z) \ is_tuple(M,z,t1,s,q,zt1sq) \ + number1(M,o) \ qv\leq \ sr\t2 \ qr\leq \ fun_apply(M,f,zt1sq,o))" + + +schematic_goal sats_is_mem_case_fm_auto: + assumes + "n1\nat" "n2\nat" "p\nat" "P\nat" "leq\nat" "f\nat" "env\list(A)" + shows + "is_mem_case(##A, nth(n1, env),nth(n2, env),nth(p, env),nth(P, env), nth(leq, env),nth(f,env)) + \ sats(A,?imc_fm(n1,n2,p,P,leq,f),env)" + unfolding is_mem_case_def + by (insert assms ; (rule sep_rules' is_tuple_iff_sats | simp)+) + + +synthesize "mem_case_fm" from_schematic sats_is_mem_case_fm_auto + +lemma arity_mem_case_fm : + assumes + "n1\nat" "n2\nat" "p\nat" "P\nat" "leq\nat" "f\nat" + shows + "arity(mem_case_fm(n1,n2,p,P,leq,f)) = + succ(n1) \ succ(n2) \ succ(p) \ succ(P) \ succ(leq) \ succ(f)" + unfolding mem_case_fm_def + using assms arity_pair_fm arity_is_tuple_fm number1arity__fm arity_fun_apply_fm arity_empty_fm + pred_Un_distrib + by auto + +schematic_goal sats_is_eq_case_fm_auto: + assumes + "n1\nat" "n2\nat" "p\nat" "P\nat" "leq\nat" "f\nat" "env\list(A)" + shows + "is_eq_case(##A, nth(n1, env),nth(n2, env),nth(p, env),nth(P, env), nth(leq, env),nth(f,env)) + \ sats(A,?iec_fm(n1,n2,p,P,leq,f),env)" + unfolding is_eq_case_def + by (insert assms ; (rule sep_rules' is_tuple_iff_sats | simp)+) + +synthesize "eq_case_fm" from_schematic sats_is_eq_case_fm_auto + +lemma arity_eq_case_fm : + assumes + "n1\nat" "n2\nat" "p\nat" "P\nat" "leq\nat" "f\nat" + shows + "arity(eq_case_fm(n1,n2,p,P,leq,f)) = + succ(n1) \ succ(n2) \ succ(p) \ succ(P) \ succ(leq) \ succ(f)" + unfolding eq_case_fm_def + using assms arity_pair_fm arity_is_tuple_fm number1arity__fm arity_fun_apply_fm arity_empty_fm + arity_domain_fm pred_Un_distrib + by auto + +definition + Hfrc :: "[i,i,i,i] \ o" where + "Hfrc(P,leq,fnnc,f) \ \ft. \n1. \n2. \c. c\P \ fnnc = \ft,n1,n2,c\ \ + ( ft = 0 \ eq_case(n1,n2,c,P,leq,f) + \ ft = 1 \ mem_case(n1,n2,c,P,leq,f))" + +definition + is_Hfrc :: "[i\o,i,i,i,i] \ o" where + "is_Hfrc(M,P,leq,fnnc,f) \ + \ft[M]. \n1[M]. \n2[M]. \co[M]. 
+ co\P \ is_tuple(M,ft,n1,n2,co,fnnc) \ + ( (empty(M,ft) \ is_eq_case(M,n1,n2,co,P,leq,f)) + \ (number1(M,ft) \ is_mem_case(M,n1,n2,co,P,leq,f)))" + +definition + Hfrc_fm :: "[i,i,i,i] \ i" where + "Hfrc_fm(P,leq,fnnc,f) \ + Exists(Exists(Exists(Exists( + And(Member(0,P #+ 4),And(is_tuple_fm(3,2,1,0,fnnc #+ 4), + Or(And(empty_fm(3),eq_case_fm(2,1,0,P #+ 4,leq #+ 4,f #+ 4)), + And(number1_fm(3),mem_case_fm(2,1,0,P #+ 4,leq #+ 4,f #+ 4)))))))))" + +lemma Hfrc_fm_type[TC] : + "\P\nat;leq\nat;fnnc\nat;f\nat\ \ Hfrc_fm(P,leq,fnnc,f)\formula" + unfolding Hfrc_fm_def by simp + +lemma arity_Hfrc_fm : + assumes + "P\nat" "leq\nat" "fnnc\nat" "f\nat" + shows + "arity(Hfrc_fm(P,leq,fnnc,f)) = succ(P) \ succ(leq) \ succ(fnnc) \ succ(f)" + unfolding Hfrc_fm_def + using assms arity_is_tuple_fm arity_mem_case_fm arity_eq_case_fm + arity_empty_fm number1arity__fm pred_Un_distrib + by auto + +lemma sats_Hfrc_fm: + assumes + "P\nat" "leq\nat" "fnnc\nat" "f\nat" "env\list(A)" + shows + "sats(A,Hfrc_fm(P,leq,fnnc,f),env) + \ is_Hfrc(##A,nth(P, env), nth(leq, env), nth(fnnc, env),nth(f, env))" + unfolding is_Hfrc_def Hfrc_fm_def + using assms + by (simp add: sats_is_tuple_fm eq_case_fm_iff_sats[symmetric] mem_case_fm_iff_sats[symmetric]) + +lemma Hfrc_iff_sats: + assumes + "P\nat" "leq\nat" "fnnc\nat" "f\nat" "env\list(A)" + "nth(P,env) = PP" "nth(leq,env) = lleq" "nth(fnnc,env) = ffnnc" "nth(f,env) = ff" + shows + "is_Hfrc(##A, PP, lleq,ffnnc,ff) + \ sats(A,Hfrc_fm(P,leq,fnnc,f),env)" + using assms + by (simp add:sats_Hfrc_fm) + +definition + is_Hfrc_at :: "[i\o,i,i,i,i,i] \ o" where + "is_Hfrc_at(M,P,leq,fnnc,f,z) \ + (empty(M,z) \ \ is_Hfrc(M,P,leq,fnnc,f)) + \ (number1(M,z) \ is_Hfrc(M,P,leq,fnnc,f))" + +definition + Hfrc_at_fm :: "[i,i,i,i,i] \ i" where + "Hfrc_at_fm(P,leq,fnnc,f,z) \ Or(And(empty_fm(z),Neg(Hfrc_fm(P,leq,fnnc,f))), + And(number1_fm(z),Hfrc_fm(P,leq,fnnc,f)))" + +lemma arity_Hfrc_at_fm : + assumes + "P\nat" "leq\nat" "fnnc\nat" "f\nat" "z\nat" + shows + "arity(Hfrc_at_fm(P,leq,fnnc,f,z)) = succ(P) \ succ(leq) \ succ(fnnc) \ succ(f) \ succ(z)" + unfolding Hfrc_at_fm_def + using assms arity_Hfrc_fm arity_empty_fm number1arity__fm pred_Un_distrib + by auto + + +lemma Hfrc_at_fm_type[TC] : + "\P\nat;leq\nat;fnnc\nat;f\nat;z\nat\ \ Hfrc_at_fm(P,leq,fnnc,f,z)\formula" + unfolding Hfrc_at_fm_def by simp + +lemma sats_Hfrc_at_fm: + assumes + "P\nat" "leq\nat" "fnnc\nat" "f\nat" "z\nat" "env\list(A)" + shows + "sats(A,Hfrc_at_fm(P,leq,fnnc,f,z),env) + \ is_Hfrc_at(##A,nth(P, env), nth(leq, env), nth(fnnc, env),nth(f, env),nth(z, env))" + unfolding is_Hfrc_at_def Hfrc_at_fm_def using assms sats_Hfrc_fm + by simp + +lemma is_Hfrc_at_iff_sats: + assumes + "P\nat" "leq\nat" "fnnc\nat" "f\nat" "z\nat" "env\list(A)" + "nth(P,env) = PP" "nth(leq,env) = lleq" "nth(fnnc,env) = ffnnc" + "nth(f,env) = ff" "nth(z,env) = zz" + shows + "is_Hfrc_at(##A, PP, lleq,ffnnc,ff,zz) + \ sats(A,Hfrc_at_fm(P,leq,fnnc,f,z),env)" + using assms by (simp add:sats_Hfrc_at_fm) + +lemma arity_tran_closure_fm : + "\x\nat;f\nat\ \ arity(trans_closure_fm(x,f)) = succ(x) \ succ(f)" + unfolding trans_closure_fm_def + using arity_omega_fm arity_field_fm arity_typed_function_fm arity_pair_fm arity_empty_fm arity_fun_apply_fm + arity_composition_fm arity_succ_fm nat_union_abs2 pred_Un_distrib + by auto + +subsection\The well-founded relation \<^term>\forcerel\\ +definition + forcerel :: "i \ i \ i" where + "forcerel(P,x) \ frecrel(names_below(P,x))^+" + +definition + is_forcerel :: "[i\o,i,i,i] \ o" where + "is_forcerel(M,P,x,z) \ \r[M]. 
\nb[M]. tran_closure(M,r,z) \ + (is_names_below(M,P,x,nb) \ is_frecrel(M,nb,r))" + +definition + forcerel_fm :: "i\ i \ i \ i" where + "forcerel_fm(p,x,z) \ Exists(Exists(And(trans_closure_fm(1, z#+2), + And(is_names_below_fm(p#+2,x#+2,0),frecrel_fm(0,1)))))" + +lemma arity_forcerel_fm: + "\p\nat;x\nat;z\nat\ \ arity(forcerel_fm(p,x,z)) = succ(p) \ succ(x) \ succ(z)" + unfolding forcerel_fm_def + using arity_frecrel_fm arity_tran_closure_fm arity_is_names_below_fm pred_Un_distrib + by auto + +lemma forcerel_fm_type[TC]: + "\p\nat;x\nat;z\nat\ \ forcerel_fm(p,x,z)\formula" + unfolding forcerel_fm_def by simp + + +lemma sats_forcerel_fm: + assumes + "p\nat" "x\nat" "z\nat" "env\list(A)" + shows + "sats(A,forcerel_fm(p,x,z),env) \ is_forcerel(##A,nth(p,env),nth(x, env),nth(z, env))" +proof - + have "sats(A,trans_closure_fm(1,z #+ 2),Cons(nb,Cons(r,env))) \ + tran_closure(##A, r, nth(z, env))" if "r\A" "nb\A" for r nb + using that assms trans_closure_fm_iff_sats[of 1 "[nb,r]@env" _ "z#+2",symmetric] by simp + moreover + have "sats(A, is_names_below_fm(succ(succ(p)), succ(succ(x)), 0), Cons(nb, Cons(r, env))) \ + is_names_below(##A, nth(p,env), nth(x, env), nb)" + if "r\A" "nb\A" for nb r + using assms that sats_is_names_below_fm[of "p #+ 2" "x #+ 2" 0 "[nb,r]@env"] by simp + moreover + have "sats(A, frecrel_fm(0, 1), Cons(nb, Cons(r, env))) \ + is_frecrel(##A, nb, r)" + if "r\A" "nb\A" for r nb + using assms that sats_frecrel_fm[of 0 1 "[nb,r]@env"] by simp + ultimately + show ?thesis using assms unfolding is_forcerel_def forcerel_fm_def by simp +qed + +subsection\\<^term>\frc_at\, forcing for atomic formulas\ +definition + frc_at :: "[i,i,i] \ i" where + "frc_at(P,leq,fnnc) \ wfrec(frecrel(names_below(P,fnnc)),fnnc, + \x f. bool_of_o(Hfrc(P,leq,x,f)))" + +definition + is_frc_at :: "[i\o,i,i,i,i] \ o" where + "is_frc_at(M,P,leq,x,z) \ \r[M]. 
is_forcerel(M,P,x,r) \ + is_wfrec(M,is_Hfrc_at(M,P,leq),r,x,z)" + +definition + frc_at_fm :: "[i,i,i,i] \ i" where + "frc_at_fm(p,l,x,z) \ Exists(And(forcerel_fm(succ(p),succ(x),0), + is_wfrec_fm(Hfrc_at_fm(6#+p,6#+l,2,1,0),0,succ(x),succ(z))))" + +lemma frc_at_fm_type [TC] : + "\p\nat;l\nat;x\nat;z\nat\ \ frc_at_fm(p,l,x,z)\formula" + unfolding frc_at_fm_def by simp + +lemma arity_frc_at_fm : + assumes "p\nat" "l\nat" "x\nat" "z\nat" + shows "arity(frc_at_fm(p,l,x,z)) = succ(p) \ succ(l) \ succ(x) \ succ(z)" +proof - + let ?\ = "Hfrc_at_fm(6 #+ p, 6 #+ l, 2, 1, 0)" + from assms + have "arity(?\) = (7#+p) \ (7#+l)" "?\ \ formula" + using arity_Hfrc_at_fm nat_simp_union + by auto + with assms + have W: "arity(is_wfrec_fm(?\, 0, succ(x), succ(z))) = 2#+p \ (2#+l) \ (2#+x) \ (2#+z)" + using arity_is_wfrec_fm[OF \?\\_\ _ _ _ _ \arity(?\) = _\] pred_Un_distrib pred_succ_eq + nat_union_abs1 + by auto + from assms + have "arity(forcerel_fm(succ(p),succ(x),0)) = succ(succ(p)) \ succ(succ(x))" + using arity_forcerel_fm nat_simp_union + by auto + with W assms + show ?thesis + unfolding frc_at_fm_def + using arity_forcerel_fm pred_Un_distrib + by auto +qed + +lemma sats_frc_at_fm : + assumes + "p\nat" "l\nat" "i\nat" "j\nat" "env\list(A)" "i < length(env)" "j < length(env)" + shows + "sats(A,frc_at_fm(p,l,i,j),env) \ + is_frc_at(##A,nth(p,env),nth(l,env),nth(i,env),nth(j,env))" +proof - + { + fix r pp ll + assume "r\A" + have 0:"is_Hfrc_at(##A,nth(p,env),nth(l,env),a2, a1, a0) \ + sats(A, Hfrc_at_fm(6#+p,6#+l,2,1,0), [a0,a1,a2,a3,a4,r]@env)" + if "a0\A" "a1\A" "a2\A" "a3\A" "a4\A" for a0 a1 a2 a3 a4 + using that assms \r\A\ + is_Hfrc_at_iff_sats[of "6#+p" "6#+l" 2 1 0 "[a0,a1,a2,a3,a4,r]@env" A] by simp + have "sats(A,is_wfrec_fm(Hfrc_at_fm(6 #+ p, 6 #+ l, 2, 1, 0), 0, succ(i), succ(j)),[r]@env) \ + is_wfrec(##A, is_Hfrc_at(##A, nth(p,env), nth(l,env)), r,nth(i, env), nth(j, env))" + using assms \r\A\ + sats_is_wfrec_fm[OF 0[simplified]] + by simp + } + moreover + have "sats(A, forcerel_fm(succ(p), succ(i), 0), Cons(r, env)) \ + is_forcerel(##A,nth(p,env),nth(i,env),r)" if "r\A" for r + using assms sats_forcerel_fm that by simp + ultimately + show ?thesis unfolding is_frc_at_def frc_at_fm_def + using assms by simp +qed + +definition + forces_eq' :: "[i,i,i,i,i] \ o" where + "forces_eq'(P,l,p,t1,t2) \ frc_at(P,l,\0,t1,t2,p\) = 1" + +definition + forces_mem' :: "[i,i,i,i,i] \ o" where + "forces_mem'(P,l,p,t1,t2) \ frc_at(P,l,\1,t1,t2,p\) = 1" + +definition + forces_neq' :: "[i,i,i,i,i] \ o" where + "forces_neq'(P,l,p,t1,t2) \ \ (\q\P. \q,p\\l \ forces_eq'(P,l,q,t1,t2))" + +definition + forces_nmem' :: "[i,i,i,i,i] \ o" where + "forces_nmem'(P,l,p,t1,t2) \ \ (\q\P. \q,p\\l \ forces_mem'(P,l,q,t1,t2))" + +definition + is_forces_eq' :: "[i\o,i,i,i,i,i] \ o" where + "is_forces_eq'(M,P,l,p,t1,t2) \ \o[M]. \z[M]. \t[M]. number1(M,o) \ empty(M,z) \ + is_tuple(M,z,t1,t2,p,t) \ is_frc_at(M,P,l,t,o)" + +definition + is_forces_mem' :: "[i\o,i,i,i,i,i] \ o" where + "is_forces_mem'(M,P,l,p,t1,t2) \ \o[M]. \t[M]. number1(M,o) \ + is_tuple(M,o,t1,t2,p,t) \ is_frc_at(M,P,l,t,o)" + +definition + is_forces_neq' :: "[i\o,i,i,i,i,i] \ o" where + "is_forces_neq'(M,P,l,p,t1,t2) \ + \ (\q[M]. q\P \ (\qp[M]. pair(M,q,p,qp) \ qp\l \ is_forces_eq'(M,P,l,q,t1,t2)))" + +definition + is_forces_nmem' :: "[i\o,i,i,i,i,i] \ o" where + "is_forces_nmem'(M,P,l,p,t1,t2) \ + \ (\q[M]. \qp[M]. 
q\P \ pair(M,q,p,qp) \ qp\l \ is_forces_mem'(M,P,l,q,t1,t2))" + +definition + forces_eq_fm :: "[i,i,i,i,i] \ i" where + "forces_eq_fm(p,l,q,t1,t2) \ + Exists(Exists(Exists(And(number1_fm(2),And(empty_fm(1), + And(is_tuple_fm(1,t1#+3,t2#+3,q#+3,0),frc_at_fm(p#+3,l#+3,0,2) ))))))" + +definition + forces_mem_fm :: "[i,i,i,i,i] \ i" where + "forces_mem_fm(p,l,q,t1,t2) \ Exists(Exists(And(number1_fm(1), + And(is_tuple_fm(1,t1#+2,t2#+2,q#+2,0),frc_at_fm(p#+2,l#+2,0,1)))))" + +definition + forces_neq_fm :: "[i,i,i,i,i] \ i" where + "forces_neq_fm(p,l,q,t1,t2) \ Neg(Exists(Exists(And(Member(1,p#+2), + And(pair_fm(1,q#+2,0),And(Member(0,l#+2),forces_eq_fm(p#+2,l#+2,1,t1#+2,t2#+2)))))))" + +definition + forces_nmem_fm :: "[i,i,i,i,i] \ i" where + "forces_nmem_fm(p,l,q,t1,t2) \ Neg(Exists(Exists(And(Member(1,p#+2), + And(pair_fm(1,q#+2,0),And(Member(0,l#+2),forces_mem_fm(p#+2,l#+2,1,t1#+2,t2#+2)))))))" + + +lemma forces_eq_fm_type [TC]: + "\ p\nat;l\nat;q\nat;t1\nat;t2\nat\ \ forces_eq_fm(p,l,q,t1,t2) \ formula" + unfolding forces_eq_fm_def + by simp + +lemma forces_mem_fm_type [TC]: + "\ p\nat;l\nat;q\nat;t1\nat;t2\nat\ \ forces_mem_fm(p,l,q,t1,t2) \ formula" + unfolding forces_mem_fm_def + by simp + +lemma forces_neq_fm_type [TC]: + "\ p\nat;l\nat;q\nat;t1\nat;t2\nat\ \ forces_neq_fm(p,l,q,t1,t2) \ formula" + unfolding forces_neq_fm_def + by simp + +lemma forces_nmem_fm_type [TC]: + "\ p\nat;l\nat;q\nat;t1\nat;t2\nat\ \ forces_nmem_fm(p,l,q,t1,t2) \ formula" + unfolding forces_nmem_fm_def + by simp + +lemma arity_forces_eq_fm : + "p\nat \ l\nat \ q\nat \ t1 \ nat \ t2 \ nat \ + arity(forces_eq_fm(p,l,q,t1,t2)) = succ(t1) \ succ(t2) \ succ(q) \ succ(p) \ succ(l)" + unfolding forces_eq_fm_def + using number1arity__fm arity_empty_fm arity_is_tuple_fm arity_frc_at_fm + pred_Un_distrib + by auto + +lemma arity_forces_mem_fm : + "p\nat \ l\nat \ q\nat \ t1 \ nat \ t2 \ nat \ + arity(forces_mem_fm(p,l,q,t1,t2)) = succ(t1) \ succ(t2) \ succ(q) \ succ(p) \ succ(l)" + unfolding forces_mem_fm_def + using number1arity__fm arity_empty_fm arity_is_tuple_fm arity_frc_at_fm + pred_Un_distrib + by auto + +lemma sats_forces_eq'_fm: + assumes "p\nat" "l\nat" "q\nat" "t1\nat" "t2\nat" "env\list(M)" + shows "sats(M,forces_eq_fm(p,l,q,t1,t2),env) \ + is_forces_eq'(##M,nth(p,env),nth(l,env),nth(q,env),nth(t1,env),nth(t2,env))" + unfolding forces_eq_fm_def is_forces_eq'_def using assms sats_is_tuple_fm sats_frc_at_fm + by simp + +lemma sats_forces_mem'_fm: + assumes "p\nat" "l\nat" "q\nat" "t1\nat" "t2\nat" "env\list(M)" + shows "sats(M,forces_mem_fm(p,l,q,t1,t2),env) \ + is_forces_mem'(##M,nth(p,env),nth(l,env),nth(q,env),nth(t1,env),nth(t2,env))" + unfolding forces_mem_fm_def is_forces_mem'_def using assms sats_is_tuple_fm sats_frc_at_fm + by simp + +lemma sats_forces_neq'_fm: + assumes "p\nat" "l\nat" "q\nat" "t1\nat" "t2\nat" "env\list(M)" + shows "sats(M,forces_neq_fm(p,l,q,t1,t2),env) \ + is_forces_neq'(##M,nth(p,env),nth(l,env),nth(q,env),nth(t1,env),nth(t2,env))" + unfolding forces_neq_fm_def is_forces_neq'_def + using assms sats_forces_eq'_fm sats_is_tuple_fm sats_frc_at_fm + by simp + +lemma sats_forces_nmem'_fm: + assumes "p\nat" "l\nat" "q\nat" "t1\nat" "t2\nat" "env\list(M)" + shows "sats(M,forces_nmem_fm(p,l,q,t1,t2),env) \ + is_forces_nmem'(##M,nth(p,env),nth(l,env),nth(q,env),nth(t1,env),nth(t2,env))" + unfolding forces_nmem_fm_def is_forces_nmem'_def + using assms sats_forces_mem'_fm sats_is_tuple_fm sats_frc_at_fm + by simp + +context forcing_data +begin + +(* Absoluteness of components *) +lemma fst_abs 
[simp]: + "\x\M; y\M \ \ is_fst(##M,x,y) \ y = fst(x)" + unfolding fst_def is_fst_def using pair_in_M_iff zero_in_M + by (auto;rule_tac the_0 the_0[symmetric],auto) + +lemma snd_abs [simp]: + "\x\M; y\M \ \ is_snd(##M,x,y) \ y = snd(x)" + unfolding snd_def is_snd_def using pair_in_M_iff zero_in_M + by (auto;rule_tac the_0 the_0[symmetric],auto) + +lemma ftype_abs[simp] : + "\x\M; y\M \ \ is_ftype(##M,x,y) \ y = ftype(x)" unfolding ftype_def is_ftype_def by simp + +lemma name1_abs[simp] : + "\x\M; y\M \ \ is_name1(##M,x,y) \ y = name1(x)" + unfolding name1_def is_name1_def + by (rule hcomp_abs[OF fst_abs];simp_all add:fst_snd_closed) + +lemma snd_snd_abs: + "\x\M; y\M \ \ is_snd_snd(##M,x,y) \ y = snd(snd(x))" + unfolding is_snd_snd_def + by (rule hcomp_abs[OF snd_abs];simp_all add:fst_snd_closed) + +lemma name2_abs[simp]: + "\x\M; y\M \ \ is_name2(##M,x,y) \ y = name2(x)" + unfolding name2_def is_name2_def + by (rule hcomp_abs[OF fst_abs snd_snd_abs];simp_all add:fst_snd_closed) + +lemma cond_of_abs[simp]: + "\x\M; y\M \ \ is_cond_of(##M,x,y) \ y = cond_of(x)" + unfolding cond_of_def is_cond_of_def + by (rule hcomp_abs[OF snd_abs snd_snd_abs];simp_all add:fst_snd_closed) + +lemma tuple_abs[simp]: + "\z\M;t1\M;t2\M;p\M;t\M\ \ + is_tuple(##M,z,t1,t2,p,t) \ t = \z,t1,t2,p\" + unfolding is_tuple_def using tuples_in_M by simp + +lemma oneN_in_M[simp]: "1\M" + by (simp flip: setclass_iff) + +lemma twoN_in_M : "2\M" + by (simp flip: setclass_iff) + +lemma comp_in_M: + "p \ q \ p\M" + "p \ q \ q\M" + using leq_in_M transitivity[of _ leq] pair_in_M_iff by auto + +(* Absoluteness of Hfrc *) + +lemma eq_case_abs [simp]: + assumes + "t1\M" "t2\M" "p\M" "f\M" + shows + "is_eq_case(##M,t1,t2,p,P,leq,f) \ eq_case(t1,t2,p,P,leq,f)" +proof - + have "q \ p \ q\M" for q + using comp_in_M by simp + moreover + have "\s,y\\t \ s\domain(t)" if "t\M" for s y t + using that unfolding domain_def by auto + ultimately + have + "(\s\M. s \ domain(t1) \ s \ domain(t2) \ (\q\M. q\P \ q \ p \ + (f ` \1, s, t1, q\ =1 \ f ` \1, s, t2, q\=1))) \ + (\s. s \ domain(t1) \ s \ domain(t2) \ (\q. q\P \ q \ p \ + (f ` \1, s, t1, q\ =1 \ f ` \1, s, t2, q\=1)))" + using assms domain_trans[OF trans_M,of t1] + domain_trans[OF trans_M,of t2] by auto + then show ?thesis + unfolding eq_case_def is_eq_case_def + using assms pair_in_M_iff n_in_M[of 1] domain_closed tuples_in_M + apply_closed leq_in_M + by simp +qed + +lemma mem_case_abs [simp]: + assumes + "t1\M" "t2\M" "p\M" "f\M" + shows + "is_mem_case(##M,t1,t2,p,P,leq,f) \ mem_case(t1,t2,p,P,leq,f)" +proof + { + fix v + assume "v\P" "v \ p" "is_mem_case(##M,t1,t2,p,P,leq,f)" + moreover + from this + have "v\M" "\v,p\ \ M" "(##M)(v)" + using transitivity[OF _ P_in_M,of v] transitivity[OF _ leq_in_M] + by simp_all + moreover + from calculation assms + obtain q r s where + "r \ P \ q \ P \ \q, v\ \ M \ \s, r\ \ M \ \q, r\ \ M \ 0 \ M \ + \0, t1, s, q\ \ M \ q \ v \ \s, r\ \ t2 \ q \ r \ f ` \0, t1, s, q\ = 1" + unfolding is_mem_case_def by auto + then + have "\q s r. 
r \ P \ q \ P \ q \ v \ \s, r\ \ t2 \ q \ r \ f ` \0, t1, s, q\ = 1" + by auto + } + then + show "mem_case(t1, t2, p, P, leq, f)" if "is_mem_case(##M, t1, t2, p, P, leq, f)" + unfolding mem_case_def using that assms by auto +next + { fix v + assume "v \ M" "v \ P" "\v, p\ \ M" "v \ p" "mem_case(t1, t2, p, P, leq, f)" + moreover + from this + obtain q s r where "r \ P \ q \ P \ q \ v \ \s, r\ \ t2 \ q \ r \ f ` \0, t1, s, q\ = 1" + unfolding mem_case_def by auto + moreover + from this \t2\M\ + have "r\M" "q\M" "s\M" "r \ P \ q \ P \ q \ v \ \s, r\ \ t2 \ q \ r \ f ` \0, t1, s, q\ = 1" + using transitivity P_in_M domain_closed[of t2] by auto + moreover + note \t1\M\ + ultimately + have "\q\M . \s\M. \r\M. + r \ P \ q \ P \ \q, v\ \ M \ \s, r\ \ M \ \q, r\ \ M \ 0 \ M \ + \0, t1, s, q\ \ M \ q \ v \ \s, r\ \ t2 \ q \ r \ f ` \0, t1, s, q\ = 1" + using tuples_in_M zero_in_M by auto + } + then + show "is_mem_case(##M, t1, t2, p, P, leq, f)" if "mem_case(t1, t2, p, P, leq, f)" + unfolding is_mem_case_def using assms that by auto +qed + + +lemma Hfrc_abs: + "\fnnc\M; f\M\ \ + is_Hfrc(##M,P,leq,fnnc,f) \ Hfrc(P,leq,fnnc,f)" + unfolding is_Hfrc_def Hfrc_def using pair_in_M_iff + by auto + +lemma Hfrc_at_abs: + "\fnnc\M; f\M ; z\M\ \ + is_Hfrc_at(##M,P,leq,fnnc,f,z) \ z = bool_of_o(Hfrc(P,leq,fnnc,f)) " + unfolding is_Hfrc_at_def using Hfrc_abs + by auto + +lemma components_closed : + "x\M \ ftype(x)\M" + "x\M \ name1(x)\M" + "x\M \ name2(x)\M" + "x\M \ cond_of(x)\M" + unfolding ftype_def name1_def name2_def cond_of_def using fst_snd_closed by simp_all + +lemma ecloseN_closed: + "(##M)(A) \ (##M)(ecloseN(A))" + "(##M)(A) \ (##M)(eclose_n(name1,A))" + "(##M)(A) \ (##M)(eclose_n(name2,A))" + unfolding ecloseN_def eclose_n_def + using components_closed eclose_closed singletonM Un_closed by auto + +lemma is_eclose_n_abs : + assumes "x\M" "ec\M" + shows "is_eclose_n(##M,is_name1,ec,x) \ ec = eclose_n(name1,x)" + "is_eclose_n(##M,is_name2,ec,x) \ ec = eclose_n(name2,x)" + unfolding is_eclose_n_def eclose_n_def + using assms name1_abs name2_abs eclose_abs singletonM components_closed + by auto + + +lemma is_ecloseN_abs : + "\x\M;ec\M\ \ is_ecloseN(##M,ec,x) \ ec = ecloseN(x)" + unfolding is_ecloseN_def ecloseN_def + using is_eclose_n_abs Un_closed union_abs ecloseN_closed + by auto + +lemma frecR_abs : + "x\M \ y\M \ frecR(x,y) \ is_frecR(##M,x,y)" + unfolding frecR_def is_frecR_def using components_closed domain_closed by simp + +lemma frecrelP_abs : + "z\M \ frecrelP(##M,z) \ (\x y. z = \x,y\ \ frecR(x,y))" + using pair_in_M_iff frecR_abs unfolding frecrelP_def by auto + +lemma frecrel_abs: + assumes + "A\M" "r\M" + shows + "is_frecrel(##M,A,r) \ r = frecrel(A)" +proof - + from \A\M\ + have "z\M" if "z\A\A" for z + using cartprod_closed transitivity that by simp + then + have "Collect(A\A,frecrelP(##M)) = Collect(A\A,\z. (\x y. z = \x,y\ \ frecR(x,y)))" + using Collect_cong[of "A\A" "A\A" "frecrelP(##M)"] assms frecrelP_abs by simp + with assms + show ?thesis unfolding is_frecrel_def def_frecrel using cartprod_closed + by simp +qed + +lemma frecrel_closed: + assumes + "x\M" + shows + "frecrel(x)\M" +proof - + have "Collect(x\x,\z. (\x y. 
z = \x,y\ \ frecR(x,y)))\M" + using Collect_in_M_0p[of "frecrelP_fm(0)"] arity_frecrelP_fm sats_frecrelP_fm + frecrelP_abs \x\M\ cartprod_closed by simp + then show ?thesis + unfolding frecrel_def Rrel_def frecrelP_def by simp +qed + +lemma field_frecrel : "field(frecrel(names_below(P,x))) \ names_below(P,x)" + unfolding frecrel_def + using field_Rrel by simp + +lemma forcerelD : "uv \ forcerel(P,x) \ uv\ names_below(P,x) \ names_below(P,x)" + unfolding forcerel_def + using trancl_type field_frecrel by blast + +lemma wf_forcerel : + "wf(forcerel(P,x))" + unfolding forcerel_def using wf_trancl wf_frecrel . + +lemma restrict_trancl_forcerel: + assumes "frecR(w,y)" + shows "restrict(f,frecrel(names_below(P,x))-``{y})`w + = restrict(f,forcerel(P,x)-``{y})`w" + unfolding forcerel_def frecrel_def using assms restrict_trancl_Rrel[of frecR] + by simp + +lemma names_belowI : + assumes "frecR(\ft,n1,n2,p\,\a,b,c,d\)" "p\P" + shows "\ft,n1,n2,p\ \ names_below(P,\a,b,c,d\)" (is "?x \ names_below(_,?y)") +proof - + from assms + have "ft \ 2" "a \ 2" + unfolding frecR_def by (auto simp add:components_simp) + from assms + consider (e) "n1 \ domain(b) \ domain(c) \ (n2 = b \ n2 =c)" + | (m) "n1 = b \ n2 \ domain(c)" + unfolding frecR_def by (auto simp add:components_simp) + then show ?thesis + proof cases + case e + then + have "n1 \ eclose(b) \ n1 \ eclose(c)" + using Un_iff in_dom_in_eclose by auto + with e + have "n1 \ ecloseN(?y)" "n2 \ ecloseN(?y)" + using ecloseNI components_in_eclose by auto + with \ft\2\ \p\P\ + show ?thesis unfolding names_below_def by auto + next + case m + then + have "n1 \ ecloseN(?y)" "n2 \ ecloseN(?y)" + using mem_eclose_trans ecloseNI + in_dom_in_eclose components_in_eclose by auto + with \ft\2\ \p\P\ + show ?thesis unfolding names_below_def + by auto + qed +qed + +lemma names_below_tr : + assumes "x\ names_below(P,y)" + "y\ names_below(P,z)" + shows "x\ names_below(P,z)" +proof - + let ?A="\y . names_below(P,y)" + from assms + obtain fx x1 x2 px where + "x = \fx,x1,x2,px\" "fx\2" "x1\ecloseN(y)" "x2\ecloseN(y)" "px\P" + unfolding names_below_def by auto + from assms + obtain fy y1 y2 py where + "y = \fy,y1,y2,py\" "fy\2" "y1\ecloseN(z)" "y2\ecloseN(z)" "py\P" + unfolding names_below_def by auto + from \x1\_\ \x2\_\ \y1\_\ \y2\_\ \x=_\ \y=_\ + have "x1\ecloseN(z)" "x2\ecloseN(z)" + using ecloseN_mono names_simp by auto + with \fx\2\ \px\P\ \x=_\ + have "x\?A(z)" + unfolding names_below_def by simp + then show ?thesis using subsetI by simp +qed + +lemma arg_into_names_below2 : + assumes "\x,y\ \ frecrel(names_below(P,z))" + shows "x \ names_below(P,y)" +proof - + { + from assms + have "x\names_below(P,z)" "y\names_below(P,z)" "frecR(x,y)" + unfolding frecrel_def Rrel_def + by auto + obtain f n1 n2 p where + "x = \f,n1,n2,p\" "f\2" "n1\ecloseN(z)" "n2\ecloseN(z)" "p\P" + using \x\names_below(P,z)\ + unfolding names_below_def by auto + moreover + obtain fy m1 m2 q where + "q\P" "y = \fy,m1,m2,q\" + using \y\names_below(P,z)\ + unfolding names_below_def by auto + moreover + note \frecR(x,y)\ + ultimately + have "x\names_below(P,y)" using names_belowI by simp + } + then show ?thesis . 
+qed + +lemma arg_into_names_below : + assumes "\x,y\ \ frecrel(names_below(P,z))" + shows "x \ names_below(P,x)" +proof - + { + from assms + have "x\names_below(P,z)" + unfolding frecrel_def Rrel_def + by auto + from \x\names_below(P,z)\ + obtain f n1 n2 p where + "x = \f,n1,n2,p\" "f\2" "n1\ecloseN(z)" "n2\ecloseN(z)" "p\P" + unfolding names_below_def by auto + then + have "n1\ecloseN(x)" "n2\ecloseN(x)" + using components_in_eclose by simp_all + with \f\2\ \p\P\ \x = \f,n1,n2,p\\ + have "x\names_below(P,x)" + unfolding names_below_def by simp + } + then show ?thesis . +qed + +lemma forcerel_arg_into_names_below : + assumes "\x,y\ \ forcerel(P,z)" + shows "x \ names_below(P,x)" + using assms + unfolding forcerel_def + by(rule trancl_induct;auto simp add: arg_into_names_below) + +lemma names_below_mono : + assumes "\x,y\ \ frecrel(names_below(P,z))" + shows "names_below(P,x) \ names_below(P,y)" +proof - + from assms + have "x\names_below(P,y)" + using arg_into_names_below2 by simp + then + show ?thesis + using names_below_tr subsetI by simp +qed + +lemma frecrel_mono : + assumes "\x,y\ \ frecrel(names_below(P,z))" + shows "frecrel(names_below(P,x)) \ frecrel(names_below(P,y))" + unfolding frecrel_def + using Rrel_mono names_below_mono assms by simp + +lemma forcerel_mono2 : + assumes "\x,y\ \ frecrel(names_below(P,z))" + shows "forcerel(P,x) \ forcerel(P,y)" + unfolding forcerel_def + using trancl_mono frecrel_mono assms by simp + +lemma forcerel_mono_aux : + assumes "\x,y\ \ frecrel(names_below(P, w))^+" + shows "forcerel(P,x) \ forcerel(P,y)" + using assms + by (rule trancl_induct,simp_all add: subset_trans forcerel_mono2) + +lemma forcerel_mono : + assumes "\x,y\ \ forcerel(P,z)" + shows "forcerel(P,x) \ forcerel(P,y)" + using forcerel_mono_aux assms unfolding forcerel_def by simp + +lemma aux: "x \ names_below(P, w) \ \x,y\ \ forcerel(P,z) \ + (y \ names_below(P, w) \ \x,y\ \ forcerel(P,w))" + unfolding forcerel_def +proof(rule_tac a=x and b=y and P="\ y . y \ names_below(P, w) \ \x,y\ \ frecrel(names_below(P,w))^+" in trancl_induct,simp) + let ?A="\ a . names_below(P, a)" + let ?R="\ a . 
frecrel(?A(a))" + let ?fR="\ a .forcerel(a)" + show "u\?A(w) \ \x,u\\?R(w)^+" if "x\?A(w)" "\x,y\\?R(z)^+" "\x,u\\?R(z)" for u + using that frecrelD frecrelI r_into_trancl unfolding names_below_def by simp + { + fix u v + assume "x \ ?A(w)" + "\x, y\ \ ?R(z)^+" + "\x, u\ \ ?R(z)^+" + "\u, v\ \ ?R(z)" + "u \ ?A(w) \ \x, u\ \ ?R(w)^+" + then + have "v \ ?A(w) \ \x, v\ \ ?R(w)^+" + proof - + assume "v \?A(w)" + from \\u,v\\_\ + have "u\?A(v)" + using arg_into_names_below2 by simp + with \v \?A(w)\ + have "u\?A(w)" + using names_below_tr by simp + with \v\_\ \\u,v\\_\ + have "\u,v\\ ?R(w)" + using frecrelD frecrelI r_into_trancl unfolding names_below_def by simp + with \u \ ?A(w) \ \x, u\ \ ?R(w)^+\ \u\?A(w)\ + have "\x, u\ \ ?R(w)^+" by simp + with \\u,v\\ ?R(w)\ + show "\x,v\\ ?R(w)^+" using trancl_trans r_into_trancl + by simp + qed + } + then show "v \ ?A(w) \ \x, v\ \ ?R(w)^+" + if "x \ ?A(w)" + "\x, y\ \ ?R(z)^+" + "\x, u\ \ ?R(z)^+" + "\u, v\ \ ?R(z)" + "u \ ?A(w) \ \x, u\ \ ?R(w)^+" for u v + using that by simp +qed + +lemma forcerel_eq : + assumes "\z,x\ \ forcerel(P,x)" + shows "forcerel(P,z) = forcerel(P,x) \ names_below(P,z)\names_below(P,z)" + using assms aux forcerelD forcerel_mono[of z x x] subsetI + by auto + +lemma forcerel_below_aux : + assumes "\z,x\ \ forcerel(P,x)" "\u,z\ \ forcerel(P,x)" + shows "u \ names_below(P,z)" + using assms(2) + unfolding forcerel_def +proof(rule trancl_induct) + show "u \ names_below(P,y)" if " \u, y\ \ frecrel(names_below(P, x))" for y + using that vimage_singleton_iff arg_into_names_below2 by simp +next + show "u \ names_below(P,z)" + if "\u, y\ \ frecrel(names_below(P, x))^+" + "\y, z\ \ frecrel(names_below(P, x))" + "u \ names_below(P, y)" + for y z + using that arg_into_names_below2[of y z x] names_below_tr by simp +qed + +lemma forcerel_below : + assumes "\z,x\ \ forcerel(P,x)" + shows "forcerel(P,x) -`` {z} \ names_below(P,z)" + using vimage_singleton_iff assms forcerel_below_aux by auto + +lemma relation_forcerel : + shows "relation(forcerel(P,z))" "trans(forcerel(P,z))" + unfolding forcerel_def using relation_trancl trans_trancl by simp_all + +lemma Hfrc_restrict_trancl: "bool_of_o(Hfrc(P, leq, y, restrict(f,frecrel(names_below(P,x))-``{y}))) + = bool_of_o(Hfrc(P, leq, y, restrict(f,(frecrel(names_below(P,x))^+)-``{y})))" + unfolding Hfrc_def bool_of_o_def eq_case_def mem_case_def + using restrict_trancl_forcerel frecRI1 frecRI2 frecRI3 + unfolding forcerel_def + by simp + +(* Recursive definition of forces for atomic formulas using a transitive relation *) +lemma frc_at_trancl: "frc_at(P,leq,z) = wfrec(forcerel(P,z),z,\x f. 
bool_of_o(Hfrc(P,leq,x,f)))" + unfolding frc_at_def forcerel_def using wf_eq_trancl Hfrc_restrict_trancl by simp + + +lemma forcerelI1 : + assumes "n1 \ domain(b) \ n1 \ domain(c)" "p\P" "d\P" + shows "\\1, n1, b, p\, \0,b,c,d\\\ forcerel(P,\0,b,c,d\)" +proof - + let ?x="\1, n1, b, p\" + let ?y="\0,b,c,d\" + from assms + have "frecR(?x,?y)" + using frecRI1 by simp + then + have "?x\names_below(P,?y)" "?y \ names_below(P,?y)" + using names_belowI assms components_in_eclose + unfolding names_below_def by auto + with \frecR(?x,?y)\ + show ?thesis + unfolding forcerel_def frecrel_def + using subsetD[OF r_subset_trancl[OF relation_Rrel]] RrelI + by auto +qed + +lemma forcerelI2 : + assumes "n1 \ domain(b) \ n1 \ domain(c)" "p\P" "d\P" + shows "\\1, n1, c, p\, \0,b,c,d\\\ forcerel(P,\0,b,c,d\)" +proof - + let ?x="\1, n1, c, p\" + let ?y="\0,b,c,d\" + from assms + have "frecR(?x,?y)" + using frecRI2 by simp + then + have "?x\names_below(P,?y)" "?y \ names_below(P,?y)" + using names_belowI assms components_in_eclose + unfolding names_below_def by auto + with \frecR(?x,?y)\ + show ?thesis + unfolding forcerel_def frecrel_def + using subsetD[OF r_subset_trancl[OF relation_Rrel]] RrelI + by auto +qed + +lemma forcerelI3 : + assumes "\n2, r\ \ c" "p\P" "d\P" "r \ P" + shows "\\0, b, n2, p\,\1, b, c, d\\ \ forcerel(P,\1,b,c,d\)" +proof - + let ?x="\0, b, n2, p\" + let ?y="\1, b, c, d\" + from assms + have "frecR(?x,?y)" + using assms frecRI3 by simp + then + have "?x\names_below(P,?y)" "?y \ names_below(P,?y)" + using names_belowI assms components_in_eclose + unfolding names_below_def by auto + with \frecR(?x,?y)\ + show ?thesis + unfolding forcerel_def frecrel_def + using subsetD[OF r_subset_trancl[OF relation_Rrel]] RrelI + by auto +qed + +lemmas forcerelI = forcerelI1[THEN vimage_singleton_iff[THEN iffD2]] + forcerelI2[THEN vimage_singleton_iff[THEN iffD2]] + forcerelI3[THEN vimage_singleton_iff[THEN iffD2]] + +lemma aux_def_frc_at: + assumes "z \ forcerel(P,x) -`` {x}" + shows "wfrec(forcerel(P,x), z, H) = wfrec(forcerel(P,z), z, H)" +proof - + let ?A="names_below(P,z)" + from assms + have "\z,x\ \ forcerel(P,x)" + using vimage_singleton_iff by simp + then + have "z \ ?A" + using forcerel_arg_into_names_below by simp + from \\z,x\ \ forcerel(P,x)\ + have E:"forcerel(P,z) = forcerel(P,x) \ (?A\?A)" + "forcerel(P,x) -`` {z} \ ?A" + using forcerel_eq forcerel_below + by auto + with \z\?A\ + have "wfrec(forcerel(P,x), z, H) = wfrec[?A](forcerel(P,x), z, H)" + using wfrec_trans_restr[OF relation_forcerel(1) wf_forcerel relation_forcerel(2), of x z ?A] + by simp + then show ?thesis + using E wfrec_restr_eq by simp +qed + +subsection\Recursive expression of \<^term>\frc_at\\ + +lemma def_frc_at : + assumes "p\P" + shows + "frc_at(P,leq,\ft,n1,n2,p\) = + bool_of_o( p \P \ + ( ft = 0 \ (\s. s\domain(n1) \ domain(n2) \ + (\q. q\P \ q \ p \ (frc_at(P,leq,\1,s,n1,q\) =1 \ frc_at(P,leq,\1,s,n2,q\) =1))) + \ ft = 1 \ ( \v\P. v \ p \ + (\q. \s. \r. r\P \ q\P \ q \ v \ \s,r\ \ n2 \ q \ r \ frc_at(P,leq,\0,n1,s,q\) = 1))))" +proof - + let ?r="\y. forcerel(P,y)" and ?Hf="\x f. bool_of_o(Hfrc(P,leq,x,f))" + let ?t="\y. ?r(y) -`` {y}" + let ?arg="\ft,n1,n2,p\" + from wf_forcerel + have wfr: "\w . wf(?r(w))" .. + with wfrec [of "?r(?arg)" ?arg ?Hf] + have "frc_at(P,leq,?arg) = ?Hf( ?arg, \x\?r(?arg) -`` {?arg}. wfrec(?r(?arg), x, ?Hf))" + using frc_at_trancl by simp + also + have " ... = ?Hf( ?arg, \x\?r(?arg) -`` {?arg}. 
frc_at(P,leq,x))" + using aux_def_frc_at frc_at_trancl by simp + finally + show ?thesis + unfolding Hfrc_def mem_case_def eq_case_def + using forcerelI assms + by auto +qed + + +subsection\Absoluteness of \<^term>\frc_at\\ + +lemma trans_forcerel_t : "trans(forcerel(P,x))" + unfolding forcerel_def using trans_trancl . + +lemma relation_forcerel_t : "relation(forcerel(P,x))" + unfolding forcerel_def using relation_trancl . + + +lemma forcerel_in_M : + assumes + "x\M" + shows + "forcerel(P,x)\M" + unfolding forcerel_def def_frecrel names_below_def +proof - + let ?Q = "2 \ ecloseN(x) \ ecloseN(x) \ P" + have "?Q \ ?Q \ M" + using \x\M\ P_in_M twoN_in_M ecloseN_closed cartprod_closed by simp + moreover + have "separation(##M,\z. \x y. z = \x, y\ \ frecR(x, y))" + proof - + have "arity(frecrelP_fm(0)) = 1" + unfolding number1_fm_def frecrelP_fm_def + by (simp del:FOL_sats_iff pair_abs empty_abs + add: fm_defs frecR_fm_def number1_fm_def components_defs nat_simp_union) + then + have "separation(##M, \z. sats(M,frecrelP_fm(0) , [z]))" + using separation_ax by simp + moreover + have "frecrelP(##M,z) \ sats(M,frecrelP_fm(0),[z])" + if "z\M" for z + using that sats_frecrelP_fm[of 0 "[z]"] by simp + ultimately + have "separation(##M,frecrelP(##M))" + unfolding separation_def by simp + then + show ?thesis using frecrelP_abs + separation_cong[of "##M" "frecrelP(##M)" "\z. \x y. z = \x, y\ \ frecR(x, y)"] + by simp + qed + ultimately + show "{z \ ?Q \ ?Q . \x y. z = \x, y\ \ frecR(x, y)}^+ \ M" + using separation_closed frecrelP_abs trancl_closed by simp +qed + +lemma relation2_Hfrc_at_abs: + "relation2(##M,is_Hfrc_at(##M,P,leq),\x f. bool_of_o(Hfrc(P,leq,x,f)))" + unfolding relation2_def using Hfrc_at_abs + by simp + +lemma Hfrc_at_closed : + "\x\M. \g\M. function(g) \ bool_of_o(Hfrc(P,leq,x,g))\M" + unfolding bool_of_o_def using zero_in_M n_in_M[of 1] by simp + +lemma wfrec_Hfrc_at : + assumes + "X\M" + shows + "wfrec_replacement(##M,is_Hfrc_at(##M,P,leq),forcerel(P,X))" +proof - + have 0:"is_Hfrc_at(##M,P,leq,a,b,c) \ + sats(M,Hfrc_at_fm(8,9,2,1,0),[c,b,a,d,e,y,x,z,P,leq,forcerel(P,X)])" + if "a\M" "b\M" "c\M" "d\M" "e\M" "y\M" "x\M" "z\M" + for a b c d e y x z + using that P_in_M leq_in_M \X\M\ forcerel_in_M + is_Hfrc_at_iff_sats[of concl:M P leq a b c 8 9 2 1 0 + "[c,b,a,d,e,y,x,z,P,leq,forcerel(P,X)]"] by simp + have 1:"sats(M,is_wfrec_fm(Hfrc_at_fm(8,9,2,1,0),5,1,0),[y,x,z,P,leq,forcerel(P,X)]) \ + is_wfrec(##M, is_Hfrc_at(##M,P,leq),forcerel(P,X), x, y)" + if "x\M" "y\M" "z\M" for x y z + using that \X\M\ forcerel_in_M P_in_M leq_in_M + sats_is_wfrec_fm[OF 0] + by simp + let + ?f="Exists(And(pair_fm(1,0,2),is_wfrec_fm(Hfrc_at_fm(8,9,2,1,0),5,1,0)))" + have satsf:"sats(M, ?f, [x,z,P,leq,forcerel(P,X)]) \ + (\y\M. pair(##M,x,y,z) & is_wfrec(##M, is_Hfrc_at(##M,P,leq),forcerel(P,X), x, y))" + if "x\M" "z\M" for x z + using that 1 \X\M\ forcerel_in_M P_in_M leq_in_M by (simp del:pair_abs) + have artyf:"arity(?f) = 5" + unfolding is_wfrec_fm_def Hfrc_at_fm_def Hfrc_fm_def Replace_fm_def PHcheck_fm_def + pair_fm_def upair_fm_def is_recfun_fm_def fun_apply_fm_def big_union_fm_def + pre_image_fm_def restriction_fm_def image_fm_def fm_defs number1_fm_def + eq_case_fm_def mem_case_fm_def is_tuple_fm_def + by (simp add:nat_simp_union) + moreover + have "?f\formula" + unfolding fm_defs Hfrc_at_fm_def by simp + ultimately + have "strong_replacement(##M,\x z. 
sats(M,?f,[x,z,P,leq,forcerel(P,X)]))" + using replacement_ax 1 artyf \X\M\ forcerel_in_M P_in_M leq_in_M by simp + then + have "strong_replacement(##M,\x z. + \y\M. pair(##M,x,y,z) & is_wfrec(##M, is_Hfrc_at(##M,P,leq),forcerel(P,X), x, y))" + using repl_sats[of M ?f "[P,leq,forcerel(P,X)]"] satsf by (simp del:pair_abs) + then + show ?thesis unfolding wfrec_replacement_def by simp +qed + +lemma names_below_abs : + "\Q\M;x\M;nb\M\ \ is_names_below(##M,Q,x,nb) \ nb = names_below(Q,x)" + unfolding is_names_below_def names_below_def + using succ_in_M_iff zero_in_M cartprod_closed is_ecloseN_abs ecloseN_closed + by auto + +lemma names_below_closed: + "\Q\M;x\M\ \ names_below(Q,x) \ M" + unfolding names_below_def + using zero_in_M cartprod_closed ecloseN_closed succ_in_M_iff + by simp + +lemma "names_below_productE" : + assumes "Q \ M" "x \ M" + "\A1 A2 A3 A4. A1 \ M \ A2 \ M \ A3 \ M \ A4 \ M \ R(A1 \ A2 \ A3 \ A4)" + shows "R(names_below(Q,x))" + unfolding names_below_def using assms zero_in_M ecloseN_closed[of x] twoN_in_M by auto + +lemma forcerel_abs : + "\x\M;z\M\ \ is_forcerel(##M,P,x,z) \ z = forcerel(P,x)" + unfolding is_forcerel_def forcerel_def + using frecrel_abs names_below_abs trancl_abs P_in_M twoN_in_M ecloseN_closed names_below_closed + names_below_productE[of concl:"\p. is_frecrel(##M,p,_) \ _ = frecrel(p)"] frecrel_closed + by simp + +lemma frc_at_abs: + assumes "fnnc\M" "z\M" + shows "is_frc_at(##M,P,leq,fnnc,z) \ z = frc_at(P,leq,fnnc)" +proof - + from assms + have "(\r\M. is_forcerel(##M,P,fnnc, r) \ is_wfrec(##M, is_Hfrc_at(##M, P, leq), r, fnnc, z)) + \ is_wfrec(##M, is_Hfrc_at(##M, P, leq), forcerel(P,fnnc), fnnc, z)" + using forcerel_abs forcerel_in_M by simp + then + show ?thesis + unfolding frc_at_trancl is_frc_at_def + using assms wfrec_Hfrc_at[of fnnc] wf_forcerel trans_forcerel_t relation_forcerel_t forcerel_in_M + Hfrc_at_closed relation2_Hfrc_at_abs + trans_wfrec_abs[of "forcerel(P,fnnc)" fnnc z "is_Hfrc_at(##M,P,leq)" "\x f. bool_of_o(Hfrc(P,leq,x,f))"] + by (simp flip:setclass_iff) +qed + +lemma forces_eq'_abs : + "\p\M ; t1\M ; t2\M\ \ is_forces_eq'(##M,P,leq,p,t1,t2) \ forces_eq'(P,leq,p,t1,t2)" + unfolding is_forces_eq'_def forces_eq'_def + using frc_at_abs zero_in_M tuples_in_M by auto + +lemma forces_mem'_abs : + "\p\M ; t1\M ; t2\M\ \ is_forces_mem'(##M,P,leq,p,t1,t2) \ forces_mem'(P,leq,p,t1,t2)" + unfolding is_forces_mem'_def forces_mem'_def + using frc_at_abs zero_in_M tuples_in_M by auto + +lemma forces_neq'_abs : + assumes + "p\M" "t1\M" "t2\M" + shows + "is_forces_neq'(##M,P,leq,p,t1,t2) \ forces_neq'(P,leq,p,t1,t2)" +proof - + have "q\M" if "q\P" for q + using that transitivity P_in_M by simp + then show ?thesis + unfolding is_forces_neq'_def forces_neq'_def + using assms forces_eq'_abs pair_in_M_iff + by (auto,blast) +qed + + +lemma forces_nmem'_abs : + assumes + "p\M" "t1\M" "t2\M" + shows + "is_forces_nmem'(##M,P,leq,p,t1,t2) \ forces_nmem'(P,leq,p,t1,t2)" +proof - + have "q\M" if "q\P" for q + using that transitivity P_in_M by simp + then show ?thesis + unfolding is_forces_nmem'_def forces_nmem'_def + using assms forces_mem'_abs pair_in_M_iff + by (auto,blast) +qed + +end (* forcing_data *) + +subsection\Forcing for general formulas\ + +definition + ren_forces_nand :: "i\i" where + "ren_forces_nand(\) \ Exists(And(Equal(0,1),iterates(\p. 
incr_bv(p)`1 , 2, \)))" + +lemma ren_forces_nand_type[TC] : + "\\formula \ ren_forces_nand(\) \formula" + unfolding ren_forces_nand_def + by simp + +lemma arity_ren_forces_nand : + assumes "\\formula" + shows "arity(ren_forces_nand(\)) \ succ(arity(\))" +proof - + consider (lt) "1)" | (ge) "\ 1 < arity(\)" + by auto + then + show ?thesis + proof cases + case lt + with \\\_\ + have "2 < succ(arity(\))" "2)#+2" + using succ_ltI by auto + with \\\_\ + have "arity(iterates(\p. incr_bv(p)`1,2,\)) = 2#+arity(\)" + using arity_incr_bv_lemma lt + by auto + with \\\_\ + show ?thesis + unfolding ren_forces_nand_def + using lt pred_Un_distrib nat_union_abs1 Un_assoc[symmetric] Un_le_compat + by simp + next + case ge + with \\\_\ + have "arity(\) \ 1" "pred(arity(\)) \ 1" + using not_lt_iff_le le_trans[OF le_pred] + by simp_all + with \\\_\ + have "arity(iterates(\p. incr_bv(p)`1,2,\)) = (arity(\))" + using arity_incr_bv_lemma ge + by simp + with \arity(\) \ 1\ \\\_\ \pred(_) \ 1\ + show ?thesis + unfolding ren_forces_nand_def + using pred_Un_distrib nat_union_abs1 Un_assoc[symmetric] nat_union_abs2 + by simp + qed +qed + +lemma sats_ren_forces_nand: + "[q,P,leq,o,p] @ env \ list(M) \ \\formula \ + sats(M, ren_forces_nand(\),[q,p,P,leq,o] @ env) \ sats(M, \,[q,P,leq,o] @ env)" + unfolding ren_forces_nand_def + using sats_incr_bv_iff [of _ _ M _ "[q]"] + by simp + + +definition + ren_forces_forall :: "i\i" where + "ren_forces_forall(\) \ + Exists(Exists(Exists(Exists(Exists( + And(Equal(0,6),And(Equal(1,7),And(Equal(2,8),And(Equal(3,9), + And(Equal(4,5),iterates(\p. incr_bv(p)`5 , 5, \)))))))))))" + +lemma arity_ren_forces_all : + assumes "\\formula" + shows "arity(ren_forces_forall(\)) = 5 \ arity(\)" +proof - + consider (lt) "5)" | (ge) "\ 5 < arity(\)" + by auto + then + show ?thesis + proof cases + case lt + with \\\_\ + have "5 < succ(arity(\))" "5)#+2" "5)#+3" "5)#+4" + using succ_ltI by auto + with \\\_\ + have "arity(iterates(\p. incr_bv(p)`5,5,\)) = 5#+arity(\)" + using arity_incr_bv_lemma lt + by simp + with \\\_\ + show ?thesis + unfolding ren_forces_forall_def + using pred_Un_distrib nat_union_abs1 Un_assoc[symmetric] nat_union_abs2 + by simp + next + case ge + with \\\_\ + have "arity(\) \ 5" "pred^5(arity(\)) \ 5" + using not_lt_iff_le le_trans[OF le_pred] + by simp_all + with \\\_\ + have "arity(iterates(\p. incr_bv(p)`5,5,\)) = arity(\)" + using arity_incr_bv_lemma ge + by simp + with \arity(\) \ 5\ \\\_\ \pred^5(_) \ 5\ + show ?thesis + unfolding ren_forces_forall_def + using pred_Un_distrib nat_union_abs1 Un_assoc[symmetric] nat_union_abs2 + by simp + qed +qed + +lemma ren_forces_forall_type[TC] : + "\\formula \ ren_forces_forall(\) \formula" + unfolding ren_forces_forall_def by simp + +lemma sats_ren_forces_forall : + "[x,P,leq,o,p] @ env \ list(M) \ \\formula \ + sats(M, ren_forces_forall(\),[x,p,P,leq,o] @ env) \ sats(M, \,[p,P,leq,o,x] @ env)" + unfolding ren_forces_forall_def + using sats_incr_bv_iff [of _ _ M _ "[p,P,leq,o,x]"] + by simp + + +definition + is_leq :: "[i\o,i,i,i] \ o" where + "is_leq(A,l,q,p) \ \qp[A]. 
(pair(A,q,p,qp) \ qp\l)" + +lemma (in forcing_data) leq_abs[simp]: + "\ l\M ; q\M ; p\M \ \ is_leq(##M,l,q,p) \ \q,p\\l" + unfolding is_leq_def using pair_in_M_iff by simp + + +definition + leq_fm :: "[i,i,i] \ i" where + "leq_fm(leq,q,p) \ Exists(And(pair_fm(q#+1,p#+1,0),Member(0,leq#+1)))" + +lemma arity_leq_fm : + "\leq\nat;q\nat;p\nat\ \ arity(leq_fm(leq,q,p)) = succ(q) \ succ(p) \ succ(leq)" + unfolding leq_fm_def + using arity_pair_fm pred_Un_distrib nat_simp_union + by auto + +lemma leq_fm_type[TC] : + "\leq\nat;q\nat;p\nat\ \ leq_fm(leq,q,p)\formula" + unfolding leq_fm_def by simp + +lemma sats_leq_fm : + "\ leq\nat;q\nat;p\nat;env\list(A) \ \ + sats(A,leq_fm(leq,q,p),env) \ is_leq(##A,nth(leq,env),nth(q,env),nth(p,env))" + unfolding leq_fm_def is_leq_def by simp + +subsubsection\The primitive recursion\ + +consts forces' :: "i\i" +primrec + "forces'(Member(x,y)) = forces_mem_fm(1,2,0,x#+4,y#+4)" + "forces'(Equal(x,y)) = forces_eq_fm(1,2,0,x#+4,y#+4)" + "forces'(Nand(p,q)) = + Neg(Exists(And(Member(0,2),And(leq_fm(3,0,1),And(ren_forces_nand(forces'(p)), + ren_forces_nand(forces'(q)))))))" + "forces'(Forall(p)) = Forall(ren_forces_forall(forces'(p)))" + + +definition + forces :: "i\i" where + "forces(\) \ And(Member(0,1),forces'(\))" + +lemma forces'_type [TC]: "\\formula \ forces'(\) \ formula" + by (induct \ set:formula; simp) + +lemma forces_type[TC] : "\\formula \ forces(\) \ formula" + unfolding forces_def by simp + +context forcing_data +begin + +subsection\Forcing for atomic formulas in context\ + +definition + forces_eq :: "[i,i,i] \ o" where + "forces_eq \ forces_eq'(P,leq)" + +definition + forces_mem :: "[i,i,i] \ o" where + "forces_mem \ forces_mem'(P,leq)" + +(* frc_at(P,leq,\0,t1,t2,p\) = 1*) +definition + is_forces_eq :: "[i,i,i] \ o" where + "is_forces_eq \ is_forces_eq'(##M,P,leq)" + +(* frc_at(P,leq,\1,t1,t2,p\) = 1*) +definition + is_forces_mem :: "[i,i,i] \ o" where + "is_forces_mem \ is_forces_mem'(##M,P,leq)" + + +lemma def_forces_eq: "p\P \ forces_eq(p,t1,t2) \ + (\s\domain(t1) \ domain(t2). \q. q\P \ q \ p \ + (forces_mem(q,s,t1) \ forces_mem(q,s,t2)))" + unfolding forces_eq_def forces_mem_def forces_eq'_def forces_mem'_def + using def_frc_at[of p 0 t1 t2 ] unfolding bool_of_o_def + by auto + +lemma def_forces_mem: "p\P \ forces_mem(p,t1,t2) \ + (\v\P. v \ p \ + (\q. \s. \r. 
r\P \ q\P \ q \ v \ \s,r\ \ t2 \ q \ r \ forces_eq(q,t1,s)))" + unfolding forces_eq'_def forces_mem'_def forces_eq_def forces_mem_def + using def_frc_at[of p 1 t1 t2] unfolding bool_of_o_def + by auto + +lemma forces_eq_abs : + "\p\M ; t1\M ; t2\M\ \ is_forces_eq(p,t1,t2) \ forces_eq(p,t1,t2)" + unfolding is_forces_eq_def forces_eq_def + using forces_eq'_abs by simp + +lemma forces_mem_abs : + "\p\M ; t1\M ; t2\M\ \ is_forces_mem(p,t1,t2) \ forces_mem(p,t1,t2)" + unfolding is_forces_mem_def forces_mem_def + using forces_mem'_abs by simp + +lemma sats_forces_eq_fm: + assumes "p\nat" "l\nat" "q\nat" "t1\nat" "t2\nat" "env\list(M)" + "nth(p,env)=P" "nth(l,env)=leq" + shows "sats(M,forces_eq_fm(p,l,q,t1,t2),env) \ + is_forces_eq(nth(q,env),nth(t1,env),nth(t2,env))" + unfolding forces_eq_fm_def is_forces_eq_def is_forces_eq'_def + using assms sats_is_tuple_fm sats_frc_at_fm + by simp + +lemma sats_forces_mem_fm: + assumes "p\nat" "l\nat" "q\nat" "t1\nat" "t2\nat" "env\list(M)" + "nth(p,env)=P" "nth(l,env)=leq" + shows "sats(M,forces_mem_fm(p,l,q,t1,t2),env) \ + is_forces_mem(nth(q,env),nth(t1,env),nth(t2,env))" + unfolding forces_mem_fm_def is_forces_mem_def is_forces_mem'_def + using assms sats_is_tuple_fm sats_frc_at_fm + by simp + + +definition + forces_neq :: "[i,i,i] \ o" where + "forces_neq(p,t1,t2) \ \ (\q\P. q\p \ forces_eq(q,t1,t2))" + +definition + forces_nmem :: "[i,i,i] \ o" where + "forces_nmem(p,t1,t2) \ \ (\q\P. q\p \ forces_mem(q,t1,t2))" + + +lemma forces_neq : + "forces_neq(p,t1,t2) \ forces_neq'(P,leq,p,t1,t2)" + unfolding forces_neq_def forces_neq'_def forces_eq_def by simp + +lemma forces_nmem : + "forces_nmem(p,t1,t2) \ forces_nmem'(P,leq,p,t1,t2)" + unfolding forces_nmem_def forces_nmem'_def forces_mem_def by simp + + +lemma sats_forces_Member : + assumes "x\nat" "y\nat" "env\list(M)" + "nth(x,env)=xx" "nth(y,env)=yy" "q\M" + shows "sats(M,forces(Member(x,y)),[q,P,leq,one]@env) \ + (q\P \ is_forces_mem(q,xx,yy))" + unfolding forces_def + using assms sats_forces_mem_fm P_in_M leq_in_M one_in_M + by simp + +lemma sats_forces_Equal : + assumes "x\nat" "y\nat" "env\list(M)" + "nth(x,env)=xx" "nth(y,env)=yy" "q\M" + shows "sats(M,forces(Equal(x,y)),[q,P,leq,one]@env) \ + (q\P \ is_forces_eq(q,xx,yy))" + unfolding forces_def + using assms sats_forces_eq_fm P_in_M leq_in_M one_in_M + by simp + +lemma sats_forces_Nand : + assumes "\\formula" "\\formula" "env\list(M)" "p\M" + shows "sats(M,forces(Nand(\,\)),[p,P,leq,one]@env) \ + (p\P \ \(\q\M. q\P \ is_leq(##M,leq,q,p) \ + (sats(M,forces'(\),[q,P,leq,one]@env) \ sats(M,forces'(\),[q,P,leq,one]@env))))" + unfolding forces_def using sats_leq_fm assms sats_ren_forces_nand P_in_M leq_in_M one_in_M + by simp + +lemma sats_forces_Neg : + assumes "\\formula" "env\list(M)" "p\M" + shows "sats(M,forces(Neg(\)),[p,P,leq,one]@env) \ + (p\P \ \(\q\M. q\P \ is_leq(##M,leq,q,p) \ + (sats(M,forces'(\),[q,P,leq,one]@env))))" + unfolding Neg_def using assms sats_forces_Nand + by simp + +lemma sats_forces_Forall : + assumes "\\formula" "env\list(M)" "p\M" + shows "sats(M,forces(Forall(\)),[p,P,leq,one]@env) \ + p\P \ (\x\M. 
sats(M,forces'(\),[p,P,leq,one,x]@env))" + unfolding forces_def using assms sats_ren_forces_forall P_in_M leq_in_M one_in_M + by simp + +end (* forcing_data *) + +subsection\The arity of \<^term>\forces\\ + +lemma arity_forces_at: + assumes "x \ nat" "y \ nat" + shows "arity(forces(Member(x, y))) = (succ(x) \ succ(y)) #+ 4" + "arity(forces(Equal(x, y))) = (succ(x) \ succ(y)) #+ 4" + unfolding forces_def + using assms arity_forces_mem_fm arity_forces_eq_fm succ_Un_distrib nat_simp_union + by auto + +lemma arity_forces': + assumes "\\formula" + shows "arity(forces'(\)) \ arity(\) #+ 4" + using assms +proof (induct set:formula) + case (Member x y) + then + show ?case + using arity_forces_mem_fm succ_Un_distrib nat_simp_union + by simp +next + case (Equal x y) + then + show ?case + using arity_forces_eq_fm succ_Un_distrib nat_simp_union + by simp +next + case (Nand \ \) + let ?\' = "ren_forces_nand(forces'(\))" + let ?\' = "ren_forces_nand(forces'(\))" + have "arity(leq_fm(3, 0, 1)) = 4" + using arity_leq_fm succ_Un_distrib nat_simp_union + by simp + have "3 \ (4#+arity(\)) \ (4#+arity(\))" (is "_ \ ?rhs") + using nat_simp_union by simp + from \\\_\ Nand + have "pred(arity(?\')) \ ?rhs" "pred(arity(?\')) \ ?rhs" + proof - + from \\\_\ \\\_\ + have A:"pred(arity(?\')) \ arity(forces'(\))" + "pred(arity(?\')) \ arity(forces'(\))" + using pred_mono[OF _ arity_ren_forces_nand] pred_succ_eq + by simp_all + from Nand + have "3 \ arity(forces'(\)) \ arity(\) #+ 4" + "3 \ arity(forces'(\)) \ arity(\) #+ 4" + using Un_le by simp_all + with Nand + show "pred(arity(?\')) \ ?rhs" + "pred(arity(?\')) \ ?rhs" + using le_trans[OF A(1)] le_trans[OF A(2)] le_Un_iff + by simp_all + qed + with Nand \_=4\ + show ?case + using pred_Un_distrib Un_assoc[symmetric] succ_Un_distrib nat_union_abs1 Un_leI3[OF \3 \ ?rhs\] + by simp +next + case (Forall \) + let ?\' = "ren_forces_forall(forces'(\))" + show ?case + proof (cases "arity(\) = 0") + case True + with Forall + show ?thesis + proof - + from Forall True + have "arity(forces'(\)) \ 5" + using le_trans[of _ 4 5] by auto + with \\\_\ + have "arity(?\') \ 5" + using arity_ren_forces_all[OF forces'_type[OF \\\_\]] nat_union_abs2 + by auto + with Forall True + show ?thesis + using pred_mono[OF _ \arity(?\') \ 5\] + by simp + qed + next + case False + with Forall + show ?thesis + proof - + from Forall False + have "arity(?\') = 5 \ arity(forces'(\))" + "arity(forces'(\)) \ 5 #+ arity(\)" + "4 \ succ(succ(succ(arity(\))))" + using Ord_0_lt arity_ren_forces_all + le_trans[OF _ add_le_mono[of 4 5, OF _ le_refl]] + by auto + with \\\_\ + have "5 \ arity(forces'(\)) \ 5#+arity(\)" + using nat_simp_union by auto + with \\\_\ \arity(?\') = 5 \ _\ + show ?thesis + using pred_Un_distrib succ_pred_eq[OF _ \arity(\)\0\] + pred_mono[OF _ Forall(2)] Un_le[OF \4\succ(_)\] + by simp + qed + qed +qed + +lemma arity_forces : + assumes "\\formula" + shows "arity(forces(\)) \ 4#+arity(\)" + unfolding forces_def + using assms arity_forces' le_trans nat_simp_union by auto + +lemma arity_forces_le : + assumes "\\formula" "n\nat" "arity(\) \ n" + shows "arity(forces(\)) \ 4#+n" + using assms le_trans[OF _ add_le_mono[OF le_refl[of 5] \arity(\)\_\]] arity_forces + by auto + +end \ No newline at end of file diff --git a/thys/Forcing/Forcing_Data.thy b/thys/Forcing/Forcing_Data.thy new file mode 100644 --- /dev/null +++ b/thys/Forcing/Forcing_Data.thy @@ -0,0 +1,369 @@ +section\Transitive set models of ZF\ +text\This theory defines the locale \<^term>\M_ZF_trans\ for +transitive models of ZF, and 
the associated \<^term>\forcing_data\ + that adds a forcing notion\ +theory Forcing_Data + imports + Forcing_Notions + Interface + +begin + +lemma Transset_M : + "Transset(M) \ y\x \ x \ M \ y \ M" + by (simp add: Transset_def,auto) + + +locale M_ZF = + fixes M + assumes + upair_ax: "upair_ax(##M)" + and Union_ax: "Union_ax(##M)" + and power_ax: "power_ax(##M)" + and extensionality: "extensionality(##M)" + and foundation_ax: "foundation_ax(##M)" + and infinity_ax: "infinity_ax(##M)" + and separation_ax: "\\formula \ env\list(M) \ arity(\) \ 1 #+ length(env) \ + separation(##M,\x. sats(M,\,[x] @ env))" + and replacement_ax: "\\formula \ env\list(M) \ arity(\) \ 2 #+ length(env) \ + strong_replacement(##M,\x y. sats(M,\,[x,y] @ env))" + +locale M_ctm = M_ZF + + fixes enum + assumes M_countable: "enum\bij(nat,M)" + and trans_M: "Transset(M)" + +begin +interpretation intf: M_ZF_trans "M" + using M_ZF_trans.intro + trans_M upair_ax Union_ax power_ax extensionality + foundation_ax infinity_ax separation_ax[simplified] + replacement_ax[simplified] + by simp + + +lemmas transitivity = Transset_intf[OF trans_M] + +lemma zero_in_M: "0 \ M" + by (rule intf.zero_in_M) + +lemma tuples_in_M: "A\M \ B\M \ \A,B\\M" + by (simp flip:setclass_iff) + +lemma nat_in_M : "nat \ M" + by (rule intf.nat_in_M) + +lemma n_in_M : "n\nat \ n\M" + using nat_in_M transitivity by simp + +lemma mtriv: "M_trivial(##M)" + by (rule intf.mtriv) + +lemma mtrans: "M_trans(##M)" + by (rule intf.mtrans) + +lemma mbasic: "M_basic(##M)" + by (rule intf.mbasic) + +lemma mtrancl: "M_trancl(##M)" + by (rule intf.mtrancl) + +lemma mdatatypes: "M_datatypes(##M)" + by (rule intf.mdatatypes) + +lemma meclose: "M_eclose(##M)" + by (rule intf.meclose) + +lemma meclose_pow: "M_eclose_pow(##M)" + by (rule intf.meclose_pow) + + + +end (* M_ctm *) + +(* M_ctm interface *) +sublocale M_ctm \ M_trivial "##M" + by (rule mtriv) + +sublocale M_ctm \ M_trans "##M" + by (rule mtrans) + +sublocale M_ctm \ M_basic "##M" + by (rule mbasic) + +sublocale M_ctm \ M_trancl "##M" + by (rule mtrancl) + +sublocale M_ctm \ M_datatypes "##M" + by (rule mdatatypes) + +sublocale M_ctm \ M_eclose "##M" + by (rule meclose) + +sublocale M_ctm \ M_eclose_pow "##M" + by (rule meclose_pow) + +(* end interface *) + +context M_ctm +begin + +subsection\\<^term>\Collects\ in $M$\ +lemma Collect_in_M_0p : + assumes + Qfm : "Q_fm \ formula" and + Qarty : "arity(Q_fm) = 1" and + Qsats : "\x. x\M \ sats(M,Q_fm,[x]) \ is_Q(##M,x)" and + Qabs : "\x. x\M \ is_Q(##M,x) \ Q(x)" and + "A\M" + shows + "Collect(A,Q)\M" +proof - + have "z\A \ z\M" for z + using \A\M\ transitivity[of z A] by simp + then + have 1:"Collect(A,is_Q(##M)) = Collect(A,Q)" + using Qabs Collect_cong[of "A" "A" "is_Q(##M)" "Q"] by simp + have "separation(##M,is_Q(##M))" + using separation_ax Qsats Qarty Qfm + separation_cong[of "##M" "\y. sats(M,Q_fm,[y])" "is_Q(##M)"] + by simp + then + have "Collect(A,is_Q(##M))\M" + using separation_closed \A\M\ by simp + then + show ?thesis using 1 by simp +qed + +lemma Collect_in_M_2p : + assumes + Qfm : "Q_fm \ formula" and + Qarty : "arity(Q_fm) = 3" and + params_M : "y\M" "z\M" and + Qsats : "\x. x\M \ sats(M,Q_fm,[x,y,z]) \ is_Q(##M,x,y,z)" and + Qabs : "\x. x\M \ is_Q(##M,x,y,z) \ Q(x,y,z)" and + "A\M" + shows + "Collect(A,\x. Q(x,y,z))\M" +proof - + have "z\A \ z\M" for z + using \A\M\ transitivity[of z A] by simp + then + have 1:"Collect(A,\x. is_Q(##M,x,y,z)) = Collect(A,\x. Q(x,y,z))" + using Qabs Collect_cong[of "A" "A" "\x. is_Q(##M,x,y,z)" "\x. 
Q(x,y,z)"] by simp + have "separation(##M,\x. is_Q(##M,x,y,z))" + using separation_ax Qsats Qarty Qfm params_M + separation_cong[of "##M" "\x. sats(M,Q_fm,[x,y,z])" "\x. is_Q(##M,x,y,z)"] + by simp + then + have "Collect(A,\x. is_Q(##M,x,y,z))\M" + using separation_closed \A\M\ by simp + then + show ?thesis using 1 by simp +qed + +lemma Collect_in_M_4p : + assumes + Qfm : "Q_fm \ formula" and + Qarty : "arity(Q_fm) = 5" and + params_M : "a1\M" "a2\M" "a3\M" "a4\M" and + Qsats : "\x. x\M \ sats(M,Q_fm,[x,a1,a2,a3,a4]) \ is_Q(##M,x,a1,a2,a3,a4)" and + Qabs : "\x. x\M \ is_Q(##M,x,a1,a2,a3,a4) \ Q(x,a1,a2,a3,a4)" and + "A\M" + shows + "Collect(A,\x. Q(x,a1,a2,a3,a4))\M" +proof - + have "z\A \ z\M" for z + using \A\M\ transitivity[of z A] by simp + then + have 1:"Collect(A,\x. is_Q(##M,x,a1,a2,a3,a4)) = Collect(A,\x. Q(x,a1,a2,a3,a4))" + using Qabs Collect_cong[of "A" "A" "\x. is_Q(##M,x,a1,a2,a3,a4)" "\x. Q(x,a1,a2,a3,a4)"] + by simp + have "separation(##M,\x. is_Q(##M,x,a1,a2,a3,a4))" + using separation_ax Qsats Qarty Qfm params_M + separation_cong[of "##M" "\x. sats(M,Q_fm,[x,a1,a2,a3,a4])" + "\x. is_Q(##M,x,a1,a2,a3,a4)"] + by simp + then + have "Collect(A,\x. is_Q(##M,x,a1,a2,a3,a4))\M" + using separation_closed \A\M\ by simp + then + show ?thesis using 1 by simp +qed + +lemma Repl_in_M : + assumes + f_fm: "f_fm \ formula" and + f_ar: "arity(f_fm)\ 2 #+ length(env)" and + fsats: "\x y. x\M \ y\M \ sats(M,f_fm,[x,y]@env) \ is_f(x,y)" and + fabs: "\x y. x\M \ y\M \ is_f(x,y) \ y = f(x)" and + fclosed: "\x. x\A \ f(x) \ M" and + "A\M" "env\list(M)" + shows "{f(x). x\A}\M" +proof - + have "strong_replacement(##M, \x y. sats(M,f_fm,[x,y]@env))" + using replacement_ax f_fm f_ar \env\list(M)\ by simp + then + have "strong_replacement(##M, \x y. y = f(x))" + using fsats fabs + strong_replacement_cong[of "##M" "\x y. sats(M,f_fm,[x,y]@env)" "\x y. y = f(x)"] + by simp + then + have "{ y . x\A , y = f(x) } \ M" + using \A\M\ fclosed strong_replacement_closed by simp + moreover + have "{f(x). x\A} = { y . x\A , y = f(x) }" + by auto + ultimately show ?thesis by simp +qed + +end (* M_ctm *) + +subsection\A forcing locale and generic filters\ +locale forcing_data = forcing_notion + M_ctm + + assumes P_in_M: "P \ M" + and leq_in_M: "leq \ M" + +begin + +lemma transD : "Transset(M) \ y \ M \ y \ M" + by (unfold Transset_def, blast) + +(* P \ M *) +lemmas P_sub_M = transD[OF trans_M P_in_M] + +definition + M_generic :: "i\o" where + "M_generic(G) \ filter(G) \ (\D\M. D\P \ dense(D)\D\G\0)" + +lemma M_genericD [dest]: "M_generic(G) \ x\G \ x\P" + unfolding M_generic_def by (blast dest:filterD) + +lemma M_generic_leqD [dest]: "M_generic(G) \ p\G \ q\P \ p\q \ q\G" + unfolding M_generic_def by (blast dest:filter_leqD) + +lemma M_generic_compatD [dest]: "M_generic(G) \ p\G \ r\G \ \q\G. q\p \ q\r" + unfolding M_generic_def by (blast dest:low_bound_filter) + +lemma M_generic_denseD [dest]: "M_generic(G) \ dense(D) \ D\P \ D\M \ \q\G. q\D" + unfolding M_generic_def by blast + +lemma G_nonempty: "M_generic(G) \ G\0" +proof - + have "P\P" .. 
+ assume + "M_generic(G)" + with P_in_M P_dense \P\P\ show + "G \ 0" + unfolding M_generic_def by auto +qed + +lemma one_in_G : + assumes "M_generic(G)" + shows "one \ G" +proof - + from assms have "G\P" + unfolding M_generic_def and filter_def by simp + from \M_generic(G)\ have "increasing(G)" + unfolding M_generic_def and filter_def by simp + with \G\P\ and \M_generic(G)\ + show ?thesis + using G_nonempty and one_in_P and one_max + unfolding increasing_def by blast +qed + +lemma G_subset_M: "M_generic(G) \ G \ M" + using transitivity[OF _ P_in_M] by auto + +declare iff_trans [trans] + +lemma generic_filter_existence: + "p\P \ \G. p\G \ M_generic(G)" +proof - + assume "p\P" + let ?D="\n\nat. (if (enum`n\P \ dense(enum`n)) then enum`n else P)" + have "\n\nat. ?D`n \ Pow(P)" + by auto + then + have "?D:nat\Pow(P)" + using lam_type by auto + have Eq4: "\n\nat. dense(?D`n)" + proof(intro ballI) + fix n + assume "n\nat" + then + have "dense(?D`n) \ dense(if enum`n \ P \ dense(enum`n) then enum`n else P)" + by simp + also + have "... \ (\(enum`n \ P \ dense(enum`n)) \ dense(P)) " + using split_if by simp + finally + show "dense(?D`n)" + using P_dense \n\nat\ by auto + qed + from \?D\_\ and Eq4 + interpret cg: countable_generic P leq one ?D + by (unfold_locales, auto) + from \p\P\ + obtain G where Eq6: "p\G \ filter(G) \ (\n\nat.(?D`n)\G\0)" + using cg.countable_rasiowa_sikorski[where M="\_. M"] P_sub_M + M_countable[THEN bij_is_fun] M_countable[THEN bij_is_surj, THEN surj_range] + unfolding cg.D_generic_def by blast + then + have Eq7: "(\D\M. D\P \ dense(D)\D\G\0)" + proof (intro ballI impI) + fix D + assume "D\M" and Eq9: "D \ P \ dense(D) " + have "\y\M. \x\nat. enum`x= y" + using M_countable and bij_is_surj unfolding surj_def by (simp) + with \D\M\ obtain n where Eq10: "n\nat \ enum`n = D" + by auto + with Eq9 and if_P + have "?D`n = D" by (simp) + with Eq6 and Eq10 + show "D\G\0" by auto + qed + with Eq6 + show ?thesis unfolding M_generic_def by auto +qed + +(* Compatibility lemmas *) +lemma compat_in_abs : + assumes + "A\M" "r\M" "p\M" "q\M" + shows + "is_compat_in(##M,A,r,p,q) \ compat_in(A,r,p,q)" +proof - + have "d\A \ d\M" for d + using transitivity \A\M\ by simp + moreover from this + have "d\A \ \d, t\ \ M" if "t\M" for t d + using that pair_in_M_iff by simp + ultimately + show ?thesis + unfolding is_compat_in_def compat_in_def + using assms pair_in_M_iff transitivity by auto +qed + +definition + compat_in_fm :: "[i,i,i,i] \ i" where + "compat_in_fm(A,r,p,q) \ + Exists(And(Member(0,succ(A)),Exists(And(pair_fm(1,p#+2,0), + And(Member(0,r#+2), + Exists(And(pair_fm(2,q#+3,0),Member(0,r#+3))))))))" + +lemma compat_in_fm_type[TC] : + "\ A\nat;r\nat;p\nat;q\nat\ \ compat_in_fm(A,r,p,q)\formula" + unfolding compat_in_fm_def by simp + +lemma sats_compat_in_fm: + assumes + "A\nat" "r\nat" "p\nat" "q\nat" "env\list(M)" + shows + "sats(M,compat_in_fm(A,r,p,q),env) \ + is_compat_in(##M,nth(A, env),nth(r, env),nth(p, env),nth(q, env))" + unfolding compat_in_fm_def is_compat_in_def using assms by simp + +end (* forcing_data *) + +end diff --git a/thys/Forcing/Forcing_Main.thy b/thys/Forcing/Forcing_Main.thy new file mode 100644 --- /dev/null +++ b/thys/Forcing/Forcing_Main.thy @@ -0,0 +1,227 @@ +section\The main theorem\ +theory Forcing_Main + imports + Internal_ZFC_Axioms + Choice_Axiom + Ordinals_In_MG + Succession_Poset + +begin + +subsection\The generic extension is countable\ +(* +\ \Useful missing lemma\ +lemma surj_imp_well_ord: + assumes "well_ord(A,r)" "h \ surj(A,B)" + shows "\s. 
well_ord(B,r)" +*) + +definition + minimum :: "i \ i \ i" where + "minimum(r,B) \ THE b. b\B \ (\y\B. y \ b \ \b, y\ \ r)" + +lemma well_ord_imp_min: + assumes + "well_ord(A,r)" "B \ A" "B \ 0" + shows + "minimum(r,B) \ B" +proof - + from \well_ord(A,r)\ + have "wf[A](r)" + using well_ord_is_wf[OF \well_ord(A,r)\] by simp + with \B\A\ + have "wf[B](r)" + using Sigma_mono Int_mono wf_subset unfolding wf_on_def by simp + then + have "\ x. x \ B \ (\z\B. \y. \y, z\ \ r\B\B \ y \ B)" + unfolding wf_on_def using wf_eq_minimal + by blast + with \B\0\ + obtain z where + B: "z\B \ (\y. \y,z\\r\B\B \ y\B)" + by blast + then + have "z\B \ (\y\B. y \ z \ \z, y\ \ r)" + proof - + { + fix y + assume "y\B" "y\z" + with \well_ord(A,r)\ B \B\A\ + have "\z,y\\r|\y,z\\r|y=z" + unfolding well_ord_def tot_ord_def linear_def by auto + with B \y\B\ \y\z\ + have "\z,y\\r" + by (cases;auto) + } + with B + show ?thesis by blast + qed + have "v = z" if "v\B \ (\y\B. y \ v \ \v, y\ \ r)" for v + using that B by auto + with \z\B \ (\y\B. y \ z \ \z, y\ \ r)\ + show ?thesis + unfolding minimum_def + using the_equality2[OF ex1I[of "\x .x\B \ (\y\B. y \ x \ \x, y\ \ r)" z]] + by auto +qed + +lemma well_ord_surj_imp_lepoll: + assumes "well_ord(A,r)" "h \ surj(A,B)" + shows "B \ A" +proof - + let ?f="\b\B. minimum(r, {a\A. h`a=b})" + have "b \ B \ minimum(r, {a \ A . h ` a = b}) \ {a\A. h`a=b}" for b + proof - + fix b + assume "b\B" + with \h \ surj(A,B)\ + have "\a\A. h`a=b" + unfolding surj_def by blast + then + have "{a\A. h`a=b} \ 0" + by auto + with assms + show "minimum(r,{a\A. h`a=b}) \ {a\A. h`a=b}" + using well_ord_imp_min by blast + qed + moreover from this + have "?f : B \ A" + using lam_type[of B _ "\_.A"] by simp + moreover + have "?f ` w = ?f ` x \ w = x" if "w\B" "x\B" for w x + proof - + from calculation(1)[OF that(1)] calculation(1)[OF that(2)] + have "w = h ` minimum(r, {a \ A . h ` a = w})" + "x = h ` minimum(r, {a \ A . h ` a = x})" + by simp_all + moreover + assume "?f ` w = ?f ` x" + moreover from this and that + have "minimum(r, {a \ A . h ` a = w}) = minimum(r, {a \ A . h ` a = x})" + by simp_all + moreover from calculation(1,2,4) + show "w=x" by simp + qed + ultimately + show ?thesis + unfolding lepoll_def inj_def by blast +qed + +lemma (in forcing_data) surj_nat_MG : + "\f. f \ surj(nat,M[G])" +proof - + let ?f="\n\nat. val(G,enum`n)" + have "x \ nat \ val(G, enum ` x)\ M[G]" for x + using GenExtD[THEN iffD2, of _ G] bij_is_fun[OF M_countable] by force + then + have "?f: nat \ M[G]" + using lam_type[of nat "\n. val(G,enum`n)" "\_.M[G]"] by simp + moreover + have "\n\nat. ?f`n = x" if "x\M[G]" for x + using that GenExtD[of _ G] bij_is_surj[OF M_countable] + unfolding surj_def by auto + ultimately + show ?thesis + unfolding surj_def by blast +qed + +lemma (in G_generic) MG_eqpoll_nat: "M[G] \ nat" +proof - + interpret MG: M_ZF_trans "M[G]" + using Transset_MG generic pairing_in_MG + Union_MG extensionality_in_MG power_in_MG + foundation_in_MG strong_replacement_in_MG[simplified] + separation_in_MG[simplified] infinity_in_MG + by unfold_locales simp_all + obtain f where "f \ surj(nat,M[G])" + using surj_nat_MG by blast + then + have "M[G] \ nat" + using well_ord_surj_imp_lepoll well_ord_Memrel[of nat] + by simp + moreover + have "nat \ M[G]" + using MG.nat_into_M subset_imp_lepoll by auto + ultimately + show ?thesis using eqpollI + by simp +qed + +subsection\The main result\ + +theorem extensions_of_ctms: + assumes + "M \ nat" "Transset(M)" "M \ ZF" + shows + "\N. 
+ M \ N \ N \ nat \ Transset(N) \ N \ ZF \ M\N \ + (\\. Ord(\) \ (\ \ M \ \ \ N)) \ + (M, []\ AC \ N \ ZFC)" +proof - + from \M \ nat\ + obtain enum where "enum \ bij(nat,M)" + using eqpoll_sym unfolding eqpoll_def by blast + with assms + interpret M_ctm M enum + using M_ZF_iff_M_satT + by intro_locales (simp_all add:M_ctm_axioms_def) + interpret ctm_separative "2^<\" seqle 0 M enum + proof (unfold_locales) + fix f + let ?q="seq_upd(f,0)" and ?r="seq_upd(f,1)" + assume "f \ 2^<\" + then + have "?q \s f \ ?r \s f \ ?q \s ?r" + using upd_leI seqspace_separative by auto + moreover from calculation + have "?q \ 2^<\" "?r \ 2^<\" + using seq_upd_type[of f 2] by auto + ultimately + show "\q\2^<\. \r\2^<\. q \s f \ r \s f \ q \s r" + by (rule_tac bexI)+ \ \why the heck auto-tools don't solve this?\ + next + show "2^<\ \ M" using nat_into_M seqspace_closed by simp + next + show "seqle \ M" using seqle_in_M . + qed + from cohen_extension_is_proper + obtain G where "M_generic(G)" + "M \ GenExt(G)" (is "M\?N") + by blast + then + interpret G_generic "2^<\" seqle 0 _ enum G by unfold_locales + interpret MG: M_ZF "?N" + using generic pairing_in_MG + Union_MG extensionality_in_MG power_in_MG + foundation_in_MG strong_replacement_in_MG[simplified] + separation_in_MG[simplified] infinity_in_MG + by unfold_locales simp_all + have "?N \ ZF" + using M_ZF_iff_M_satT[of ?N] MG.M_ZF_axioms by simp + moreover + have "M, []\ AC \ ?N \ ZFC" + proof - + assume "M, [] \ AC" + then + have "choice_ax(##M)" + unfolding ZF_choice_fm_def using ZF_choice_auto by simp + then + have "choice_ax(##?N)" using choice_in_MG by simp + with \?N \ ZF\ + show "?N \ ZFC" + using ZF_choice_auto sats_ZFC_iff_sats_ZF_AC + unfolding ZF_choice_fm_def by simp + qed + moreover + note \M \ ?N\ + moreover + have "Transset(?N)" using Transset_MG . + moreover + have "M \ ?N" using M_subset_MG[OF one_in_G] generic by simp + ultimately + show ?thesis + using Ord_MG_iff MG_eqpoll_nat + by (rule_tac x="?N" in exI, simp) +qed + +end \ No newline at end of file diff --git a/thys/Forcing/Forcing_Notions.thy b/thys/Forcing/Forcing_Notions.thy new file mode 100644 --- /dev/null +++ b/thys/Forcing/Forcing_Notions.thy @@ -0,0 +1,464 @@ +section\Forcing notions\ +text\This theory defines a locale for forcing notions, that is, + preorders with a distinguished maximum element.\ + +theory Forcing_Notions + imports "ZF-Constructible.Relative" +begin + +subsection\Basic concepts\ +text\We say that two elements $p,q$ are + \<^emph>\compatible\ if they have a lower bound in $P$\ +definition compat_in :: "i\i\i\i\o" where + "compat_in(A,r,p,q) \ \d\A . \d,p\\r \ \d,q\\r" + +definition + is_compat_in :: "[i\o,i,i,i,i] \ o" where + "is_compat_in(M,A,r,p,q) \ \d[M]. d\A \ (\dp[M]. pair(M,d,p,dp) \ dp\r \ + (\dq[M]. pair(M,d,q,dq) \ dq\r))" + +lemma compat_inI : + "\ d\A ; \d,p\\r ; \d,g\\r \ \ compat_in(A,r,p,g)" + by (auto simp add: compat_in_def) + +lemma refl_compat: + "\ refl(A,r) ; \p,q\ \ r | p=q | \q,p\ \ r ; p\A ; q\A\ \ compat_in(A,r,p,q)" + by (auto simp add: refl_def compat_inI) + +lemma chain_compat: + "refl(A,r) \ linear(A,r) \ (\p\A.\q\A. compat_in(A,r,p,q))" + by (simp add: refl_compat linear_def) + +lemma subset_fun_image: "f:N\P \ f``N\P" + by (auto simp add: image_fun apply_funtype) + +lemma refl_monot_domain: "refl(B,r) \ A\B \ refl(A,r)" + unfolding refl_def by blast + +definition + antichain :: "i\i\i\o" where + "antichain(P,leq,A) \ A\P \ (\p\A.\q\A.(\ compat_in(P,leq,p,q)))" + +definition + ccc :: "i \ i \ o" where + "ccc(P,leq) \ \A. 
antichain(P,leq,A) \ |A| \ nat" + +locale forcing_notion = + fixes P leq one + assumes one_in_P: "one \ P" + and leq_preord: "preorder_on(P,leq)" + and one_max: "\p\P. \p,one\\leq" +begin + +abbreviation Leq :: "[i, i] \ o" (infixl "\" 50) + where "x \ y \ \x,y\\leq" + +lemma refl_leq: + "r\P \ r\r" + using leq_preord unfolding preorder_on_def refl_def by simp + +text\A set $D$ is \<^emph>\dense\ if every element $p\in P$ has a lower +bound in $D$.\ +definition + dense :: "i\o" where + "dense(D) \ \p\P. \d\D . d\p" + +text\There is also a weaker definition which asks for +a lower bound in $D$ only for the elements below some fixed +element $q$.\ +definition + dense_below :: "i\i\o" where + "dense_below(D,q) \ \p\P. p\q \ (\d\D. d\P \ d\p)" + +lemma P_dense: "dense(P)" + by (insert leq_preord, auto simp add: preorder_on_def refl_def dense_def) + +definition + increasing :: "i\o" where + "increasing(F) \ \x\F. \ p \ P . x\p \ p\F" + +definition + compat :: "i\i\o" where + "compat(p,q) \ compat_in(P,leq,p,q)" + +lemma leq_transD: "a\b \ b\c \ a \ P\ b \ P\ c \ P\ a\c" + using leq_preord trans_onD unfolding preorder_on_def by blast + +lemma leq_transD': "A\P \ a\b \ b\c \ a \ A \ b \ P\ c \ P\ a\c" + using leq_preord trans_onD subsetD unfolding preorder_on_def by blast + + +lemma leq_reflI: "p\P \ p\p" + using leq_preord unfolding preorder_on_def refl_def by blast + +lemma compatD[dest!]: "compat(p,q) \ \d\P. d\p \ d\q" + unfolding compat_def compat_in_def . + +abbreviation Incompatible :: "[i, i] \ o" (infixl "\" 50) + where "p \ q \ \ compat(p,q)" + +lemma compatI[intro!]: "d\P \ d\p \ d\q \ compat(p,q)" + unfolding compat_def compat_in_def by blast + +lemma denseD [dest]: "dense(D) \ p\P \ \d\D. d\ p" + unfolding dense_def by blast + +lemma denseI [intro!]: "\ \p. p\P \ \d\D. d\ p \ \ dense(D)" + unfolding dense_def by blast + +lemma dense_belowD [dest]: + assumes "dense_below(D,p)" "q\P" "q\p" + shows "\d\D. d\P \ d\q" + using assms unfolding dense_below_def by simp + (*obtains d where "d\D" "d\P" "d\q" + using assms unfolding dense_below_def by blast *) + +lemma dense_belowI [intro!]: + assumes "\q. q\P \ q\p \ \d\D. d\P \ d\q" + shows "dense_below(D,p)" + using assms unfolding dense_below_def by simp + +lemma dense_below_cong: "p\P \ D = D' \ dense_below(D,p) \ dense_below(D',p)" + by blast + +lemma dense_below_cong': "p\P \ \\x. x\P \ Q(x) \ Q'(x)\ \ + dense_below({q\P. Q(q)},p) \ dense_below({q\P. Q'(q)},p)" + by blast + +lemma dense_below_mono: "p\P \ D \ D' \ dense_below(D,p) \ dense_below(D',p)" + by blast + +lemma dense_below_under: + assumes "dense_below(D,p)" "p\P" "q\P" "q\p" + shows "dense_below(D,q)" + using assms leq_transD by blast + +lemma ideal_dense_below: + assumes "\q. q\P \ q\p \ q\D" + shows "dense_below(D,p)" + using assms leq_reflI by blast + +lemma dense_below_dense_below: + assumes "dense_below({q\P. dense_below(D,q)},p)" "p\P" + shows "dense_below(D,p)" + using assms leq_transD leq_reflI by blast + (* Long proof *) + (* unfolding dense_below_def +proof (intro ballI impI) + fix r + assume "r\P" \r\p\ + with assms + obtain q where "q\P" "q\r" "dense_below(D,q)" + using assms by auto + moreover from this + obtain d where "d\P" "d\q" "d\D" + using assms leq_preord unfolding preorder_on_def refl_def by blast + moreover + note \r\P\ + ultimately + show "\d\D. 
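+
+(* In standard notation: $dense(D)$ holds iff $\forall p\in P.\,\exists d\in D.\; d\preceq p$,
+   while $dense\_below(D,q)$ holds iff $\forall p\preceq q.\,\exists d\in D\cap P.\; d\preceq p$,
+   i.e. density is only required underneath the fixed condition $q$. *)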
d \ P \ d\ r" + using leq_preord trans_onD unfolding preorder_on_def by blast +qed *) + +definition + antichain :: "i\o" where + "antichain(A) \ A\P \ (\p\A.\q\A.(\compat(p,q)))" + +text\A filter is an increasing set $G$ with all its elements +being compatible in $G$.\ +definition + filter :: "i\o" where + "filter(G) \ G\P \ increasing(G) \ (\p\G. \q\G. compat_in(G,leq,p,q))" + +lemma filterD : "filter(G) \ x \ G \ x \ P" + by (auto simp add : subsetD filter_def) + +lemma filter_leqD : "filter(G) \ x \ G \ y \ P \ x\y \ y \ G" + by (simp add: filter_def increasing_def) + +lemma filter_imp_compat: "filter(G) \ p\G \ q\G \ compat(p,q)" + unfolding filter_def compat_in_def compat_def by blast + +lemma low_bound_filter: \ \says the compatibility is attained inside G\ + assumes "filter(G)" and "p\G" and "q\G" + shows "\r\G. r\p \ r\q" + using assms + unfolding compat_in_def filter_def by blast + +text\We finally introduce the upward closure of a set +and prove that the closure of $A$ is a filter if its elements are +compatible in $A$.\ +definition + upclosure :: "i\i" where + "upclosure(A) \ {p\P.\a\A. a\p}" + +lemma upclosureI [intro] : "p\P \ a\A \ a\p \ p\upclosure(A)" + by (simp add:upclosure_def, auto) + +lemma upclosureE [elim] : + "p\upclosure(A) \ (\x a. x\P \ a\A \ a\x \ R) \ R" + by (auto simp add:upclosure_def) + +lemma upclosureD [dest] : + "p\upclosure(A) \ \a\A.(a\p) \ p\P" + by (simp add:upclosure_def) + +lemma upclosure_increasing : + assumes "A\P" + shows "increasing(upclosure(A))" + unfolding increasing_def upclosure_def + using leq_transD'[OF \A\P\] by auto + +lemma upclosure_in_P: "A \ P \ upclosure(A) \ P" + using subsetI upclosure_def by simp + +lemma A_sub_upclosure: "A \ P \ A\upclosure(A)" + using subsetI leq_preord + unfolding upclosure_def preorder_on_def refl_def by auto + +lemma elem_upclosure: "A\P \ x\A \ x\upclosure(A)" + by (blast dest:A_sub_upclosure) + +lemma closure_compat_filter: + assumes "A\P" "(\p\A.\q\A. compat_in(A,leq,p,q))" + shows "filter(upclosure(A))" + unfolding filter_def +proof(auto) + show "increasing(upclosure(A))" + using assms upclosure_increasing by simp +next + let ?UA="upclosure(A)" + show "compat_in(upclosure(A), leq, p, q)" if "p\?UA" "q\?UA" for p q + proof - + from that + obtain a b where 1:"a\A" "b\A" "a\p" "b\q" "p\P" "q\P" + using upclosureD[OF \p\?UA\] upclosureD[OF \q\?UA\] by auto + with assms(2) + obtain d where "d\A" "d\a" "d\b" + unfolding compat_in_def by auto + with 1 + have 2:"d\p" "d\q" "d\?UA" + using A_sub_upclosure[THEN subsetD] \A\P\ + leq_transD'[of A d a] leq_transD'[of A d b] by auto + then + show ?thesis unfolding compat_in_def by auto + qed +qed + +lemma aux_RS1: "f \ N \ P \ n\N \ f`n \ upclosure(f ``N)" + using elem_upclosure[OF subset_fun_image] image_fun + by (simp, blast) + +lemma decr_succ_decr: + assumes "f \ nat \ P" "preorder_on(P,leq)" + "\n\nat. \f ` succ(n), f ` n\ \ leq" + "m\nat" + shows "n\nat \ n\m \ \f ` m, f ` n\ \ leq" + using \m\_\ +proof(induct m) + case 0 + then show ?case using assms leq_reflI by simp +next + case (succ x) + then + have 1:"f`succ(x) \ f`x" "f`n\P" "f`x\P" "f`succ(x)\P" + using assms by simp_all + consider (lt) "n nat \ P" + "\n\nat. 
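+
+(* Spelled out: $filter(G)$ means that $G\subseteq P$, $G$ is upward closed, and any two
+   elements of $G$ are compatible inside $G$; $upclosure(A)=\{p\in P:\exists a\in A.\ a\preceq p\}$
+   is, for $A\subseteq P$, the smallest upward closed subset of $P$ containing $A$. *)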
\f ` succ(n), f ` n\ \ leq" + "trans[P](leq)" + shows "linear(f `` nat, leq)" +proof - + have "preorder_on(P,leq)" + unfolding preorder_on_def using assms by simp + { + fix n m + assume "n\nat" "m\nat" + then + have "f`m \ f`n \ f`n \ f`m" + proof(cases "m\n") + case True + with \n\_\ \m\_\ + show ?thesis + using decr_succ_decr[of f n m] assms leI \preorder_on(P,leq)\ by simp + next + case False + with \n\_\ \m\_\ + show ?thesis + using decr_succ_decr[of f m n] assms leI not_le_iff_lt \preorder_on(P,leq)\ by simp + qed + } + then + show ?thesis + unfolding linear_def using ball_image_simp assms by auto +qed + +end (* forcing_notion *) + +subsection\Towards Rasiowa-Sikorski Lemma (RSL)\ +locale countable_generic = forcing_notion + + fixes \ + assumes countable_subs_of_P: "\ \ nat\Pow(P)" + and seq_of_denses: "\n \ nat. dense(\`n)" + +begin + +definition + D_generic :: "i\o" where + "D_generic(G) \ filter(G) \ (\n\nat.(\`n)\G\0)" + +text\The next lemma identifies a sufficient condition for obtaining +RSL.\ +lemma RS_sequence_imp_rasiowa_sikorski: + assumes + "p\P" "f : nat\P" "f ` 0 = p" + "\n. n\nat \ f ` succ(n)\ f ` n \ f ` succ(n) \ \ ` n" + shows + "\G. p\G \ D_generic(G)" +proof - + note assms + moreover from this + have "f``nat \ P" + by (simp add:subset_fun_image) + moreover from calculation + have "refl(f``nat, leq) \ trans[P](leq)" + using leq_preord unfolding preorder_on_def by (blast intro:refl_monot_domain) + moreover from calculation + have "\n\nat. f ` succ(n)\ f ` n" by (simp) + moreover from calculation + have "linear(f``nat, leq)" + using leq_preord and decr_seq_linear unfolding preorder_on_def by (blast) + moreover from calculation + have "(\p\f``nat.\q\f``nat. compat_in(f``nat,leq,p,q))" + using chain_compat by (auto) + ultimately + have "filter(upclosure(f``nat))" (is "filter(?G)") + using closure_compat_filter by simp + moreover + have "\n\nat. \ ` n \ ?G \ 0" + proof + fix n + assume "n\nat" + with assms + have "f`succ(n) \ ?G \ f`succ(n) \ \ ` n" + using aux_RS1 by simp + then + show "\ ` n \ ?G \ 0" by blast + qed + moreover from assms + have "p \ ?G" + using aux_RS1 by auto + ultimately + show ?thesis unfolding D_generic_def by auto +qed + +end (* countable_generic *) + +text\Now, the following recursive definition will fulfill the +requirements of lemma \<^term>\RS_sequence_imp_rasiowa_sikorski\ \ + +consts RS_seq :: "[i,i,i,i,i,i] \ i" +primrec + "RS_seq(0,P,leq,p,enum,\) = p" + "RS_seq(succ(n),P,leq,p,enum,\) = + enum`(\ m. \enum`m, RS_seq(n,P,leq,p,enum,\)\ \ leq \ enum`m \ \ ` n)" + +context countable_generic +begin + +lemma preimage_rangeD: + assumes "f\Pi(A,B)" "b \ range(f)" + shows "\a\A. f`a = b" + using assms apply_equality[OF _ assms(1), of _ b] domain_type[OF _ assms(1)] by auto + +lemma countable_RS_sequence_aux: + fixes p enum + defines "f(n) \ RS_seq(n,P,leq,p,enum,\)" + and "Q(q,k,m) \ enum`m\ q \ enum`m \ \ ` k" + assumes "n\nat" "p\P" "P \ range(enum)" "enum:nat\M" + "\x k. x\P \ k\nat \ \q\P. 
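+
+(* Rasiowa-Sikorski, informally: given $p\in P$ and a countable family $\langle D_n : n\in\omega\rangle$
+   of dense sets, any decreasing sequence with $f(0)=p$, $f(n+1)\preceq f(n)$ and $f(n+1)\in D_n$
+   generates, by upward closure, a filter containing $p$ that meets every $D_n$. *)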
q\ x \ q \ \ ` k" + shows + "f(succ(n)) \ P \ f(succ(n))\ f(n) \ f(succ(n)) \ \ ` n" + using \n\nat\ +proof (induct) + case 0 + from assms + obtain q where "q\P" "q\ p" "q \ \ ` 0" by blast + moreover from this and \P \ range(enum)\ + obtain m where "m\nat" "enum`m = q" + using preimage_rangeD[OF \enum:nat\M\] by blast + moreover + have "\`0 \ P" + using apply_funtype[OF countable_subs_of_P] by simp + moreover note \p\P\ + ultimately + show ?case + using LeastI[of "Q(p,0)" m] unfolding Q_def f_def by auto +next + case (succ n) + with assms + obtain q where "q\P" "q\ f(succ(n))" "q \ \ ` succ(n)" by blast + moreover from this and \P \ range(enum)\ + obtain m where "m\nat" "enum`m\ f(succ(n))" "enum`m \ \ ` succ(n)" + using preimage_rangeD[OF \enum:nat\M\] by blast + moreover note succ + moreover from calculation + have "\`succ(n) \ P" + using apply_funtype[OF countable_subs_of_P] by auto + ultimately + show ?case + using LeastI[of "Q(f(succ(n)),succ(n))" m] unfolding Q_def f_def by auto +qed + +lemma countable_RS_sequence: + fixes p enum + defines "f \ \n\nat. RS_seq(n,P,leq,p,enum,\)" + and "Q(q,k,m) \ enum`m\ q \ enum`m \ \ ` k" + assumes "n\nat" "p\P" "P \ range(enum)" "enum:nat\M" + shows + "f`0 = p" "f`succ(n)\ f`n \ f`succ(n) \ \ ` n" "f`succ(n) \ P" +proof - + from assms + show "f`0 = p" by simp + { + fix x k + assume "x\P" "k\nat" + then + have "\q\P. q\ x \ q \ \ ` k" + using seq_of_denses apply_funtype[OF countable_subs_of_P] + unfolding dense_def by blast + } + with assms + show "f`succ(n)\ f`n \ f`succ(n) \ \ ` n" "f`succ(n)\P" + unfolding f_def using countable_RS_sequence_aux by simp_all +qed + +lemma RS_seq_type: + assumes "n \ nat" "p\P" "P \ range(enum)" "enum:nat\M" + shows "RS_seq(n,P,leq,p,enum,\) \ P" + using assms countable_RS_sequence(1,3) + by (induct;simp) + +lemma RS_seq_funtype: + assumes "p\P" "P \ range(enum)" "enum:nat\M" + shows "(\n\nat. RS_seq(n,P,leq,p,enum,\)): nat \ P" + using assms lam_type RS_seq_type by auto + +lemmas countable_rasiowa_sikorski = + RS_sequence_imp_rasiowa_sikorski[OF _ RS_seq_funtype countable_RS_sequence(1,2)] +end (* countable_generic *) + +end diff --git a/thys/Forcing/Forcing_Theorems.thy b/thys/Forcing/Forcing_Theorems.thy new file mode 100644 --- /dev/null +++ b/thys/Forcing/Forcing_Theorems.thy @@ -0,0 +1,1551 @@ +section\The Forcing Theorems\ + +theory Forcing_Theorems + imports + Forces_Definition + +begin + +context forcing_data +begin + +subsection\The forcing relation in context\ + +abbreviation Forces :: "[i, i, i] \ o" ("_ \ _ _" [36,36,36] 60) where + "p \ \ env \ M, ([p,P,leq,one] @ env) \ forces(\)" + +lemma Collect_forces : + assumes + fty: "\\formula" and + far: "arity(\)\length(env)" and + envty: "env\list(M)" + shows + "{p\P . p \ \ env} \ M" +proof - + have "z\P \ z\M" for z + using P_in_M transitivity[of z P] by simp + moreover + have "separation(##M,\p. (p \ \ env))" + using separation_ax arity_forces far fty P_in_M leq_in_M one_in_M envty arity_forces_le + by simp + then + have "Collect(P,\p. (p \ \ env))\M" + using separation_closed P_in_M by simp + then show ?thesis by simp +qed + +lemma forces_mem_iff_dense_below: "p\P \ forces_mem(p,t1,t2) \ dense_below( + {q\P. \s. \r. 
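+
+(* The notation $p \Vdash \phi\ env$ just introduced abbreviates satisfaction, in the ground
+   model, of the internalized formula $forces(\phi)$ under the environment $[p,P,leq,one]$ @ $env$;
+   all forcing lemmas below are stated in terms of this relation. *)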
r\P \ \s,r\ \ t2 \ q\r \ forces_eq(q,t1,s)} + ,p)" + using def_forces_mem[of p t1 t2] by blast + +subsection\Kunen 2013, Lemma IV.2.37(a)\ + +lemma strengthening_eq: + assumes "p\P" "r\P" "r\p" "forces_eq(p,t1,t2)" + shows "forces_eq(r,t1,t2)" + using assms def_forces_eq[of _ t1 t2] leq_transD by blast +(* Long proof *) +(* +proof - + { + fix s q + assume "q\ r" "q\P" + with assms + have "q\p" + using leq_preord unfolding preorder_on_def trans_on_def by blast + moreover + note \q\P\ assms + moreover + assume "s\domain(t1) \ domain(t2)" + ultimately + have "forces_mem(q, s, t1) \ forces_mem(q, s, t2)" + using def_forces_eq[of p t1 t2] by simp + } + with \r\P\ + show ?thesis using def_forces_eq[of r t1 t2] by blast +qed +*) + +subsection\Kunen 2013, Lemma IV.2.37(a)\ +lemma strengthening_mem: + assumes "p\P" "r\P" "r\p" "forces_mem(p,t1,t2)" + shows "forces_mem(r,t1,t2)" + using assms forces_mem_iff_dense_below dense_below_under by auto + +subsection\Kunen 2013, Lemma IV.2.37(b)\ +lemma density_mem: + assumes "p\P" + shows "forces_mem(p,t1,t2) \ dense_below({q\P. forces_mem(q,t1,t2)},p)" +proof + assume "forces_mem(p,t1,t2)" + with assms + show "dense_below({q\P. forces_mem(q,t1,t2)},p)" + using forces_mem_iff_dense_below strengthening_mem[of p] ideal_dense_below by auto +next + assume "dense_below({q \ P . forces_mem(q, t1, t2)}, p)" + with assms + have "dense_below({q\P. + dense_below({q'\P. \s r. r \ P \ \s,r\\t2 \ q'\r \ forces_eq(q',t1,s)},q) + },p)" + using forces_mem_iff_dense_below by simp + with assms + show "forces_mem(p,t1,t2)" + using dense_below_dense_below forces_mem_iff_dense_below[of p t1 t2] by blast +qed + +lemma aux_density_eq: + assumes + "dense_below( + {q'\P. \q. q\P \ q\q' \ forces_mem(q,s,t1) \ forces_mem(q,s,t2)} + ,p)" + "forces_mem(q,s,t1)" "q\P" "p\P" "q\p" + shows + "dense_below({r\P. forces_mem(r,s,t2)},q)" +proof + fix r + assume "r\P" "r\q" + moreover from this and \p\P\ \q\p\ \q\P\ + have "r\p" + using leq_transD by simp + moreover + note \forces_mem(q,s,t1)\ \dense_below(_,p)\ \q\P\ + ultimately + obtain q1 where "q1\r" "q1\P" "forces_mem(q1,s,t2)" + using strengthening_mem[of q _ s t1] leq_reflI leq_transD[of _ r q] by blast + then + show "\d\{r \ P . forces_mem(r, s, t2)}. d \ P \ d\ r" + by blast +qed + +(* Kunen 2013, Lemma IV.2.37(b) *) +lemma density_eq: + assumes "p\P" + shows "forces_eq(p,t1,t2) \ dense_below({q\P. forces_eq(q,t1,t2)},p)" +proof + assume "forces_eq(p,t1,t2)" + with \p\P\ + show "dense_below({q\P. forces_eq(q,t1,t2)},p)" + using strengthening_eq ideal_dense_below by auto +next + assume "dense_below({q\P. forces_eq(q,t1,t2)},p)" + { + fix s q + let ?D1="{q'\P. \s\domain(t1) \ domain(t2). \q. q \ P \ q\q' \ + forces_mem(q,s,t1)\forces_mem(q,s,t2)}" + let ?D2="{q'\P. \q. q\P \ q\q' \ forces_mem(q,s,t1) \ forces_mem(q,s,t2)}" + assume "s\domain(t1) \ domain(t2)" + then + have "?D1\?D2" by blast + with \dense_below(_,p)\ + have "dense_below({q'\P. \s\domain(t1) \ domain(t2). \q. q \ P \ q\q' \ + forces_mem(q,s,t1)\forces_mem(q,s,t2)},p)" + using dense_below_cong'[OF \p\P\ def_forces_eq[of _ t1 t2]] by simp + with \p\P\ \?D1\?D2\ + have "dense_below({q'\P. \q. q\P \ q\q' \ + forces_mem(q,s,t1) \ forces_mem(q,s,t2)},p)" + using dense_below_mono by simp + moreover from this (* Automatic tools can't handle this symmetry + in order to apply aux_density_eq below *) + have "dense_below({q'\P. \q. 
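+
+(* Kunen 2013, IV.2.37, informally: (a) forcing of atomic formulas is preserved under
+   strengthening of the condition ($r\preceq p$ and $p$ forces imply $r$ forces), and (b) a
+   condition forces an atomic formula iff the conditions forcing it are dense below it. *)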
q\P \ q\q' \ + forces_mem(q,s,t2) \ forces_mem(q,s,t1)},p)" + by blast + moreover + assume "q \ P" "q\p" + moreover + note \p\P\ + ultimately (*We can omit the next step but it is slower *) + have "forces_mem(q,s,t1) \ dense_below({r\P. forces_mem(r,s,t2)},q)" + "forces_mem(q,s,t2) \ dense_below({r\P. forces_mem(r,s,t1)},q)" + using aux_density_eq by simp_all + then + have "forces_mem(q, s, t1) \ forces_mem(q, s, t2)" + using density_mem[OF \q\P\] by blast + } + with \p\P\ + show "forces_eq(p,t1,t2)" using def_forces_eq by blast +qed + +subsection\Kunen 2013, Lemma IV.2.38\ +lemma not_forces_neq: + assumes "p\P" + shows "forces_eq(p,t1,t2) \ \ (\q\P. q\p \ forces_neq(q,t1,t2))" + using assms density_eq unfolding forces_neq_def by blast + + +lemma not_forces_nmem: + assumes "p\P" + shows "forces_mem(p,t1,t2) \ \ (\q\P. q\p \ forces_nmem(q,t1,t2))" + using assms density_mem unfolding forces_nmem_def by blast + + +(* Use the newer versions in Forces_Definition! *) +(* (and adequate the rest of the code to them) *) + +lemma sats_forces_Nand': + assumes + "p\P" "\\formula" "\\formula" "env \ list(M)" + shows + "M, [p,P,leq,one] @ env \ forces(Nand(\,\)) \ + \(\q\M. q\P \ is_leq(##M,leq,q,p) \ + M, [q,P,leq,one] @ env \ forces(\) \ + M, [q,P,leq,one] @ env \ forces(\))" + using assms sats_forces_Nand[OF assms(2-4) transitivity[OF \p\P\]] + P_in_M leq_in_M one_in_M unfolding forces_def + by simp + +lemma sats_forces_Neg': + assumes + "p\P" "env \ list(M)" "\\formula" + shows + "M, [p,P,leq,one] @ env \ forces(Neg(\)) \ + \(\q\M. q\P \ is_leq(##M,leq,q,p) \ + M, [q,P,leq,one]@env \ forces(\))" + using assms sats_forces_Neg transitivity + P_in_M leq_in_M one_in_M unfolding forces_def + by (simp, blast) + +lemma sats_forces_Forall': + assumes + "p\P" "env \ list(M)" "\\formula" + shows + "M,[p,P,leq,one] @ env \ forces(Forall(\)) \ + (\x\M. M, [p,P,leq,one,x] @ env \ forces(\))" + using assms sats_forces_Forall transitivity + P_in_M leq_in_M one_in_M sats_ren_forces_forall unfolding forces_def + by simp + +subsection\The relation of forcing and atomic formulas\ +lemma Forces_Equal: + assumes + "p\P" "t1\M" "t2\M" "env\list(M)" "nth(n,env) = t1" "nth(m,env) = t2" "n\nat" "m\nat" + shows + "(p \ Equal(n,m) env) \ forces_eq(p,t1,t2)" + using assms sats_forces_Equal forces_eq_abs transitivity P_in_M + by simp + +lemma Forces_Member: + assumes + "p\P" "t1\M" "t2\M" "env\list(M)" "nth(n,env) = t1" "nth(m,env) = t2" "n\nat" "m\nat" + shows + "(p \ Member(n,m) env) \ forces_mem(p,t1,t2)" + using assms sats_forces_Member forces_mem_abs transitivity P_in_M + by simp + +lemma Forces_Neg: + assumes + "p\P" "env \ list(M)" "\\formula" + shows + "(p \ Neg(\) env) \ \(\q\M. q\P \ q\p \ (q \ \ env))" + using assms sats_forces_Neg' transitivity + P_in_M pair_in_M_iff leq_in_M leq_abs by simp + +subsection\The relation of forcing and connectives\ + +lemma Forces_Nand: + assumes + "p\P" "env \ list(M)" "\\formula" "\\formula" + shows + "(p \ Nand(\,\) env) \ \(\q\M. q\P \ q\p \ (q \ \ env) \ (q \ \ env))" + using assms sats_forces_Nand' transitivity + P_in_M pair_in_M_iff leq_in_M leq_abs by simp + +lemma Forces_And_aux: + assumes + "p\P" "env \ list(M)" "\\formula" "\\formula" + shows + "p \ And(\,\) env \ + (\q\M. q\P \ q\p \ (\r\M. r\P \ r\q \ (r \ \ env) \ (r \ \ env)))" + unfolding And_def using assms Forces_Neg Forces_Nand by (auto simp only:) + +lemma Forces_And_iff_dense_below: + assumes + "p\P" "env \ list(M)" "\\formula" "\\formula" + shows + "(p \ And(\,\) env) \ dense_below({r\P. 
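+
+(* For the connectives: $p$ forces $Neg(\phi)$ iff no condition extending $p$ forces $\phi$,
+   and $p$ forces $And(\phi,\psi)$ iff the conditions forcing both $\phi$ and $\psi$ are dense
+   below $p$. *)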
(r \ \ env) \ (r \ \ env) },p)" + unfolding dense_below_def using Forces_And_aux assms + by (auto dest:transitivity[OF _ P_in_M]; rename_tac q; drule_tac x=q in bspec)+ + +lemma Forces_Forall: + assumes + "p\P" "env \ list(M)" "\\formula" + shows + "(p \ Forall(\) env) \ (\x\M. (p \ \ ([x] @ env)))" + using sats_forces_Forall' assms by simp + +(* "x\val(G,\) \ \\. \p\G. \\,p\\\ \ val(G,\) = x" *) +bundle some_rules = elem_of_val_pair [dest] SepReplace_iff [simp del] SepReplace_iff[iff] + +context + includes some_rules +begin + +lemma elem_of_valI: "\\. \p\P. p\G \ \\,p\\\ \ val(G,\) = x \ x\val(G,\)" + by (subst def_val, auto) + +lemma GenExtD: "x\M[G] \ (\\\M. x = val(G,\))" + unfolding GenExt_def by simp + +lemma left_in_M : "tau\M \ \a,b\\tau \ a\M" + using fst_snd_closed[of "\a,b\"] transitivity by auto + + +subsection\Kunen 2013, Lemma IV.2.29\ +lemma generic_inter_dense_below: + assumes "D\M" "M_generic(G)" "dense_below(D,p)" "p\G" + shows "D \ G \ 0" +proof - + let ?D="{q\P. p\q \ q\D}" + have "dense(?D)" + proof + fix r + assume "r\P" + show "\d\{q \ P . p \ q \ q \ D}. d \ r" + proof (cases "p \ r") + case True + with \r\P\ + (* Automatic tools can't handle this case for some reason... *) + show ?thesis using leq_reflI[of r] by (intro bexI) (blast+) + next + case False + then + obtain s where "s\P" "s\p" "s\r" by blast + with assms \r\P\ + show ?thesis + using dense_belowD[OF assms(3), of s] leq_transD[of _ s r] + by blast + qed + qed + have "?D\P" by auto + (* D\M *) + let ?d_fm="Or(Neg(compat_in_fm(1,2,3,0)),Member(0,4))" + have 1:"p\M" + using \M_generic(G)\ M_genericD transitivity[OF _ P_in_M] + \p\G\ by simp + moreover + have "?d_fm\formula" by simp + moreover + have "arity(?d_fm) = 5" unfolding compat_in_fm_def pair_fm_def upair_fm_def + by (simp add: nat_union_abs1 Un_commute) + moreover + have "(M, [q,P,leq,p,D] \ ?d_fm) \ (\ is_compat_in(##M,P,leq,p,q) \ q\D)" + if "q\M" for q + using that sats_compat_in_fm P_in_M leq_in_M 1 \D\M\ by simp + moreover + have "(\ is_compat_in(##M,P,leq,p,q) \ q\D) \ p\q \ q\D" if "q\M" for q + unfolding compat_def using that compat_in_abs P_in_M leq_in_M 1 by simp + ultimately + have "?D\M" using Collect_in_M_4p[of ?d_fm _ _ _ _ _"\x y z w h. w\x \ x\h"] + P_in_M leq_in_M \D\M\ by simp + note asm = \M_generic(G)\ \dense(?D)\ \?D\P\ \?D\M\ + obtain x where "x\G" "x\?D" using M_generic_denseD[OF asm] + by force (* by (erule bexE) does it, but the other automatic tools don't *) + moreover from this and \M_generic(G)\ + have "x\D" + using M_generic_compatD[OF _ \p\G\, of x] + leq_reflI compatI[of _ p x] by force + ultimately + show ?thesis by auto +qed + +subsection\Auxiliary results for Lemma IV.2.40(a)\ +lemma IV240a_mem_Collect: + assumes + "\\M" "\\M" + shows + "{q\P. \\. \r. r\P \ \\,r\ \ \ \ q\r \ forces_eq(q,\,\)}\M" +proof - + let ?rel_pred= "\M x a1 a2 a3 a4. \\[M]. \r[M]. \\r[M]. + r\a1 \ pair(M,\,r,\r) \ \r\a4 \ is_leq(M,a2,x,r) \ is_forces_eq'(M,a1,a2,x,a3,\)" + let ?\="Exists(Exists(Exists(And(Member(1,4),And(pair_fm(2,1,0), + And(Member(0,7),And(leq_fm(5,3,1),forces_eq_fm(4,5,3,6,2))))))))" + have "\\M \ r\M" if "\\, r\ \ \" for \ r + using that \\\M\ pair_in_M_iff transitivity[of "\\,r\" \] by simp + then + have "?rel_pred(##M,q,P,leq,\,\) \ (\\. \r. 
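+
+(* Kunen 2013, IV.2.29, informally: if $D\in M$ is dense below some $p\in G$ and $G$ is
+   $M$-generic, then already $D\cap G\neq\emptyset$; genericity with respect to dense sets thus
+   extends to sets that are only dense below a member of the filter. *)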
r\P \ \\,r\ \ \ \ q\r \ forces_eq(q,\,\))" + if "q\M" for q + unfolding forces_eq_def using assms that P_in_M leq_in_M leq_abs forces_eq'_abs pair_in_M_iff + by auto + moreover + have "(M, [q,P,leq,\,\] \ ?\) \ ?rel_pred(##M,q,P,leq,\,\)" if "q\M" for q + using assms that sats_forces_eq'_fm sats_leq_fm P_in_M leq_in_M by simp + moreover + have "?\\formula" by simp + moreover + have "arity(?\)=5" + unfolding leq_fm_def pair_fm_def upair_fm_def + using arity_forces_eq_fm by (simp add:nat_simp_union Un_commute) + ultimately + show ?thesis + unfolding forces_eq_def using P_in_M leq_in_M assms + Collect_in_M_4p[of ?\ _ _ _ _ _ + "\q a1 a2 a3 a4. \\. \r. r\a1 \ \\,r\ \ \ \ q\r \ forces_eq'(a1,a2,q,a3,\)"] by simp +qed + +(* Lemma IV.2.40(a), membership *) +lemma IV240a_mem: + assumes + "M_generic(G)" "p\G" "\\M" "\\M" "forces_mem(p,\,\)" + "\q \. q\P \ q\G \ \\domain(\) \ forces_eq(q,\,\) \ + val(G,\) = val(G,\)" (* inductive hypothesis *) + shows + "val(G,\)\val(G,\)" +proof (intro elem_of_valI) + let ?D="{q\P. \\. \r. r\P \ \\,r\ \ \ \ q\r \ forces_eq(q,\,\)}" + from \M_generic(G)\ \p\G\ + have "p\P" by blast + moreover + note \\\M\ \\\M\ + ultimately + have "?D \ M" using IV240a_mem_Collect by simp + moreover from assms \p\P\ + have "dense_below(?D,p)" + using forces_mem_iff_dense_below by simp + moreover + note \M_generic(G)\ \p\G\ + ultimately + obtain q where "q\G" "q\?D" using generic_inter_dense_below by blast + then + obtain \ r where "r\P" "\\,r\ \ \" "q\r" "forces_eq(q,\,\)" by blast + moreover from this and \q\G\ assms + have "r \ G" "val(G,\) = val(G,\)" by blast+ + ultimately + show "\ \. \p\P. p \ G \ \\, p\ \ \ \ val(G, \) = val(G, \)" by auto +qed + +(* Example IV.2.36 (next two lemmas) *) +lemma refl_forces_eq:"p\P \ forces_eq(p,x,x)" + using def_forces_eq by simp + +lemma forces_memI: "\\,r\\\ \ p\P \ r\P \ p\r \ forces_mem(p,\,\)" + using refl_forces_eq[of _ \] leq_transD leq_reflI + by (blast intro:forces_mem_iff_dense_below[THEN iffD2]) + +(* Lemma IV.2.40(a), equality, first inclusion *) +lemma IV240a_eq_1st_incl: + assumes + "M_generic(G)" "p\G" "forces_eq(p,\,\)" + and + IH:"\q \. q\P \ q\G \ \\domain(\) \ domain(\) \ + (forces_mem(q,\,\) \ val(G,\) \ val(G,\)) \ + (forces_mem(q,\,\) \ val(G,\) \ val(G,\))" +(* Strong enough for this case: *) +(* IH:"\q \. q\P \ \\domain(\) \ forces_mem(q,\,\) \ + val(G,\) \ val(G,\)" *) + shows + "val(G,\) \ val(G,\)" +proof + fix x + assume "x\val(G,\)" + then + obtain \ r where "\\,r\\\" "r\G" "val(G,\)=x" by blast + moreover from this and \p\G\ \M_generic(G)\ + obtain q where "q\G" "q\p" "q\r" by force + moreover from this and \p\G\ \M_generic(G)\ + have "q\P" "p\P" by blast+ + moreover from calculation and \M_generic(G)\ + have "forces_mem(q,\,\)" + using forces_memI by blast + moreover + note \forces_eq(p,\,\)\ + ultimately + have "forces_mem(q,\,\)" + using def_forces_eq by blast + with \q\P\ \q\G\ IH[of q \] \\\,r\\\\ \val(G,\) = x\ + show "x\val(G,\)" by (blast) +qed + +(* Lemma IV.2.40(a), equality, second inclusion--- COPY-PASTE *) +lemma IV240a_eq_2nd_incl: + assumes + "M_generic(G)" "p\G" "forces_eq(p,\,\)" + and + IH:"\q \. 
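+
+(* Lemma IV.2.40(a), informally: for $p\in G$, if $p$ forces $\tau=\theta$ then
+   $val(G,\tau)=val(G,\theta)$, and if $p$ forces $\tau\in\theta$ then $val(G,\tau)\in val(G,\theta)$;
+   the two statements are proved simultaneously by induction on names. *)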
q\P \ q\G \ \\domain(\) \ domain(\) \ + (forces_mem(q,\,\) \ val(G,\) \ val(G,\)) \ + (forces_mem(q,\,\) \ val(G,\) \ val(G,\))" + shows + "val(G,\) \ val(G,\)" +proof + fix x + assume "x\val(G,\)" + then + obtain \ r where "\\,r\\\" "r\G" "val(G,\)=x" by blast + moreover from this and \p\G\ \M_generic(G)\ + obtain q where "q\G" "q\p" "q\r" by force + moreover from this and \p\G\ \M_generic(G)\ + have "q\P" "p\P" by blast+ + moreover from calculation and \M_generic(G)\ + have "forces_mem(q,\,\)" + using forces_memI by blast + moreover + note \forces_eq(p,\,\)\ + ultimately + have "forces_mem(q,\,\)" + using def_forces_eq by blast + with \q\P\ \q\G\ IH[of q \] \\\,r\\\\ \val(G,\) = x\ + show "x\val(G,\)" by (blast) +qed + +(* Lemma IV.2.40(a), equality, second inclusion--- COPY-PASTE *) +lemma IV240a_eq: + assumes + "M_generic(G)" "p\G" "forces_eq(p,\,\)" + and + IH:"\q \. q\P \ q\G \ \\domain(\) \ domain(\) \ + (forces_mem(q,\,\) \ val(G,\) \ val(G,\)) \ + (forces_mem(q,\,\) \ val(G,\) \ val(G,\))" + shows + "val(G,\) = val(G,\)" + using IV240a_eq_1st_incl[OF assms] IV240a_eq_2nd_incl[OF assms] IH by blast + +subsection\Induction on names\ + +lemma core_induction: + assumes + "\\ \ p. p \ P \ \\q \. \q\P ; \\domain(\)\ \ Q(0,\,\,q)\ \ Q(1,\,\,p)" + "\\ \ p. p \ P \ \\q \. \q\P ; \\domain(\) \ domain(\)\ \ Q(1,\,\,q) \ Q(1,\,\,q)\ \ Q(0,\,\,p)" + "ft \ 2" "p \ P" + shows + "Q(ft,\,\,p)" +proof - + { + fix ft p \ \ + have "Transset(eclose({\,\}))" (is "Transset(?e)") + using Transset_eclose by simp + have "\ \ ?e" "\ \ ?e" + using arg_into_eclose by simp_all + moreover + assume "ft \ 2" "p \ P" + ultimately + have "\ft,\,\,p\\ 2\?e\?e\P" (is "?a\2\?e\?e\P") by simp + then + have "Q(ftype(?a), name1(?a), name2(?a), cond_of(?a))" + using core_induction_aux[of ?e P Q ?a,OF \Transset(?e)\ assms(1,2) \?a\_\] + by (clarify) (blast) + then have "Q(ft,\,\,p)" by (simp add:components_simp) + } + then show ?thesis using assms by simp +qed + +lemma forces_induction_with_conds: + assumes + "\\ \ p. p \ P \ \\q \. \q\P ; \\domain(\)\ \ Q(q,\,\)\ \ R(p,\,\)" + "\\ \ p. p \ P \ \\q \. \q\P ; \\domain(\) \ domain(\)\ \ R(q,\,\) \ R(q,\,\)\ \ Q(p,\,\)" + "p \ P" + shows + "Q(p,\,\) \ R(p,\,\)" +proof - + let ?Q="\ft \ \ p. (ft = 0 \ Q(p,\,\)) \ (ft = 1 \ R(p,\,\))" + from assms(1) + have "\\ \ p. p \ P \ \\q \. \q\P ; \\domain(\)\ \ ?Q(0,\,\,q)\ \ ?Q(1,\,\,p)" + by simp + moreover from assms(2) + have "\\ \ p. p \ P \ \\q \. \q\P ; \\domain(\) \ domain(\)\ \ ?Q(1,\,\,q) \ ?Q(1,\,\,q)\ \ ?Q(0,\,\,p)" + by simp + moreover + note \p\P\ + ultimately + have "?Q(ft,\,\,p)" if "ft\2" for ft + by (rule core_induction[OF _ _ that, of ?Q]) + then + show ?thesis by auto +qed + +lemma forces_induction: + assumes + "\\ \. \\\. \\domain(\) \ Q(\,\)\ \ R(\,\)" + "\\ \. \\\. \\domain(\) \ domain(\) \ R(\,\) \ R(\,\)\ \ Q(\,\)" + shows + "Q(\,\) \ R(\,\)" +proof (intro forces_induction_with_conds[OF _ _ one_in_P ]) + fix \ \ p + assume "q \ P \ \ \ domain(\) \ Q(\, \)" for q \ + with assms(1) + show "R(\,\)" + using one_in_P by simp +next + fix \ \ p + assume "q \ P \ \ \ domain(\) \ domain(\) \ R(\,\) \ R(\,\)" for q \ + with assms(2) + show "Q(\,\)" + using one_in_P by simp +qed + +subsection\Lemma IV.2.40(a), in full\ +lemma IV240a: + assumes + "M_generic(G)" + shows + "(\\M \ \\M \ (\p\G. forces_eq(p,\,\) \ val(G,\) = val(G,\))) \ + (\\M \ \\M \ (\p\G. 
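+
+(* The induction principle forces_induction, informally: to obtain the membership property for
+   $\tau\in\theta$ one may assume the equality property of $\tau$ with each $\sigma\in domain(\theta)$,
+   and to obtain the equality property for $\tau=\theta$ one may assume the membership property of
+   each $\sigma\in domain(\tau)\cup domain(\theta)$ in both $\tau$ and $\theta$. *)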
forces_mem(p,\,\) \ val(G,\) \ val(G,\)))" + (is "?Q(\,\) \ ?R(\,\)") +proof (intro forces_induction[of ?Q ?R] impI) + fix \ \ + assume "\\M" "\\M" "\\domain(\) \ ?Q(\,\)" for \ + moreover from this + have "\\domain(\) \ forces_eq(q, \, \) \ val(G, \) = val(G, \)" + if "q\P" "q\G" for q \ + using that domain_closed[of \] transitivity by auto + moreover + note assms + ultimately + show "\p\G. forces_mem(p,\,\) \ val(G,\) \ val(G,\)" + using IV240a_mem domain_closed transitivity by (simp) +next + fix \ \ + assume "\\M" "\\M" "\ \ domain(\) \ domain(\) \ ?R(\,\) \ ?R(\,\)" for \ + moreover from this + have IH':"\ \ domain(\) \ domain(\) \ q\G \ + (forces_mem(q, \, \) \ val(G, \) \ val(G, \)) \ + (forces_mem(q, \, \) \ val(G, \) \ val(G, \))" for q \ + by (auto intro: transitivity[OF _ domain_closed[simplified]]) + ultimately + show "\p\G. forces_eq(p,\,\) \ val(G,\) = val(G,\)" + using IV240a_eq[OF assms(1) _ _ IH'] by (simp) +qed + +subsection\Lemma IV.2.40(b)\ +(* Lemma IV.2.40(b), membership *) +lemma IV240b_mem: + assumes + "M_generic(G)" "val(G,\)\val(G,\)" "\\M" "\\M" + and + IH:"\\. \\domain(\) \ val(G,\) = val(G,\) \ + \p\G. forces_eq(p,\,\)" (* inductive hypothesis *) + shows + "\p\G. forces_mem(p,\,\)" +proof - + from \val(G,\)\val(G,\)\ + obtain \ r where "r\G" "\\,r\\\" "val(G,\) = val(G,\)" by auto + moreover from this and IH + obtain p' where "p'\G" "forces_eq(p',\,\)" by blast + moreover + note \M_generic(G)\ + ultimately + obtain p where "p\r" "p\G" "forces_eq(p,\,\)" + using M_generic_compatD strengthening_eq[of p'] by blast + moreover + note \M_generic(G)\ + moreover from calculation + have "forces_eq(q,\,\)" if "q\P" "q\p" for q + using that strengthening_eq by blast + moreover + note \\\,r\\\\ \r\G\ + ultimately + have "r\P \ \\,r\ \ \ \ q\r \ forces_eq(q,\,\)" if "q\P" "q\p" for q + using that leq_transD[of _ p r] by blast + then + have "dense_below({q\P. \s r. r\P \ \s,r\ \ \ \ q\r \ forces_eq(q,\,s)},p)" + using leq_reflI by blast + moreover + note \M_generic(G)\ \p\G\ + moreover from calculation + have "forces_mem(p,\,\)" + using forces_mem_iff_dense_below by blast + ultimately + show ?thesis by blast +qed + +end (* includes *) + +lemma Collect_forces_eq_in_M: + assumes "\ \ M" "\ \ M" + shows "{p\P. forces_eq(p,\,\)} \ M" + using assms Collect_in_M_4p[of "forces_eq_fm(1,2,0,3,4)" P leq \ \ + "\A x p l t1 t2. is_forces_eq(x,t1,t2)" + "\ x p l t1 t2. forces_eq(x,t1,t2)" P] + arity_forces_eq_fm P_in_M leq_in_M sats_forces_eq_fm forces_eq_abs forces_eq_fm_type + by (simp add: nat_union_abs1 Un_commute) + +lemma IV240b_eq_Collects: + assumes "\ \ M" "\ \ M" + shows "{p\P. \\\domain(\) \ domain(\). forces_mem(p,\,\) \ forces_nmem(p,\,\)}\M" and + "{p\P. \\\domain(\) \ domain(\). forces_nmem(p,\,\) \ forces_mem(p,\,\)}\M" +proof - + let ?rel_pred="\M x a1 a2 a3 a4. + \\[M]. \u[M]. \da3[M]. \da4[M]. is_domain(M,a3,da3) \ is_domain(M,a4,da4) \ + union(M,da3,da4,u) \ \\u \ is_forces_mem'(M,a1,a2,x,\,a3) \ + is_forces_nmem'(M,a1,a2,x,\,a4)" + let ?\="Exists(Exists(Exists(Exists(And(domain_fm(7,1),And(domain_fm(8,0), + And(union_fm(1,0,2),And(Member(3,2),And(forces_mem_fm(5,6,4,3,7), + forces_nmem_fm(5,6,4,3,8))))))))))" + have 1:"\\M" if "\\,y\\\" "\\M" for \ \ y + using that pair_in_M_iff transitivity[of "\\,y\" \] by simp + have abs1:"?rel_pred(##M,p,P,leq,\,\) \ + (\\\domain(\) \ domain(\). 
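+
+(* Lemma IV.2.40(b), informally, is the converse direction: if $val(G,\tau)\in val(G,\theta)$
+   (respectively $val(G,\tau)=val(G,\theta)$) then some $p\in G$ forces $\tau\in\theta$
+   (respectively $\tau=\theta$); the equality case proceeds by a density argument. *)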
forces_mem'(P,leq,p,\,\) \ forces_nmem'(P,leq,p,\,\))" + if "p\M" for p + unfolding forces_mem_def forces_nmem_def + using assms that forces_mem'_abs forces_nmem'_abs P_in_M leq_in_M + domain_closed Un_closed + by (auto simp add:1[of _ _ \] 1[of _ _ \]) + have abs2:"?rel_pred(##M,p,P,leq,\,\) \ (\\\domain(\) \ domain(\). + forces_nmem'(P,leq,p,\,\) \ forces_mem'(P,leq,p,\,\))" if "p\M" for p + unfolding forces_mem_def forces_nmem_def + using assms that forces_mem'_abs forces_nmem'_abs P_in_M leq_in_M + domain_closed Un_closed + by (auto simp add:1[of _ _ \] 1[of _ _ \]) + have fsats1:"(M,[p,P,leq,\,\] \ ?\) \ ?rel_pred(##M,p,P,leq,\,\)" if "p\M" for p + using that assms sats_forces_mem'_fm sats_forces_nmem'_fm P_in_M leq_in_M + domain_closed Un_closed by simp + have fsats2:"(M,[p,P,leq,\,\] \ ?\) \ ?rel_pred(##M,p,P,leq,\,\)" if "p\M" for p + using that assms sats_forces_mem'_fm sats_forces_nmem'_fm P_in_M leq_in_M + domain_closed Un_closed by simp + have fty:"?\\formula" by simp + have farit:"arity(?\)=5" + unfolding forces_nmem_fm_def domain_fm_def pair_fm_def upair_fm_def union_fm_def + using arity_forces_mem_fm by (simp add:nat_simp_union Un_commute) + show + "{p \ P . \\\domain(\) \ domain(\). forces_mem(p, \, \) \ forces_nmem(p, \, \)} \ M" + and "{p \ P . \\\domain(\) \ domain(\). forces_nmem(p, \, \) \ forces_mem(p, \, \)} \ M" + unfolding forces_mem_def + using abs1 fty fsats1 farit P_in_M leq_in_M assms forces_nmem + Collect_in_M_4p[of ?\ _ _ _ _ _ + "\x p l a1 a2. (\\\domain(a1) \ domain(a2). forces_mem'(p,l,x,\,a1) \ + forces_nmem'(p,l,x,\,a2))"] + using abs2 fty fsats2 farit P_in_M leq_in_M assms forces_nmem domain_closed Un_closed + Collect_in_M_4p[of ?\ P leq \ \ ?rel_pred + "\x p l a2 a1. (\\\domain(a1) \ domain(a2). forces_nmem'(p,l,x,\,a1) \ + forces_mem'(p,l,x,\,a2))" P] + by simp_all +qed + +(* Lemma IV.2.40(b), equality *) +lemma IV240b_eq: + assumes + "M_generic(G)" "val(G,\) = val(G,\)" "\\M" "\\M" + and + IH:"\\. \\domain(\)\domain(\) \ + (val(G,\)\val(G,\) \ (\q\G. forces_mem(q,\,\))) \ + (val(G,\)\val(G,\) \ (\q\G. forces_mem(q,\,\)))" + (* inductive hypothesis *) + shows + "\p\G. forces_eq(p,\,\)" +proof - + let ?D1="{p\P. forces_eq(p,\,\)}" + let ?D2="{p\P. \\\domain(\) \ domain(\). forces_mem(p,\,\) \ forces_nmem(p,\,\)}" + let ?D3="{p\P. \\\domain(\) \ domain(\). forces_nmem(p,\,\) \ forces_mem(p,\,\)}" + let ?D="?D1 \ ?D2 \ ?D3" + note assms + moreover from this + have "domain(\) \ domain(\)\M" (is "?B\M") using domain_closed Un_closed by auto + moreover from calculation + have "?D2\M" and "?D3\M" using IV240b_eq_Collects by simp_all + ultimately + have "?D\M" using Collect_forces_eq_in_M Un_closed by auto + moreover + have "dense(?D)" + proof + fix p + assume "p\P" + have "\d\P. (forces_eq(d, \, \) \ + (\\\domain(\) \ domain(\). forces_mem(d, \, \) \ forces_nmem(d, \, \)) \ + (\\\domain(\) \ domain(\). 
forces_nmem(d, \, \) \ forces_mem(d, \, \))) \ + d \ p" + proof (cases "forces_eq(p, \, \)") + case True + with \p\P\ + show ?thesis using leq_reflI by blast + next + case False + moreover note \p\P\ + moreover from calculation + obtain \ q where "\\domain(\)\domain(\)" "q\P" "q\p" + "(forces_mem(q, \, \) \ \ forces_mem(q, \, \)) \ + (\ forces_mem(q, \, \) \ forces_mem(q, \, \))" + using def_forces_eq by blast + moreover from this + obtain r where "r\q" "r\P" + "(forces_mem(r, \, \) \ forces_nmem(r, \, \)) \ + (forces_nmem(r, \, \) \ forces_mem(r, \, \))" + using not_forces_nmem strengthening_mem by blast + ultimately + show ?thesis using leq_transD by blast + qed + then + show "\d\?D1 \ ?D2 \ ?D3. d \ p" by blast + qed + moreover + have "?D \ P" + by auto + moreover + note \M_generic(G)\ + ultimately + obtain p where "p\G" "p\?D" + unfolding M_generic_def by blast + then + consider + (1) "forces_eq(p,\,\)" | + (2) "\\\domain(\) \ domain(\). forces_mem(p,\,\) \ forces_nmem(p,\,\)" | + (3) "\\\domain(\) \ domain(\). forces_nmem(p,\,\) \ forces_mem(p,\,\)" + by blast + then + show ?thesis + proof (cases) + case 1 + with \p\G\ + show ?thesis by blast + next + case 2 + then + obtain \ where "\\domain(\) \ domain(\)" "forces_mem(p,\,\)" "forces_nmem(p,\,\)" + by blast + moreover from this and \p\G\ and assms + have "val(G,\)\val(G,\)" + using IV240a[of G \ \] transitivity[OF _ domain_closed[simplified]] by blast + moreover note IH \val(G,\) = _\ + ultimately + obtain q where "q\G" "forces_mem(q, \, \)" by auto + moreover from this and \p\G\ \M_generic(G)\ + obtain r where "r\P" "r\p" "r\q" + by blast + moreover + note \M_generic(G)\ + ultimately + have "forces_mem(r, \, \)" + using strengthening_mem by blast + with \r\p\ \forces_nmem(p,\,\)\ \r\P\ + have "False" + unfolding forces_nmem_def by blast + then + show ?thesis by simp + next (* copy-paste from case 2 mutatis mutandis*) + case 3 + then + obtain \ where "\\domain(\) \ domain(\)" "forces_mem(p,\,\)" "forces_nmem(p,\,\)" + by blast + moreover from this and \p\G\ and assms + have "val(G,\)\val(G,\)" + using IV240a[of G \ \] transitivity[OF _ domain_closed[simplified]] by blast + moreover note IH \val(G,\) = _\ + ultimately + obtain q where "q\G" "forces_mem(q, \, \)" by auto + moreover from this and \p\G\ \M_generic(G)\ + obtain r where "r\P" "r\p" "r\q" + by blast + moreover + note \M_generic(G)\ + ultimately + have "forces_mem(r, \, \)" + using strengthening_mem by blast + with \r\p\ \forces_nmem(p,\,\)\ \r\P\ + have "False" + unfolding forces_nmem_def by blast + then + show ?thesis by simp + qed +qed + +(* Lemma IV.2.40(b), full *) +lemma IV240b: + assumes + "M_generic(G)" + shows + "(\\M\\\M\val(G,\) = val(G,\) \ (\p\G. forces_eq(p,\,\))) \ + (\\M\\\M\val(G,\) \ val(G,\) \ (\p\G. forces_mem(p,\,\)))" + (is "?Q(\,\) \ ?R(\,\)") +proof (intro forces_induction) + fix \ \ p + assume "\\domain(\) \ ?Q(\, \)" for \ + with assms + show "?R(\, \)" + using IV240b_mem domain_closed transitivity by (simp) +next + fix \ \ p + assume "\ \ domain(\) \ domain(\) \ ?R(\,\) \ ?R(\,\)" for \ + moreover from this + have IH':"\\M \ \\M \ \ \ domain(\) \ domain(\) \ + (val(G, \) \ val(G, \) \ (\q\G. forces_mem(q, \, \))) \ + (val(G, \) \ val(G, \) \ (\q\G. 
forces_mem(q, \, \)))" for \ + by (blast intro:left_in_M) + ultimately + show "?Q(\,\)" + using IV240b_eq[OF assms(1)] by (auto) +qed + +lemma map_val_in_MG: + assumes + "env\list(M)" + shows + "map(val(G),env)\list(M[G])" + unfolding GenExt_def using assms map_type2 by simp + +lemma truth_lemma_mem: + assumes + "env\list(M)" "M_generic(G)" + "n\nat" "m\nat" "np\G. p \ Member(n,m) env) \ M[G], map(val(G),env) \ Member(n,m)" + using assms IV240a[OF assms(2), of "nth(n,env)" "nth(m,env)"] + IV240b[OF assms(2), of "nth(n,env)" "nth(m,env)"] + P_in_M leq_in_M one_in_M + Forces_Member[of _ "nth(n,env)" "nth(m,env)" env n m] map_val_in_MG + by (auto) + +lemma truth_lemma_eq: + assumes + "env\list(M)" "M_generic(G)" + "n\nat" "m\nat" "np\G. p \ Equal(n,m) env) \ M[G], map(val(G),env) \ Equal(n,m)" + using assms IV240a(1)[OF assms(2), of "nth(n,env)" "nth(m,env)"] + IV240b(1)[OF assms(2), of "nth(n,env)" "nth(m,env)"] + P_in_M leq_in_M one_in_M + Forces_Equal[of _ "nth(n,env)" "nth(m,env)" env n m] map_val_in_MG + by (auto) + +lemma arities_at_aux: + assumes + "n \ nat" "m \ nat" "env \ list(M)" "succ(n) \ succ(m) \ length(env)" + shows + "n < length(env)" "m < length(env)" + using assms succ_leE[OF Un_leD1, of n "succ(m)" "length(env)"] + succ_leE[OF Un_leD2, of "succ(n)" m "length(env)"] by auto + +subsection\The Strenghtening Lemma\ + +lemma strengthening_lemma: + assumes + "p\P" "\\formula" "r\P" "r\p" + shows + "\env. env\list(M) \ arity(\)\length(env) \ p \ \ env \ r \ \ env" + using assms(2) +proof (induct) + case (Member n m) + then + have "nlist(M)" + moreover + note assms Member + ultimately + show ?case + using Forces_Member[of _ "nth(n,env)" "nth(m,env)" env n m] + strengthening_mem[of p r "nth(n,env)" "nth(m,env)"] by simp +next + case (Equal n m) + then + have "nlist(M)" + moreover + note assms Equal + ultimately + show ?case + using Forces_Equal[of _ "nth(n,env)" "nth(m,env)" env n m] + strengthening_eq[of p r "nth(n,env)" "nth(m,env)"] by simp +next + case (Nand \ \) + with assms + show ?case + using Forces_Nand transitivity[OF _ P_in_M] pair_in_M_iff + transitivity[OF _ leq_in_M] leq_transD by auto +next + case (Forall \) + with assms + have "p \ \ ([x] @ env)" if "x\M" for x + using that Forces_Forall by simp + with Forall + have "r \ \ ([x] @ env)" if "x\M" for x + using that pred_le2 by (simp) + with assms Forall + show ?case + using Forces_Forall by simp +qed + +subsection\The Density Lemma\ +lemma arity_Nand_le: + assumes "\ \ formula" "\ \ formula" "arity(Nand(\, \)) \ length(env)" "env\list(A)" + shows "arity(\) \ length(env)" "arity(\) \ length(env)" + using assms + by (rule_tac Un_leD1, rule_tac [5] Un_leD2, auto) + +lemma dense_below_imp_forces: + assumes + "p\P" "\\formula" + shows + "\env. env\list(M) \ arity(\)\length(env) \ + dense_below({q\P. 
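+
+(* The Strengthening Lemma in full generality: for any formula $\phi$ with
+   $arity(\phi)\le length(env)$, if $r\preceq p$ and $p \Vdash \phi\ env$ then $r \Vdash \phi\ env$;
+   the proof is by induction on $\phi$, the atomic cases being IV.2.37(a). *)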
(q \ \ env)},p) \ (p \ \ env)" + using assms(2) +proof (induct) + case (Member n m) + then + have "nlist(M)" + moreover + note assms Member + ultimately + show ?case + using Forces_Member[of _ "nth(n,env)" "nth(m,env)" env n m] + density_mem[of p "nth(n,env)" "nth(m,env)"] by simp +next + case (Equal n m) + then + have "nlist(M)" + moreover + note assms Equal + ultimately + show ?case + using Forces_Equal[of _ "nth(n,env)" "nth(m,env)" env n m] + density_eq[of p "nth(n,env)" "nth(m,env)"] by simp +next +case (Nand \ \) + { + fix q + assume "q\M" "q\P" "q\ p" "q \ \ env" + moreover + note Nand + moreover from calculation + obtain d where "d\P" "d \ Nand(\, \) env" "d\ q" + using dense_belowI by auto + moreover from calculation + have "\(d\ \ env)" if "d \ \ env" + using that Forces_Nand leq_reflI transitivity[OF _ P_in_M, of d] by auto + moreover + note arity_Nand_le[of \ \] + moreover from calculation + have "d \ \ env" + using strengthening_lemma[of q \ d env] Un_leD1 by auto + ultimately + have "\ (q \ \ env)" + using strengthening_lemma[of q \ d env] by auto + } + with \p\P\ + show ?case + using Forces_Nand[symmetric, OF _ Nand(5,1,3)] by blast +next + case (Forall \) + have "dense_below({q\P. q \ \ ([a]@env)},p)" if "a\M" for a + proof + fix r + assume "r\P" "r\p" + with \dense_below(_,p)\ + obtain q where "q\P" "q\r" "q \ Forall(\) env" + by blast + moreover + note Forall \a\M\ + moreover from calculation + have "q \ \ ([a]@env)" + using Forces_Forall by simp + ultimately + show "\d \ {q\P. q \ \ ([a]@env)}. d \ P \ d\r" + by auto + qed + moreover + note Forall(2)[of "Cons(_,env)"] Forall(1,3-5) + ultimately + have "p \ \ ([a]@env)" if "a\M" for a + using that pred_le2 by simp + with assms Forall + show ?case using Forces_Forall by simp +qed + +lemma density_lemma: + assumes + "p\P" "\\formula" "env\list(M)" "arity(\)\length(env)" + shows + "p \ \ env \ dense_below({q\P. (q \ \ env)},p)" +proof + assume "dense_below({q\P. (q \ \ env)},p)" + with assms + show "(p \ \ env)" + using dense_below_imp_forces by simp +next + assume "p \ \ env" + with assms + show "dense_below({q\P. q \ \ env},p)" + using strengthening_lemma leq_reflI by auto +qed + +subsection\The Truth Lemma\ +lemma Forces_And: + assumes + "p\P" "env \ list(M)" "\\formula" "\\formula" + "arity(\) \ length(env)" "arity(\) \ length(env)" + shows + "p \ And(\,\) env \ (p \ \ env) \ (p \ \ env)" +proof + assume "p \ And(\, \) env" + with assms + have "dense_below({r \ P . (r \ \ env) \ (r \ \ env)}, p)" + using Forces_And_iff_dense_below by simp + then + have "dense_below({r \ P . (r \ \ env)}, p)" "dense_below({r \ P . (r \ \ env)}, p)" + by blast+ + with assms + show "(p \ \ env) \ (p \ \ env)" + using density_lemma[symmetric] by simp +next + assume "(p \ \ env) \ (p \ \ env)" + have "dense_below({r \ P . (r \ \ env) \ (r \ \ env)}, p)" + proof (intro dense_belowI bexI conjI, assumption) + fix q + assume "q\P" "q\ p" + with assms \(p \ \ env) \ (p \ \ env)\ + show "q\{r \ P . (r \ \ env) \ (r \ \ env)}" "q\ q" + using strengthening_lemma leq_reflI by auto + qed + with assms + show "p \ And(\,\) env" + using Forces_And_iff_dense_below by simp +qed + +lemma Forces_Nand_alt: + assumes + "p\P" "env \ list(M)" "\\formula" "\\formula" + "arity(\) \ length(env)" "arity(\) \ length(env)" + shows + "(p \ Nand(\,\) env) \ (p \ Neg(And(\,\)) env)" + using assms Forces_Nand Forces_And Forces_Neg by auto + +lemma truth_lemma_Neg: + assumes + "\\formula" "M_generic(G)" "env\list(M)" "arity(\)\length(env)" and + IH: "(\p\G. 
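+
+(* The Density Lemma: under the same arity hypothesis, $p \Vdash \phi\ env$ iff the set
+   $\{q\in P : q \Vdash \phi\ env\}$ is dense below $p$; one direction is the Strengthening
+   Lemma, the other is dense_below_imp_forces above. *)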
p \ \ env) \ M[G], map(val(G),env) \ \" + shows + "(\p\G. p \ Neg(\) env) \ M[G], map(val(G),env) \ Neg(\)" +proof (intro iffI, elim bexE, rule ccontr) + (* Direct implication by contradiction *) + fix p + assume "p\G" "p \ Neg(\) env" "\(M[G],map(val(G),env) \ Neg(\))" + moreover + note assms + moreover from calculation + have "M[G], map(val(G),env) \ \" + using map_val_in_MG by simp + with IH + obtain r where "r \ \ env" "r\G" by blast + moreover from this and \M_generic(G)\ \p\G\ + obtain q where "q\p" "q\r" "q\G" + by blast + moreover from calculation + have "q \ \ env" + using strengthening_lemma[where \=\] by blast + ultimately + show "False" + using Forces_Neg[where \=\] transitivity[OF _ P_in_M] by blast +next + assume "M[G], map(val(G),env) \ Neg(\)" + with assms + have "\ (M[G], map(val(G),env) \ \)" + using map_val_in_MG by simp + let ?D="{p\P. (p \ \ env) \ (p \ Neg(\) env)}" + have "separation(##M,\p. (p \ \ env))" + using separation_ax arity_forces assms P_in_M leq_in_M one_in_M arity_forces_le + by simp + moreover + have "separation(##M,\p. (p \ Neg(\) env))" + using separation_ax arity_forces assms P_in_M leq_in_M one_in_M arity_forces_le + by simp + ultimately + have "separation(##M,\p. (p \ \ env) \ (p \ Neg(\) env))" + using separation_disj by simp + then + have "?D \ M" + using separation_closed P_in_M by simp + moreover + have "?D \ P" by auto + moreover + have "dense(?D)" + proof + fix q + assume "q\P" + show "\d\{p \ P . (p \ \ env) \ (p \ Neg(\) env)}. d\ q" + proof (cases "q \ Neg(\) env") + case True + with \q\P\ + show ?thesis using leq_reflI by blast + next + case False + with \q\P\ and assms + show ?thesis using Forces_Neg by auto + qed + qed + moreover + note \M_generic(G)\ + ultimately + obtain p where "p\G" "(p \ \ env) \ (p \ Neg(\) env)" + by blast + then + consider (1) "p \ \ env" | (2) "p \ Neg(\) env" by blast + then + show "\p\G. (p \ Neg(\) env)" + proof (cases) + case 1 + with \\ (M[G],map(val(G),env) \ \)\ \p\G\ IH + show ?thesis + by blast + next + case 2 + with \p\G\ + show ?thesis by blast + qed +qed + +lemma truth_lemma_And: + assumes + "env\list(M)" "\\formula" "\\formula" + "arity(\)\length(env)" "arity(\) \ length(env)" "M_generic(G)" + and + IH: "(\p\G. p \ \ env) \ M[G], map(val(G),env) \ \" + "(\p\G. p \ \ env) \ M[G], map(val(G),env) \ \" + shows + "(\p\G. (p \ And(\,\) env)) \ M[G] , map(val(G),env) \ And(\,\)" + using assms map_val_in_MG Forces_And[OF M_genericD assms(1-5)] +proof (intro iffI, elim bexE) + fix p + assume "p\G" "p \ And(\,\) env" + with assms + show "M[G], map(val(G),env) \ And(\,\)" + using Forces_And[OF M_genericD, of _ _ _ \ \] map_val_in_MG by auto +next + assume "M[G], map(val(G),env) \ And(\,\)" + moreover + note assms + moreover from calculation + obtain q r where "q \ \ env" "r \ \ env" "q\G" "r\G" + using map_val_in_MG Forces_And[OF M_genericD assms(1-5)] by auto + moreover from calculation + obtain p where "p\q" "p\r" "p\G" + by blast + moreover from calculation + have "(p \ \ env) \ (p \ \ env)" (* can't solve as separate goals *) + using strengthening_lemma by (blast) + ultimately + show "\p\G. (p \ And(\,\) env)" + using Forces_And[OF M_genericD assms(1-5)] by auto +qed + +definition + ren_truth_lemma :: "i\i" where + "ren_truth_lemma(\) \ + Exists(Exists(Exists(Exists(Exists( + And(Equal(0,5),And(Equal(1,8),And(Equal(2,9),And(Equal(3,10),And(Equal(4,6), + iterates(\p. 
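+
+(* Negation case of the Truth Lemma: the set of conditions deciding $\phi$, i.e. forcing
+   either $\phi$ or $Neg(\phi)$, is definable by separation, belongs to $M$ and is dense, so
+   the generic filter $G$ meets it. *)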
incr_bv(p)`5 , 6, \)))))))))))" + +lemma ren_truth_lemma_type[TC] : + "\\formula \ ren_truth_lemma(\) \formula" + unfolding ren_truth_lemma_def + by simp + +lemma arity_ren_truth : + assumes "\\formula" + shows "arity(ren_truth_lemma(\)) \ 6 \ succ(arity(\))" +proof - + consider (lt) "5 )" | (ge) "\ 5 < arity(\)" + by auto + then + show ?thesis + proof cases + case lt + consider (a) "5)#+5" | (b) "arity(\)#+5 \ 5" + using not_lt_iff_le \\\_\ by force + then + show ?thesis + proof cases + case a + with \\\_\ lt + have "5 < succ(arity(\))" "5)#+2" "5)#+3" "5)#+4" + using succ_ltI by auto + with \\\_\ + have c:"arity(iterates(\p. incr_bv(p)`5,5,\)) = 5#+arity(\)" (is "arity(?\') = _") + using arity_incr_bv_lemma lt a + by simp + with \\\_\ + have "arity(incr_bv(?\')`5) = 6#+arity(\)" + using arity_incr_bv_lemma[of ?\' 5] a by auto + with \\\_\ + show ?thesis + unfolding ren_truth_lemma_def + using pred_Un_distrib nat_union_abs1 Un_assoc[symmetric] a c nat_union_abs2 + by simp + next + case b + with \\\_\ lt + have "5 < succ(arity(\))" "5)#+2" "5)#+3" "5)#+4" "5)#+5" + using succ_ltI by auto + with \\\_\ + have "arity(iterates(\p. incr_bv(p)`5,6,\)) = 6#+arity(\)" (is "arity(?\') = _") + using arity_incr_bv_lemma lt + by simp + with \\\_\ + show ?thesis + unfolding ren_truth_lemma_def + using pred_Un_distrib nat_union_abs1 Un_assoc[symmetric] nat_union_abs2 + by simp + qed + next + case ge + with \\\_\ + have "arity(\) \ 5" "pred^5(arity(\)) \ 5" + using not_lt_iff_le le_trans[OF le_pred] + by auto + with \\\_\ + have "arity(iterates(\p. incr_bv(p)`5,6,\)) = arity(\)" "arity(\)\6" "pred^5(arity(\)) \ 6" + using arity_incr_bv_lemma ge le_trans[OF \arity(\)\5\] le_trans[OF \pred^5(arity(\))\5\] + by auto + with \arity(\) \ 5\ \\\_\ \pred^5(_) \ 5\ + show ?thesis + unfolding ren_truth_lemma_def + using pred_Un_distrib nat_union_abs1 Un_assoc[symmetric] nat_union_abs2 + by simp + qed +qed + +lemma sats_ren_truth_lemma: + "[q,b,d,a1,a2,a3] @ env \ list(M) \ \ \ formula \ + (M, [q,b,d,a1,a2,a3] @ env \ ren_truth_lemma(\) ) \ + (M, [q,a1,a2,a3,b] @ env \ \)" + unfolding ren_truth_lemma_def + by (insert sats_incr_bv_iff [of _ _ M _ "[q,a1,a2,a3,b]"], simp) + +lemma truth_lemma' : + assumes + "\\formula" "env\list(M)" "arity(\) \ succ(length(env))" + shows + "separation(##M,\d. \b\M. \q\P. q\d \ \(q \ \ ([b]@env)))" +proof - + let ?rel_pred="\M x a1 a2 a3. \b\M. \q\M. q\a1 \ is_leq(##M,a2,q,x) \ + \(M, [q,a1,a2,a3,b] @ env \ forces(\))" + let ?\="Exists(Forall(Implies(And(Member(0,3),leq_fm(4,0,2)), + Neg(ren_truth_lemma(forces(\))))))" + have "q\M" if "q\P" for q using that transitivity[OF _ P_in_M] by simp + then + have 1:"\q\M. q\P \ R(q) \ Q(q) \ (\q\P. R(q) \ Q(q))" for R Q + by auto + then + have "\b \ M; \q\M. q \ P \ q \ d \ \(q \ \ ([b]@env))\ \ + \c\M. \q\P. q \ d \ \(q \ \ ([c]@env))" for b d + by (rule bexI,simp_all) + then + have "?rel_pred(M,d,P,leq,one) \ (\b\M. \q\P. 
q\d \ \(q \ \ ([b]@env)))" if "d\M" for d + using that leq_abs leq_in_M P_in_M one_in_M assms + by auto + moreover + have "?\\formula" using assms by simp + moreover + have "(M, [d,P,leq,one]@env \ ?\) \ ?rel_pred(M,d,P,leq,one)" if "d\M" for d + using assms that P_in_M leq_in_M one_in_M sats_leq_fm sats_ren_truth_lemma + by simp + moreover + have "arity(?\) \ 4#+length(env)" + proof - + have eq:"arity(leq_fm(4, 0, 2)) = 5" + using arity_leq_fm succ_Un_distrib nat_simp_union + by simp + with \\\_\ + have "arity(?\) = 3 \ (pred^2(arity(ren_truth_lemma(forces(\)))))" + using nat_union_abs1 pred_Un_distrib by simp + moreover + have "... \ 3 \ (pred(pred(6 \ succ(arity(forces(\))))))" (is "_ \ ?r") + using \\\_\ Un_le_compat[OF le_refl[of 3]] + le_imp_subset arity_ren_truth[of "forces(\)"] + pred_mono + by auto + finally + have "arity(?\) \ ?r" by simp + have i:"?r \ 4 \ pred(arity(forces(\)))" + using pred_Un_distrib pred_succ_eq \\\_\ Un_assoc[symmetric] nat_union_abs1 by simp + have h:"4 \ pred(arity(forces(\))) \ 4 \ (4#+length(env))" + using \env\_\ add_commute \\\_\ + Un_le_compat[of 4 4,OF _ pred_mono[OF _ arity_forces_le[OF _ _ \arity(\)\_\]] ] + \env\_\ by auto + with \\\_\ \env\_\ + show ?thesis + using le_trans[OF \arity(?\) \ ?r\ le_trans[OF i h]] nat_simp_union by simp + qed + ultimately + show ?thesis using assms P_in_M leq_in_M one_in_M + separation_ax[of "?\" "[P,leq,one]@env"] + separation_cong[of "##M" "\y. (M, [y,P,leq,one]@env \?\)"] + by simp +qed + + +lemma truth_lemma: + assumes + "\\formula" "M_generic(G)" + shows + "\env. env\list(M) \ arity(\)\length(env) \ + (\p\G. p \ \ env) \ M[G], map(val(G),env) \ \" + using assms(1) +proof (induct) + case (Member x y) + then + show ?case + using assms truth_lemma_mem[OF \env\list(M)\ assms(2) \x\nat\ \y\nat\] + arities_at_aux by simp +next + case (Equal x y) + then + show ?case + using assms truth_lemma_eq[OF \env\list(M)\ assms(2) \x\nat\ \y\nat\] + arities_at_aux by simp +next + case (Nand \ \) + moreover + note \M_generic(G)\ + ultimately + show ?case + using truth_lemma_And truth_lemma_Neg Forces_Nand_alt + M_genericD map_val_in_MG arity_Nand_le[of \ \] by auto +next + case (Forall \) + with \M_generic(G)\ + show ?case + proof (intro iffI) + assume "\p\G. (p \ Forall(\) env)" + with \M_generic(G)\ + obtain p where "p\G" "p\M" "p\P" "p \ Forall(\) env" + using transitivity[OF _ P_in_M] by auto + with \env\list(M)\ \\\formula\ + have "p \ \ ([x]@env)" if "x\M" for x + using that Forces_Forall by simp + with \p\G\ \\\formula\ \env\_\ \arity(Forall(\)) \ length(env)\ + Forall(2)[of "Cons(_,env)"] + show "M[G], map(val(G),env) \ Forall(\)" + using pred_le2 map_val_in_MG + by (auto iff:GenExtD) + next + assume "M[G], map(val(G),env) \ Forall(\)" + let ?D1="{d\P. (d \ Forall(\) env)}" + let ?D2="{d\P. \b\M. \q\P. q\d \ \(q \ \ ([b]@env))}" + define D where "D \ ?D1 \ ?D2" + have ar\:"arity(\)\succ(length(env))" + using assms \arity(Forall(\)) \ length(env)\ \\\formula\ \env\list(M)\ pred_le2 + by simp + then + have "arity(Forall(\)) \ length(env)" + using pred_le \\\formula\ \env\list(M)\ by simp + then + have "?D1\M" using Collect_forces ar\ \\\formula\ \env\list(M)\ by simp + moreover + have "?D2\M" using \env\list(M)\ \\\formula\ truth_lemma' separation_closed ar\ + P_in_M + by simp + ultimately + have "D\M" unfolding D_def using Un_closed by simp + moreover + have "D \ P" unfolding D_def by auto + moreover + have "dense(D)" + proof + fix p + assume "p\P" + show "\d\D. 
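+
+(* The Truth Lemma: for $\phi$ with $arity(\phi)\le length(env)$ and $M$-generic $G$,
+   $M[G], map(val(G),env) \models \phi$ iff some $p\in G$ forces $\phi$ with environment $env$;
+   the proof is by induction on $\phi$, and the universal-quantifier case, completed below,
+   combines the definability of the forcing relation with a density argument. *)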
d\ p" + proof (cases "p \ Forall(\) env") + case True + with \p\P\ + show ?thesis unfolding D_def using leq_reflI by blast + next + case False + with Forall \p\P\ + obtain b where "b\M" "\(p \ \ ([b]@env))" + using Forces_Forall by blast + moreover from this \p\P\ Forall + have "\dense_below({q\P. q \ \ ([b]@env)},p)" + using density_lemma pred_le2 by auto + moreover from this + obtain d where "d\p" "\q\P. q\d \ \(q \ \ ([b] @ env))" + "d\P" by blast + ultimately + show ?thesis unfolding D_def by auto + qed + qed + moreover + note \M_generic(G)\ + ultimately + obtain d where "d \ D" "d \ G" by blast + then + consider (1) "d\?D1" | (2) "d\?D2" unfolding D_def by blast + then + show "\p\G. (p \ Forall(\) env)" + proof (cases) + case 1 + with \d\G\ + show ?thesis by blast + next + case 2 + then + obtain b where "b\M" "\q\P. q\d \\(q \ \ ([b] @ env))" + by blast + moreover from this(1) and \M[G], _ \ Forall(\)\ and + Forall(2)[of "Cons(b,env)"] Forall(1,3-4) \M_generic(G)\ + obtain p where "p\G" "p\P" "p \ \ ([b] @ env)" + using pred_le2 using map_val_in_MG by (auto iff:GenExtD) + moreover + note \d\G\ \M_generic(G)\ + ultimately + obtain q where "q\G" "q\P" "q\d" "q\p" by blast + moreover from this and \p \ \ ([b] @ env)\ + Forall \b\M\ \p\P\ + have "q \ \ ([b] @ env)" + using pred_le2 strengthening_lemma by simp + moreover + note \\q\P. q\d \\(q \ \ ([b] @ env))\ + ultimately + show ?thesis by simp + qed + qed +qed +subsection\The ``Definition of forcing''\ +lemma definition_of_forcing: + assumes + "p\P" "\\formula" "env\list(M)" "arity(\)\length(env)" + shows + "(p \ \ env) \ + (\G. M_generic(G) \ p\G \ M[G], map(val(G),env) \ \)" +proof (intro iffI allI impI, elim conjE) + fix G + assume "(p \ \ env)" "M_generic(G)" "p \ G" + with assms + show "M[G], map(val(G),env) \ \" + using truth_lemma by blast +next + assume 1: "\G.(M_generic(G)\ p\G)\ M[G] , map(val(G),env) \ \" + { + fix r + assume 2: "r\P" "r\p" + then + obtain G where "r\G" "M_generic(G)" + using generic_filter_existence by auto + moreover from calculation 2 \p\P\ + have "p\G" + unfolding M_generic_def using filter_leqD by simp + moreover note 1 + ultimately + have "M[G], map(val(G),env) \ \" + by simp + with assms \M_generic(G)\ + obtain s where "s\G" "(s \ \ env)" + using truth_lemma by blast + moreover from this and \M_generic(G)\ \r\G\ + obtain q where "q\G" "q\s" "q\r" + by blast + moreover from calculation \s\G\ \M_generic(G)\ + have "s\P" "q\P" + unfolding M_generic_def filter_def by auto + moreover + note assms + ultimately + have "\q\P. q\r \ (q \ \ env)" + using strengthening_lemma by blast + } + then + have "dense_below({q\P. (q \ \ env)},p)" + unfolding dense_below_def by blast + with assms + show "(p \ \ env)" + using density_lemma by blast +qed + +lemmas definability = forces_type +end (* forcing_data *) + +end \ No newline at end of file diff --git a/thys/Forcing/Foundation_Axiom.thy b/thys/Forcing/Foundation_Axiom.thy new file mode 100644 --- /dev/null +++ b/thys/Forcing/Foundation_Axiom.thy @@ -0,0 +1,35 @@ +section\The Axiom of Foundation in $M[G]$\ +theory Foundation_Axiom +imports + Names +begin + +context forcing_data +begin + +(* Slick proof essentially by Paulson (adapted from L) *) +lemma foundation_in_MG : "foundation_ax(##(M[G]))" + unfolding foundation_ax_def + by (rule rallI, cut_tac A=x in foundation, auto intro: transitivity_MG) + +(* Same theorem as above, declarative proof, + without using transitivity *) +lemma "foundation_ax(##(M[G]))" +proof - + { + fix x + assume "x\M[G]" "\y\M[G] . 
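+
+(* The "definition of forcing" characterizes the relation semantically: $p \Vdash \phi\ env$
+   iff for every $M$-generic filter $G$ with $p\in G$ we have $M[G], map(val(G),env) \models \phi$;
+   one direction is the Truth Lemma, the other uses generic_filter_existence together with the
+   Strengthening and Density Lemmas. *)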
y\x" + then + have "\y\M[G] . y\x\M[G]" by simp + then + obtain y where "y\x\M[G]" "\z\y. z \ x\M[G]" + using foundation[of "x\M[G]"] by blast + then + have "\y\M[G] . y \ x \ (\z\M[G] . z \ x \ z \ y)"by auto + } + then show ?thesis + unfolding foundation_ax_def by auto +qed + +end (* context forcing_data *) +end \ No newline at end of file diff --git a/thys/Forcing/FrecR.thy b/thys/Forcing/FrecR.thy new file mode 100644 --- /dev/null +++ b/thys/Forcing/FrecR.thy @@ -0,0 +1,680 @@ +section\Well-founded relation on names\ +theory FrecR imports Names Synthetic_Definition begin + +lemmas sep_rules' = nth_0 nth_ConsI FOL_iff_sats function_iff_sats + fun_plus_iff_sats omega_iff_sats FOL_sats_iff + +text\\<^term>\frecR\ is the well-founded relation on names that allows +us to define forcing for atomic formulas.\ + +(* MOVE THIS. absoluteness of higher-order composition *) +definition + is_hcomp :: "[i\o,i\i\o,i\i\o,i,i] \ o" where + "is_hcomp(M,is_f,is_g,a,w) \ \z[M]. is_g(a,z) \ is_f(z,w)" + +lemma (in M_trivial) hcomp_abs: + assumes + is_f_abs:"\a z. M(a) \ M(z) \ is_f(a,z) \ z = f(a)" and + is_g_abs:"\a z. M(a) \ M(z) \ is_g(a,z) \ z = g(a)" and + g_closed:"\a. M(a) \ M(g(a))" + "M(a)" "M(w)" + shows + "is_hcomp(M,is_f,is_g,a,w) \ w = f(g(a))" + unfolding is_hcomp_def using assms by simp + +definition + hcomp_fm :: "[i\i\i,i\i\i,i,i] \ i" where + "hcomp_fm(pf,pg,a,w) \ Exists(And(pg(succ(a),0),pf(0,succ(w))))" + +lemma sats_hcomp_fm: + assumes + f_iff_sats:"\a b z. a\nat \ b\nat \ z\M \ + is_f(nth(a,Cons(z,env)),nth(b,Cons(z,env))) \ sats(M,pf(a,b),Cons(z,env))" + and + g_iff_sats:"\a b z. a\nat \ b\nat \ z\M \ + is_g(nth(a,Cons(z,env)),nth(b,Cons(z,env))) \ sats(M,pg(a,b),Cons(z,env))" + and + "a\nat" "w\nat" "env\list(M)" + shows + "sats(M,hcomp_fm(pf,pg,a,w),env) \ is_hcomp(##M,is_f,is_g,nth(a,env),nth(w,env))" +proof - + have "sats(M, pf(0, succ(w)), Cons(x, env)) \ is_f(x,nth(w,env))" if "x\M" "w\nat" for x w + using f_iff_sats[of 0 "succ(w)" x] that by simp + moreover + have "sats(M, pg(succ(a), 0), Cons(x, env)) \ is_g(nth(a,env),x)" if "x\M" "a\nat" for x a + using g_iff_sats[of "succ(a)" 0 x] that by simp + ultimately + show ?thesis unfolding hcomp_fm_def is_hcomp_def using assms by simp +qed + + +(* Preliminary *) +definition + ftype :: "i\i" where + "ftype \ fst" + +definition + name1 :: "i\i" where + "name1(x) \ fst(snd(x))" + +definition + name2 :: "i\i" where + "name2(x) \ fst(snd(snd(x)))" + +definition + cond_of :: "i\i" where + "cond_of(x) \ snd(snd(snd((x))))" + +lemma components_simp: + "ftype(\f,n1,n2,c\) = f" + "name1(\f,n1,n2,c\) = n1" + "name2(\f,n1,n2,c\) = n2" + "cond_of(\f,n1,n2,c\) = c" + unfolding ftype_def name1_def name2_def cond_of_def + by simp_all + +definition eclose_n :: "[i\i,i] \ i" where + "eclose_n(name,x) = eclose({name(x)})" + +definition + ecloseN :: "i \ i" where + "ecloseN(x) = eclose_n(name1,x) \ eclose_n(name2,x)" + +lemma components_in_eclose : + "n1 \ ecloseN(\f,n1,n2,c\)" + "n2 \ ecloseN(\f,n1,n2,c\)" + unfolding ecloseN_def eclose_n_def + using components_simp arg_into_eclose by auto + +lemmas names_simp = components_simp(2) components_simp(3) + +lemma ecloseNI1 : + assumes "x \ eclose(n1) \ x\eclose(n2)" + shows "x \ ecloseN(\f,n1,n2,c\)" + unfolding ecloseN_def eclose_n_def + using assms eclose_sing names_simp + by auto + +lemmas ecloseNI = ecloseNI1 + +lemma ecloseN_mono : + assumes "u \ ecloseN(x)" "name1(x) \ ecloseN(y)" "name2(x) \ ecloseN(y)" + shows "u \ ecloseN(y)" +proof - + from \u\_\ + consider (a) "u\eclose({name1(x)})" | (b) "u 
\ eclose({name2(x)})" + unfolding ecloseN_def eclose_n_def by auto + then + show ?thesis + proof cases + case a + with \name1(x) \ _\ + show ?thesis + unfolding ecloseN_def eclose_n_def + using eclose_singE[OF a] mem_eclose_trans[of u "name1(x)" ] by auto + next + case b + with \name2(x) \ _\ + show ?thesis + unfolding ecloseN_def eclose_n_def + using eclose_singE[OF b] mem_eclose_trans[of u "name2(x)"] by auto + qed +qed + + +(* ftype(p) \ THE a. \b. p = \a, b\ *) + +definition + is_fst :: "(i\o)\i\i\o" where + "is_fst(M,x,t) \ (\z[M]. pair(M,t,z,x)) \ + (\(\z[M]. \w[M]. pair(M,w,z,x)) \ empty(M,t))" + +definition + fst_fm :: "[i,i] \ i" where + "fst_fm(x,t) \ Or(Exists(pair_fm(succ(t),0,succ(x))), + And(Neg(Exists(Exists(pair_fm(0,1,2 #+ x)))),empty_fm(t)))" + +lemma sats_fst_fm : + "\ x \ nat; y \ nat;env \ list(A) \ + \ sats(A, fst_fm(x,y), env) \ + is_fst(##A, nth(x,env), nth(y,env))" + by (simp add: fst_fm_def is_fst_def) + +definition + is_ftype :: "(i\o)\i\i\o" where + "is_ftype \ is_fst" + +definition + ftype_fm :: "[i,i] \ i" where + "ftype_fm \ fst_fm" + +lemma sats_ftype_fm : + "\ x \ nat; y \ nat;env \ list(A) \ + \ sats(A, ftype_fm(x,y), env) \ + is_ftype(##A, nth(x,env), nth(y,env))" + unfolding ftype_fm_def is_ftype_def + by (simp add:sats_fst_fm) + +lemma is_ftype_iff_sats: + assumes + "nth(a,env) = aa" "nth(b,env) = bb" "a\nat" "b\nat" "env \ list(A)" + shows + "is_ftype(##A,aa,bb) \ sats(A,ftype_fm(a,b), env)" + using assms + by (simp add:sats_ftype_fm) + +definition + is_snd :: "(i\o)\i\i\o" where + "is_snd(M,x,t) \ (\z[M]. pair(M,z,t,x)) \ + (\(\z[M]. \w[M]. pair(M,z,w,x)) \ empty(M,t))" + +definition + snd_fm :: "[i,i] \ i" where + "snd_fm(x,t) \ Or(Exists(pair_fm(0,succ(t),succ(x))), + And(Neg(Exists(Exists(pair_fm(1,0,2 #+ x)))),empty_fm(t)))" + +lemma sats_snd_fm : + "\ x \ nat; y \ nat;env \ list(A) \ + \ sats(A, snd_fm(x,y), env) \ + is_snd(##A, nth(x,env), nth(y,env))" + by (simp add: snd_fm_def is_snd_def) + +definition + is_name1 :: "(i\o)\i\i\o" where + "is_name1(M,x,t2) \ is_hcomp(M,is_fst(M),is_snd(M),x,t2)" + +definition + name1_fm :: "[i,i] \ i" where + "name1_fm(x,t) \ hcomp_fm(fst_fm,snd_fm,x,t)" + +lemma sats_name1_fm : + "\ x \ nat; y \ nat;env \ list(A) \ + \ sats(A, name1_fm(x,y), env) \ + is_name1(##A, nth(x,env), nth(y,env))" + unfolding name1_fm_def is_name1_def using sats_fst_fm sats_snd_fm + sats_hcomp_fm[of A "is_fst(##A)" _ fst_fm "is_snd(##A)"] by simp + +lemma is_name1_iff_sats: + assumes + "nth(a,env) = aa" "nth(b,env) = bb" "a\nat" "b\nat" "env \ list(A)" + shows + "is_name1(##A,aa,bb) \ sats(A,name1_fm(a,b), env)" + using assms + by (simp add:sats_name1_fm) + +definition + is_snd_snd :: "(i\o)\i\i\o" where + "is_snd_snd(M,x,t) \ is_hcomp(M,is_snd(M),is_snd(M),x,t)" + +definition + snd_snd_fm :: "[i,i]\i" where + "snd_snd_fm(x,t) \ hcomp_fm(snd_fm,snd_fm,x,t)" + +lemma sats_snd2_fm : + "\ x \ nat; y \ nat;env \ list(A) \ + \ sats(A,snd_snd_fm(x,y), env) \ + is_snd_snd(##A, nth(x,env), nth(y,env))" + unfolding snd_snd_fm_def is_snd_snd_def using sats_snd_fm + sats_hcomp_fm[of A "is_snd(##A)" _ snd_fm "is_snd(##A)"] by simp + +definition + is_name2 :: "(i\o)\i\i\o" where + "is_name2(M,x,t3) \ is_hcomp(M,is_fst(M),is_snd_snd(M),x,t3)" + +definition + name2_fm :: "[i,i] \ i" where + "name2_fm(x,t3) \ hcomp_fm(fst_fm,snd_snd_fm,x,t3)" + +lemma sats_name2_fm : + "\ x \ nat; y \ nat;env \ list(A) \ + \ sats(A,name2_fm(x,y), env) \ + is_name2(##A, nth(x,env), nth(y,env))" + unfolding name2_fm_def is_name2_def using sats_fst_fm sats_snd2_fm + 
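+  (* name1, name2 and cond_of are compositions of fst and snd: the relational
+     versions are built with is_hcomp, the internalized formulas with hcomp_fm,
+     and each sats_ lemma follows from sats_hcomp_fm together with the
+     satisfaction lemmas of the component formulas. *)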
sats_hcomp_fm[of A "is_fst(##A)" _ fst_fm "is_snd_snd(##A)"] by simp + +lemma is_name2_iff_sats: + assumes + "nth(a,env) = aa" "nth(b,env) = bb" "a\nat" "b\nat" "env \ list(A)" + shows + "is_name2(##A,aa,bb) \ sats(A,name2_fm(a,b), env)" + using assms + by (simp add:sats_name2_fm) + +definition + is_cond_of :: "(i\o)\i\i\o" where + "is_cond_of(M,x,t4) \ is_hcomp(M,is_snd(M),is_snd_snd(M),x,t4)" + +definition + cond_of_fm :: "[i,i] \ i" where + "cond_of_fm(x,t4) \ hcomp_fm(snd_fm,snd_snd_fm,x,t4)" + +lemma sats_cond_of_fm : + "\ x \ nat; y \ nat;env \ list(A) \ + \ sats(A,cond_of_fm(x,y), env) \ + is_cond_of(##A, nth(x,env), nth(y,env))" + unfolding cond_of_fm_def is_cond_of_def using sats_snd_fm sats_snd2_fm + sats_hcomp_fm[of A "is_snd(##A)" _ snd_fm "is_snd_snd(##A)"] by simp + +lemma is_cond_of_iff_sats: + assumes + "nth(a,env) = aa" "nth(b,env) = bb" "a\nat" "b\nat" "env \ list(A)" + shows + "is_cond_of(##A,aa,bb) \ sats(A,cond_of_fm(a,b), env)" + using assms + by (simp add:sats_cond_of_fm) + +lemma components_type[TC] : + assumes "a\nat" "b\nat" + shows + "ftype_fm(a,b)\formula" + "name1_fm(a,b)\formula" + "name2_fm(a,b)\formula" + "cond_of_fm(a,b)\formula" + using assms + unfolding ftype_fm_def fst_fm_def snd_fm_def snd_snd_fm_def name1_fm_def name2_fm_def + cond_of_fm_def hcomp_fm_def + by simp_all + +lemmas sats_components_fm[simp] = sats_ftype_fm sats_name1_fm sats_name2_fm sats_cond_of_fm + +lemmas components_iff_sats = is_ftype_iff_sats is_name1_iff_sats is_name2_iff_sats + is_cond_of_iff_sats + +lemmas components_defs = fst_fm_def ftype_fm_def snd_fm_def snd_snd_fm_def hcomp_fm_def + name1_fm_def name2_fm_def cond_of_fm_def + + +definition + is_eclose_n :: "[i\o,[i\o,i,i]\o,i,i] \ o" where + "is_eclose_n(N,is_name,en,t) \ + \n1[N].\s1[N]. is_name(N,t,n1) \ is_singleton(N,n1,s1) \ is_eclose(N,s1,en)" + +definition + eclose_n1_fm :: "[i,i] \ i" where + "eclose_n1_fm(m,t) \ Exists(Exists(And(And(name1_fm(t#+2,0),singleton_fm(0,1)), + is_eclose_fm(1,m#+2))))" + +definition + eclose_n2_fm :: "[i,i] \ i" where + "eclose_n2_fm(m,t) \ Exists(Exists(And(And(name2_fm(t#+2,0),singleton_fm(0,1)), + is_eclose_fm(1,m#+2))))" + +definition + is_ecloseN :: "[i\o,i,i] \ o" where + "is_ecloseN(N,en,t) \ \en1[N].\en2[N]. 
+ is_eclose_n(N,is_name1,en1,t) \ is_eclose_n(N,is_name2,en2,t)\ + union(N,en1,en2,en)" + +definition + ecloseN_fm :: "[i,i] \ i" where + "ecloseN_fm(en,t) \ Exists(Exists(And(eclose_n1_fm(1,t#+2), + And(eclose_n2_fm(0,t#+2),union_fm(1,0,en#+2)))))" +lemma ecloseN_fm_type [TC] : + "\ en \ nat ; t \ nat \ \ ecloseN_fm(en,t) \ formula" + unfolding ecloseN_fm_def eclose_n1_fm_def eclose_n2_fm_def by simp + +lemma sats_ecloseN_fm [simp]: + "\ en \ nat; t \ nat ; env \ list(A) \ + \ sats(A, ecloseN_fm(en,t), env) \ is_ecloseN(##A,nth(en,env),nth(t,env))" + unfolding ecloseN_fm_def is_ecloseN_def eclose_n1_fm_def eclose_n2_fm_def is_eclose_n_def + using nth_0 nth_ConsI sats_name1_fm sats_name2_fm + is_singleton_iff_sats[symmetric] + by auto + +(* Relation of forces *) +definition + frecR :: "i \ i \ o" where + "frecR(x,y) \ + (ftype(x) = 1 \ ftype(y) = 0 + \ (name1(x) \ domain(name1(y)) \ domain(name2(y)) \ (name2(x) = name1(y) \ name2(x) = name2(y)))) + \ (ftype(x) = 0 \ ftype(y) = 1 \ name1(x) = name1(y) \ name2(x) \ domain(name2(y)))" + +lemma frecR_ftypeD : + assumes "frecR(x,y)" + shows "(ftype(x) = 0 \ ftype(y) = 1) \ (ftype(x) = 1 \ ftype(y) = 0)" + using assms unfolding frecR_def by auto + +lemma frecRI1: "s \ domain(n1) \ s \ domain(n2) \ frecR(\1, s, n1, q\, \0, n1, n2, q'\)" + unfolding frecR_def by (simp add:components_simp) + +lemma frecRI1': "s \ domain(n1) \ domain(n2) \ frecR(\1, s, n1, q\, \0, n1, n2, q'\)" + unfolding frecR_def by (simp add:components_simp) + +lemma frecRI2: "s \ domain(n1) \ s \ domain(n2) \ frecR(\1, s, n2, q\, \0, n1, n2, q'\)" + unfolding frecR_def by (simp add:components_simp) + +lemma frecRI2': "s \ domain(n1) \ domain(n2) \ frecR(\1, s, n2, q\, \0, n1, n2, q'\)" + unfolding frecR_def by (simp add:components_simp) + + +lemma frecRI3: "\s, r\ \ n2 \ frecR(\0, n1, s, q\, \1, n1, n2, q'\)" + unfolding frecR_def by (auto simp add:components_simp) + +lemma frecRI3': "s \ domain(n2) \ frecR(\0, n1, s, q\, \1, n1, n2, q'\)" + unfolding frecR_def by (auto simp add:components_simp) + +lemma frecR_iff : + "frecR(x,y) \ + (ftype(x) = 1 \ ftype(y) = 0 + \ (name1(x) \ domain(name1(y)) \ domain(name2(y)) \ (name2(x) = name1(y) \ name2(x) = name2(y)))) + \ (ftype(x) = 0 \ ftype(y) = 1 \ name1(x) = name1(y) \ name2(x) \ domain(name2(y)))" + unfolding frecR_def .. + +lemma frecR_D1 : + "frecR(x,y) \ ftype(y) = 0 \ ftype(x) = 1 \ + (name1(x) \ domain(name1(y)) \ domain(name2(y)) \ (name2(x) = name1(y) \ name2(x) = name2(y)))" + using frecR_iff + by auto + +lemma frecR_D2 : + "frecR(x,y) \ ftype(y) = 1 \ ftype(x) = 0 \ + ftype(x) = 0 \ ftype(y) = 1 \ name1(x) = name1(y) \ name2(x) \ domain(name2(y))" + using frecR_iff + by auto + +lemma frecR_DI : + assumes "frecR(\a,b,c,d\,\ftype(y),name1(y),name2(y),cond_of(y)\)" + shows "frecR(\a,b,c,d\,y)" + using assms unfolding frecR_def by (force simp add:components_simp) + +(* +name1(x) \ domain(name1(y)) \ domain(name2(y)) \ + (name2(x) = name1(y) \ name2(x) = name2(y)) + \ name1(x) = name1(y) \ name2(x) \ domain(name2(y))*) +definition + is_frecR :: "[i\o,i,i] \ o" where + "is_frecR(M,x,y) \ \ ftx[M]. \ n1x[M]. \n2x[M]. \fty[M]. \n1y[M]. \n2y[M]. \dn1[M]. \dn2[M]. 
+ is_ftype(M,x,ftx) \ is_name1(M,x,n1x)\ is_name2(M,x,n2x) \ + is_ftype(M,y,fty) \ is_name1(M,y,n1y) \ is_name2(M,y,n2y) + \ is_domain(M,n1y,dn1) \ is_domain(M,n2y,dn2) \ + ( (number1(M,ftx) \ empty(M,fty) \ (n1x \ dn1 \ n1x \ dn2) \ (n2x = n1y \ n2x = n2y)) + \ (empty(M,ftx) \ number1(M,fty) \ n1x = n1y \ n2x \ dn2))" + +schematic_goal sats_frecR_fm_auto: + assumes + "i\nat" "j\nat" "env\list(A)" "nth(i,env) = a" "nth(j,env) = b" + shows + "is_frecR(##A,a,b) \ sats(A,?fr_fm(i,j),env)" + unfolding is_frecR_def is_Collect_def + by (insert assms ; (rule sep_rules' cartprod_iff_sats components_iff_sats + | simp del:sats_cartprod_fm)+) + +synthesize "frecR_fm" from_schematic sats_frecR_fm_auto + +(* Third item of Kunen observations about the trcl relation in p. 257. *) +lemma eq_ftypep_not_frecrR: + assumes "ftype(x) = ftype(y)" + shows "\ frecR(x,y)" + using assms frecR_ftypeD by force + + +definition + rank_names :: "i \ i" where + "rank_names(x) \ max(rank(name1(x)),rank(name2(x)))" + +lemma rank_names_types [TC]: + shows "Ord(rank_names(x))" + unfolding rank_names_def max_def using Ord_rank Ord_Un by auto + +definition + mtype_form :: "i \ i" where + "mtype_form(x) \ if rank(name1(x)) < rank(name2(x)) then 0 else 2" + +definition + type_form :: "i \ i" where + "type_form(x) \ if ftype(x) = 0 then 1 else mtype_form(x)" + +lemma type_form_tc [TC]: + shows "type_form(x) \ 3" + unfolding type_form_def mtype_form_def by auto + +lemma frecR_le_rnk_names : + assumes "frecR(x,y)" + shows "rank_names(x)\rank_names(y)" +proof - + obtain a b c d where + H: "a = name1(x)" "b = name2(x)" + "c = name1(y)" "d = name2(y)" + "(a \ domain(c)\domain(d) \ (b=c \ b = d)) \ (a = c \ b \ domain(d))" + using assms unfolding frecR_def by force + then + consider + (m) "a \ domain(c) \ (b = c \ b = d) " + | (n) "a \ domain(d) \ (b = c \ b = d)" + | (o) "b \ domain(d) \ a = c" + by auto + then show ?thesis proof(cases) + case m + then + have "rank(a) < rank(c)" + using eclose_rank_lt in_dom_in_eclose by simp + with \rank(a) < rank(c)\ H m + show ?thesis unfolding rank_names_def using Ord_rank max_cong max_cong2 leI by auto + next + case n + then + have "rank(a) < rank(d)" + using eclose_rank_lt in_dom_in_eclose by simp + with \rank(a) < rank(d)\ H n + show ?thesis unfolding rank_names_def + using Ord_rank max_cong2 max_cong max_commutes[of "rank(c)" "rank(d)"] leI by auto + next + case o + then + have "rank(b) < rank(d)" (is "?b < ?d") "rank(a) = rank(c)" (is "?a = _") + using eclose_rank_lt in_dom_in_eclose by simp_all + with H + show ?thesis unfolding rank_names_def + using Ord_rank max_commutes max_cong2[OF leI[OF \?b < ?d\], of ?a] by simp + qed +qed + + +definition + \ :: "i \ i" where + "\(x) = 3 ** rank_names(x) ++ type_form(x)" + +lemma \_type [TC]: + shows "Ord(\(x))" + unfolding \_def by simp + + +lemma \_mono : + assumes "frecR(x,y)" + shows "\(x) < \(y)" +proof - + have F: "type_form(x) < 3" "type_form(y) < 3" + using ltI by simp_all + from assms + have A: "rank_names(x) \ rank_names(y)" (is "?x \ ?y") + using frecR_le_rnk_names by simp + then + have "Ord(?y)" unfolding rank_names_def using Ord_rank max_def by simp + note leE[OF \?x\?y\] + then + show ?thesis + proof(cases) + case 1 + then + show ?thesis unfolding \_def using oadd_lt_mono2 \?x < ?y\ F by auto + next + case 2 + consider (a) "ftype(x) = 0 \ ftype(y) = 1" | (b) "ftype(x) = 1 \ ftype(y) = 0" + using frecR_ftypeD[OF \frecR(x,y)\] by auto + then show ?thesis proof(cases) + case b + then + have "type_form(y) = 1" + using type_form_def by simp + 
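+        (* Informally: in this case the two tuples have the same rank_names, so
+           the summand 3 ** rank_names(_) of the measure is the same for x and y,
+           and the strict decrease must come from type_form.  Since ftype(y) = 0
+           we get type_form(y) = 1, while the frecR conditions force
+           rank(name1(x)) < rank(name2(x)) and hence type_form(x) = 0. *)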
from b + have H: "name2(x) = name1(y) \ name2(x) = name2(y) " (is "?\ = ?\' \ ?\ = ?\'") + "name1(x) \ domain(name1(y)) \ domain(name2(y))" + (is "?\ \ domain(?\') \ domain(?\')") + using assms unfolding type_form_def frecR_def by auto + then + have E: "rank(?\) = rank(?\') \ rank(?\) = rank(?\')" by auto + from H + consider (a) "rank(?\) < rank(?\')" | (b) "rank(?\) < rank(?\')" + using eclose_rank_lt in_dom_in_eclose by force + then + have "rank(?\) < rank(?\)" proof (cases) + case a + with \rank_names(x) = rank_names(y) \ + show ?thesis unfolding rank_names_def mtype_form_def type_form_def using max_D2[OF E a] + E assms Ord_rank by simp + next + case b + with \rank_names(x) = rank_names(y) \ + show ?thesis unfolding rank_names_def mtype_form_def type_form_def + using max_D2[OF _ b] max_commutes E assms Ord_rank disj_commute by auto + qed + with b + have "type_form(x) = 0" unfolding type_form_def mtype_form_def by simp + with \rank_names(x) = rank_names(y) \ \type_form(y) = 1\ \type_form(x) = 0\ + show ?thesis + unfolding \_def by auto + next + case a + then + have "name1(x) = name1(y)" (is "?\ = ?\'") + "name2(x) \ domain(name2(y))" (is "?\ \ domain(?\')") + "type_form(x) = 1" + using assms unfolding type_form_def frecR_def by auto + then + have "rank(?\) = rank(?\')" "rank(?\) < rank(?\')" + using eclose_rank_lt in_dom_in_eclose by simp_all + with \rank_names(x) = rank_names(y) \ + have "rank(?\') \ rank(?\')" + unfolding rank_names_def using Ord_rank max_D1 by simp + with a + have "type_form(y) = 2" + unfolding type_form_def mtype_form_def using not_lt_iff_le assms by simp + with \rank_names(x) = rank_names(y) \ \type_form(y) = 2\ \type_form(x) = 1\ + show ?thesis + unfolding \_def by auto + qed + qed +qed + +definition + frecrel :: "i \ i" where + "frecrel(A) \ Rrel(frecR,A)" + +lemma frecrelI : + assumes "x \ A" "y\A" "frecR(x,y)" + shows "\x,y\\frecrel(A)" + using assms unfolding frecrel_def Rrel_def by auto + +lemma frecrelD : + assumes "\x,y\ \ frecrel(A1\A2\A3\A4)" + shows "ftype(x) \ A1" "ftype(x) \ A1" + "name1(x) \ A2" "name1(y) \ A2" "name2(x) \ A3" "name2(x) \ A3" + "cond_of(x) \ A4" "cond_of(y) \ A4" + "frecR(x,y)" + using assms unfolding frecrel_def Rrel_def ftype_def by (auto simp add:components_simp) + +lemma wf_frecrel : + shows "wf(frecrel(A))" +proof - + have "frecrel(A) \ measure(A,\)" + unfolding frecrel_def Rrel_def measure_def + using \_mono by force + then show ?thesis using wf_subset wf_measure by auto +qed + +lemma core_induction_aux: + fixes A1 A2 :: "i" + assumes + "Transset(A1)" + "\\ \ p. p \ A2 \ \\q \. \ q\A2 ; \\domain(\)\ \ Q(0,\,\,q)\ \ Q(1,\,\,p)" + "\\ \ p. p \ A2 \ \\q \. 
\ q\A2 ; \\domain(\) \ domain(\)\ \ Q(1,\,\,q) \ Q(1,\,\,q)\ \ Q(0,\,\,p)" + shows "a\2\A1\A1\A2 \ Q(ftype(a),name1(a),name2(a),cond_of(a))" +proof (induct a rule:wf_induct[OF wf_frecrel[of "2\A1\A1\A2"]]) + case (1 x) + let ?\ = "name1(x)" + let ?\ = "name2(x)" + let ?D = "2\A1\A1\A2" + assume "x \ ?D" + then + have "cond_of(x)\A2" + by (auto simp add:components_simp) + from \x\?D\ + consider (eq) "ftype(x)=0" | (mem) "ftype(x)=1" + by (auto simp add:components_simp) + then + show ?case + proof cases + case eq + then + have "Q(1, \, ?\, q) \ Q(1, \, ?\, q)" if "\ \ domain(?\) \ domain(?\)" and "q\A2" for q \ + proof - + from 1 + have A: "?\\A1" "?\\A1" "?\\eclose(A1)" "?\\eclose(A1)" + using arg_into_eclose by (auto simp add:components_simp) + with \Transset(A1)\ that(1) + have "\\eclose(?\) \ eclose(?\)" + using in_dom_in_eclose by auto + then + have "\\A1" + using mem_eclose_subset[OF \?\\A1\] mem_eclose_subset[OF \?\\A1\] + Transset_eclose_eq_arg[OF \Transset(A1)\] + by auto + with \q\A2\ \?\ \ A1\ \cond_of(x)\A2\ \?\\A1\ + have "frecR(\1, \, ?\, q\, x)" (is "frecR(?T,_)") + "frecR(\1, \, ?\, q\, x)" (is "frecR(?U,_)") + using frecRI1'[OF that(1)] frecR_DI \ftype(x) = 0\ + frecRI2'[OF that(1)] + by (auto simp add:components_simp) + with \x\?D\ \\\A1\ \q\A2\ + have "\?T,x\\ frecrel(?D)" "\?U,x\\ frecrel(?D)" + using frecrelI[of ?T ?D x] frecrelI[of ?U ?D x] by (auto simp add:components_simp) + with \q\A2\ \\\A1\ \?\\A1\ \?\\A1\ + have "Q(1, \, ?\, q)" using 1 by (force simp add:components_simp) + moreover from \q\A2\ \\\A1\ \?\\A1\ \?\\A1\ \\?U,x\\ frecrel(?D)\ + have "Q(1, \, ?\, q)" using 1 by (force simp add:components_simp) + ultimately + show ?thesis using A by simp + qed + then show ?thesis using assms(3) \ftype(x) = 0\ \cond_of(x)\A2\ by auto + next + case mem + have "Q(0, ?\, \, q)" if "\ \ domain(?\)" and "q\A2" for q \ + proof - + from 1 assms + have "?\\A1" "?\\A1" "cond_of(x)\A2" "?\\eclose(A1)" "?\\eclose(A1)" + using arg_into_eclose by (auto simp add:components_simp) + with \Transset(A1)\ that(1) + have "\\ eclose(?\)" + using in_dom_in_eclose by auto + then + have "\\A1" + using mem_eclose_subset[OF \?\\A1\] Transset_eclose_eq_arg[OF \Transset(A1)\] + by auto + with \q\A2\ \?\ \ A1\ \cond_of(x)\A2\ \?\\A1\ + have "frecR(\0, ?\, \, q\, x)" (is "frecR(?T,_)") + using frecRI3'[OF that(1)] frecR_DI \ftype(x) = 1\ + by (auto simp add:components_simp) + with \x\?D\ \\\A1\ \q\A2\ \?\\A1\ + have "\?T,x\\ frecrel(?D)" "?T\?D" + using frecrelI[of ?T ?D x] by (auto simp add:components_simp) + with \q\A2\ \\\A1\ \?\\A1\ \?\\A1\ 1 + show ?thesis by (force simp add:components_simp) + qed + then show ?thesis using assms(2) \ftype(x) = 1\ \cond_of(x)\A2\ by auto + qed +qed + +lemma def_frecrel : "frecrel(A) = {z\A\A. \x y. z = \x, y\ \ frecR(x,y)}" + unfolding frecrel_def Rrel_def .. + +lemma frecrel_fst_snd: + "frecrel(A) = {z \ A\A . 
+ ftype(fst(z)) = 1 \ + ftype(snd(z)) = 0 \ name1(fst(z)) \ domain(name1(snd(z))) \ domain(name2(snd(z))) \ + (name2(fst(z)) = name1(snd(z)) \ name2(fst(z)) = name2(snd(z))) + \ (ftype(fst(z)) = 0 \ + ftype(snd(z)) = 1 \ name1(fst(z)) = name1(snd(z)) \ name2(fst(z)) \ domain(name2(snd(z))))}" + unfolding def_frecrel frecR_def + by (intro equalityI subsetI CollectI; elim CollectE; auto) + +end \ No newline at end of file diff --git a/thys/Forcing/Infinity_Axiom.thy b/thys/Forcing/Infinity_Axiom.thy new file mode 100644 --- /dev/null +++ b/thys/Forcing/Infinity_Axiom.thy @@ -0,0 +1,37 @@ +section\The Axiom of Infinity in $M[G]$\ +theory Infinity_Axiom + imports Pairing_Axiom Union_Axiom Separation_Axiom +begin + +context G_generic begin + +interpretation mg_triv: M_trivial"##M[G]" + using transitivity_MG zero_in_MG generic Union_MG pairing_in_MG + by unfold_locales auto + +lemma infinity_in_MG : "infinity_ax(##M[G])" +proof - + from infinity_ax obtain I where + Eq1: "I\M" "0 \ I" "\y\M. y \ I \ succ(y) \ I" + unfolding infinity_ax_def by auto + then + have "check(I) \ M" + using check_in_M by simp + then + have "I\ M[G]" + using valcheck generic one_in_G one_in_P GenExtI[of "check(I)" G] by simp + with \0\I\ + have "0\M[G]" using transitivity_MG by simp + with \I\M\ + have "y \ M" if "y \ I" for y + using transitivity[OF _ \I\M\] that by simp + with \I\M[G]\ + have "succ(y) \ I \ M[G]" if "y \ I" for y + using that Eq1 transitivity_MG by blast + with Eq1 \I\M[G]\ \0\M[G]\ + show ?thesis + unfolding infinity_ax_def by auto +qed + +end (* G_generic' *) +end \ No newline at end of file diff --git a/thys/Forcing/Interface.thy b/thys/Forcing/Interface.thy new file mode 100644 --- /dev/null +++ b/thys/Forcing/Interface.thy @@ -0,0 +1,1376 @@ +section\Interface between set models and Constructibility\ + +text\This theory provides an interface between Paulson's +relativization results and set models of ZFC. In particular, +it is used to prove that the locale \<^term>\forcing_data\ is +a sublocale of all relevant locales in ZF-Constructibility +(\<^term>\M_trivial\, \<^term>\M_basic\, \<^term>\M_eclose\, etc).\ + +theory Interface + imports + Nat_Miscellanea + Relative_Univ + Synthetic_Definition +begin + +syntax + "_sats" :: "[i, i, i] \ o" ("(_, _ \ _)" [36,36,36] 60) +translations + "(M,env \ \)" \ "CONST sats(M,\,env)" + +abbreviation + dec10 :: i ("10") where "10 \ succ(9)" + +abbreviation + dec11 :: i ("11") where "11 \ succ(10)" + +abbreviation + dec12 :: i ("12") where "12 \ succ(11)" + +abbreviation + dec13 :: i ("13") where "13 \ succ(12)" + +abbreviation + dec14 :: i ("14") where "14 \ succ(13)" + + +definition + infinity_ax :: "(i \ o) \ o" where + "infinity_ax(M) \ + (\I[M]. (\z[M]. empty(M,z) \ z\I) \ (\y[M]. y\I \ (\sy[M]. successor(M,y,sy) \ sy\I)))" + +definition + choice_ax :: "(i\o) \ o" where + "choice_ax(M) \ \x[M]. \a[M]. \f[M]. ordinal(M,a) \ surjection(M,a,x,f)" + +context M_basic begin + +lemma choice_ax_abs : + "choice_ax(M) \ (\x[M]. \a[M]. \f[M]. Ord(a) \ f \ surj(a,x))" + unfolding choice_ax_def + by (simp) + +end (* M_basic *) + +definition + wellfounded_trancl :: "[i=>o,i,i,i] => o" where + "wellfounded_trancl(M,Z,r,p) \ + \w[M]. \wx[M]. \rp[M]. + w \ Z & pair(M,w,p,wx) & tran_closure(M,r,rp) & wx \ rp" + +lemma empty_intf : + "infinity_ax(M) \ + (\z[M]. 
empty(M,z))" + by (auto simp add: empty_def infinity_ax_def) + +lemma Transset_intf : + "Transset(M) \ y\x \ x \ M \ y \ M" + by (simp add: Transset_def,auto) + +locale M_ZF_trans = + fixes M + assumes + upair_ax: "upair_ax(##M)" + and Union_ax: "Union_ax(##M)" + and power_ax: "power_ax(##M)" + and extensionality: "extensionality(##M)" + and foundation_ax: "foundation_ax(##M)" + and infinity_ax: "infinity_ax(##M)" + and separation_ax: "\\formula \ env\list(M) \ arity(\) \ 1 #+ length(env) \ + separation(##M,\x. sats(M,\,[x] @ env))" + and replacement_ax: "\\formula \ env\list(M) \ arity(\) \ 2 #+ length(env) \ + strong_replacement(##M,\x y. sats(M,\,[x,y] @ env))" + and trans_M: "Transset(M)" +begin + + +lemma TranssetI : + "(\y x. y\x \ x\M \ y\M) \ Transset(M)" + by (auto simp add: Transset_def) + +lemma zero_in_M: "0 \ M" +proof - + from infinity_ax have + "(\z[##M]. empty(##M,z))" + by (rule empty_intf) + then obtain z where + zm: "empty(##M,z)" "z\M" + by auto + with trans_M have "z=0" + by (simp add: empty_def, blast intro: Transset_intf ) + with zm show ?thesis + by simp +qed + +subsection\Interface with \<^term>\M_trivial\\ +lemma mtrans : + "M_trans(##M)" + using Transset_intf[OF trans_M] zero_in_M exI[of "\x. x\M"] + by unfold_locales auto + + +lemma mtriv : + "M_trivial(##M)" + using trans_M M_trivial.intro mtrans M_trivial_axioms.intro upair_ax Union_ax + by simp + +end + +sublocale M_ZF_trans \ M_trivial "##M" + by (rule mtriv) + +context M_ZF_trans +begin + +subsection\Interface with \<^term>\M_basic\\ + +(* Inter_separation: "M(A) \ separation(M, \x. \ y[M]. y\A \ x\y)" *) +schematic_goal inter_fm_auto: + assumes + "nth(i,env) = x" "nth(j,env) = B" + "i \ nat" "j \ nat" "env \ list(A)" + shows + "(\y\A . y\B \ x\y) \ sats(A,?ifm(i,j),env)" + by (insert assms ; (rule sep_rules | simp)+) + +lemma inter_sep_intf : + assumes + "A\M" + shows + "separation(##M,\x . \y\M . y\A \ x\y)" +proof - + obtain ifm where + fmsats:"\env. env\list(M) \ (\ y\M. y\(nth(1,env)) \ nth(0,env)\y) + \ sats(M,ifm(0,1),env)" + and + "ifm(0,1) \ formula" + and + "arity(ifm(0,1)) = 2" + using \A\M\ inter_fm_auto + by (simp del:FOL_sats_iff add: nat_simp_union) + then + have "\a\M. separation(##M, \x. sats(M,ifm(0,1) , [x, a]))" + using separation_ax by simp + moreover + have "(\y\M . y\a \ x\y) \ sats(M,ifm(0,1),[x,a])" + if "a\M" "x\M" for a x + using that fmsats[of "[x,a]"] by simp + ultimately + have "\a\M. separation(##M, \x . \y\M . y\a \ x\y)" + unfolding separation_def by simp + with \A\M\ show ?thesis by simp +qed + + +(* Diff_separation: "M(B) \ separation(M, \x. x \ B)" *) +schematic_goal diff_fm_auto: + assumes + "nth(i,env) = x" "nth(j,env) = B" + "i \ nat" "j \ nat" "env \ list(A)" + shows + "x\B \ sats(A,?dfm(i,j),env)" + by (insert assms ; (rule sep_rules | simp)+) + +lemma diff_sep_intf : + assumes + "B\M" + shows + "separation(##M,\x . x\B)" +proof - + obtain dfm where + fmsats:"\env. env\list(M) \ nth(0,env)\nth(1,env) + \ sats(M,dfm(0,1),env)" + and + "dfm(0,1) \ formula" + and + "arity(dfm(0,1)) = 2" + using \B\M\ diff_fm_auto + by (simp del:FOL_sats_iff add: nat_simp_union) + then + have "\b\M. separation(##M, \x. sats(M,dfm(0,1) , [x, b]))" + using separation_ax by simp + moreover + have "x\b \ sats(M,dfm(0,1),[x,b])" + if "b\M" "x\M" for b x + using that fmsats[of "[x,b]"] by simp + ultimately + have "\b\M. separation(##M, \x . 
x\b)" + unfolding separation_def by simp + with \B\M\ show ?thesis by simp +qed + +schematic_goal cprod_fm_auto: + assumes + "nth(i,env) = z" "nth(j,env) = B" "nth(h,env) = C" + "i \ nat" "j \ nat" "h \ nat" "env \ list(A)" + shows + "(\x\A. x\B \ (\y\A. y\C \ pair(##A,x,y,z))) \ sats(A,?cpfm(i,j,h),env)" + by (insert assms ; (rule sep_rules | simp)+) + + +lemma cartprod_sep_intf : + assumes + "A\M" + and + "B\M" + shows + "separation(##M,\z. \x\M. x\A \ (\y\M. y\B \ pair(##M,x,y,z)))" +proof - + obtain cpfm where + fmsats:"\env. env\list(M) \ + (\x\M. x\nth(1,env) \ (\y\M. y\nth(2,env) \ pair(##M,x,y,nth(0,env)))) + \ sats(M,cpfm(0,1,2),env)" + and + "cpfm(0,1,2) \ formula" + and + "arity(cpfm(0,1,2)) = 3" + using cprod_fm_auto by (simp del:FOL_sats_iff add: fm_defs nat_simp_union) + then + have "\a\M. \b\M. separation(##M, \z. sats(M,cpfm(0,1,2) , [z, a, b]))" + using separation_ax by simp + moreover + have "(\x\M. x\a \ (\y\M. y\b \ pair(##M,x,y,z))) \ sats(M,cpfm(0,1,2),[z,a,b])" + if "a\M" "b\M" "z\M" for a b z + using that fmsats[of "[z,a,b]"] by simp + ultimately + have "\a\M. \b\M. separation(##M, \z . (\x\M. x\a \ (\y\M. y\b \ pair(##M,x,y,z))))" + unfolding separation_def by simp + with \A\M\ \B\M\ show ?thesis by simp +qed + +schematic_goal im_fm_auto: + assumes + "nth(i,env) = y" "nth(j,env) = r" "nth(h,env) = B" + "i \ nat" "j \ nat" "h \ nat" "env \ list(A)" + shows + "(\p\A. p\r & (\x\A. x\B & pair(##A,x,y,p))) \ sats(A,?imfm(i,j,h),env)" + by (insert assms ; (rule sep_rules | simp)+) + +lemma image_sep_intf : + assumes + "A\M" + and + "r\M" + shows + "separation(##M, \y. \p\M. p\r & (\x\M. x\A & pair(##M,x,y,p)))" +proof - + obtain imfm where + fmsats:"\env. env\list(M) \ + (\p\M. p\nth(1,env) & (\x\M. x\nth(2,env) & pair(##M,x,nth(0,env),p))) + \ sats(M,imfm(0,1,2),env)" + and + "imfm(0,1,2) \ formula" + and + "arity(imfm(0,1,2)) = 3" + using im_fm_auto by (simp del:FOL_sats_iff pair_abs add: fm_defs nat_simp_union) + then + have "\r\M. \a\M. separation(##M, \y. sats(M,imfm(0,1,2) , [y,r,a]))" + using separation_ax by simp + moreover + have "(\p\M. p\k & (\x\M. x\a & pair(##M,x,y,p))) \ sats(M,imfm(0,1,2),[y,k,a])" + if "k\M" "a\M" "y\M" for k a y + using that fmsats[of "[y,k,a]"] by simp + ultimately + have "\k\M. \a\M. separation(##M, \y . \p\M. p\k & (\x\M. x\a & pair(##M,x,y,p)))" + unfolding separation_def by simp + with \r\M\ \A\M\ show ?thesis by simp +qed + +schematic_goal con_fm_auto: + assumes + "nth(i,env) = z" "nth(j,env) = R" + "i \ nat" "j \ nat" "env \ list(A)" + shows + "(\p\A. p\R & (\x\A.\y\A. pair(##A,x,y,p) & pair(##A,y,x,z))) + \ sats(A,?cfm(i,j),env)" + by (insert assms ; (rule sep_rules | simp)+) + + +lemma converse_sep_intf : + assumes + "R\M" + shows + "separation(##M,\z. \p\M. p\R & (\x\M.\y\M. pair(##M,x,y,p) & pair(##M,y,x,z)))" +proof - + obtain cfm where + fmsats:"\env. env\list(M) \ + (\p\M. p\nth(1,env) & (\x\M.\y\M. pair(##M,x,y,p) & pair(##M,y,x,nth(0,env)))) + \ sats(M,cfm(0,1),env)" + and + "cfm(0,1) \ formula" + and + "arity(cfm(0,1)) = 2" + using con_fm_auto by (simp del:FOL_sats_iff pair_abs add: fm_defs nat_simp_union) + then + have "\r\M. separation(##M, \z. sats(M,cfm(0,1) , [z,r]))" + using separation_ax by simp + moreover + have "(\p\M. p\r & (\x\M.\y\M. pair(##M,x,y,p) & pair(##M,y,x,z))) \ + sats(M,cfm(0,1),[z,r])" + if "z\M" "r\M" for z r + using that fmsats[of "[z,r]"] by simp + ultimately + have "\r\M. separation(##M, \z . \p\M. p\r & (\x\M.\y\M. 
pair(##M,x,y,p) & pair(##M,y,x,z)))" + unfolding separation_def by simp + with \R\M\ show ?thesis by simp +qed + + +schematic_goal rest_fm_auto: + assumes + "nth(i,env) = z" "nth(j,env) = C" + "i \ nat" "j \ nat" "env \ list(A)" + shows + "(\x\A. x\C & (\y\A. pair(##A,x,y,z))) + \ sats(A,?rfm(i,j),env)" + by (insert assms ; (rule sep_rules | simp)+) + + +lemma restrict_sep_intf : + assumes + "A\M" + shows + "separation(##M,\z. \x\M. x\A & (\y\M. pair(##M,x,y,z)))" +proof - + obtain rfm where + fmsats:"\env. env\list(M) \ + (\x\M. x\nth(1,env) & (\y\M. pair(##M,x,y,nth(0,env)))) + \ sats(M,rfm(0,1),env)" + and + "rfm(0,1) \ formula" + and + "arity(rfm(0,1)) = 2" + using rest_fm_auto by (simp del:FOL_sats_iff pair_abs add: fm_defs nat_simp_union) + then + have "\a\M. separation(##M, \z. sats(M,rfm(0,1) , [z,a]))" + using separation_ax by simp + moreover + have "(\x\M. x\a & (\y\M. pair(##M,x,y,z))) \ + sats(M,rfm(0,1),[z,a])" + if "z\M" "a\M" for z a + using that fmsats[of "[z,a]"] by simp + ultimately + have "\a\M. separation(##M, \z . \x\M. x\a & (\y\M. pair(##M,x,y,z)))" + unfolding separation_def by simp + with \A\M\ show ?thesis by simp +qed + +schematic_goal comp_fm_auto: + assumes + "nth(i,env) = xz" "nth(j,env) = S" "nth(h,env) = R" + "i \ nat" "j \ nat" "h \ nat" "env \ list(A)" + shows + "(\x\A. \y\A. \z\A. \xy\A. \yz\A. + pair(##A,x,z,xz) & pair(##A,x,y,xy) & pair(##A,y,z,yz) & xy\S & yz\R) + \ sats(A,?cfm(i,j,h),env)" + by (insert assms ; (rule sep_rules | simp)+) + + +lemma comp_sep_intf : + assumes + "R\M" + and + "S\M" + shows + "separation(##M,\xz. \x\M. \y\M. \z\M. \xy\M. \yz\M. + pair(##M,x,z,xz) & pair(##M,x,y,xy) & pair(##M,y,z,yz) & xy\S & yz\R)" +proof - + obtain cfm where + fmsats:"\env. env\list(M) \ + (\x\M. \y\M. \z\M. \xy\M. \yz\M. pair(##M,x,z,nth(0,env)) & + pair(##M,x,y,xy) & pair(##M,y,z,yz) & xy\nth(1,env) & yz\nth(2,env)) + \ sats(M,cfm(0,1,2),env)" + and + "cfm(0,1,2) \ formula" + and + "arity(cfm(0,1,2)) = 3" + using comp_fm_auto by (simp del:FOL_sats_iff pair_abs add: fm_defs nat_simp_union) + then + have "\r\M. \s\M. separation(##M, \y. sats(M,cfm(0,1,2) , [y,s,r]))" + using separation_ax by simp + moreover + have "(\x\M. \y\M. \z\M. \xy\M. \yz\M. + pair(##M,x,z,xz) & pair(##M,x,y,xy) & pair(##M,y,z,yz) & xy\s & yz\r) + \ sats(M,cfm(0,1,2) , [xz,s,r])" + if "xz\M" "s\M" "r\M" for xz s r + using that fmsats[of "[xz,s,r]"] by simp + ultimately + have "\s\M. \r\M. separation(##M, \xz . \x\M. \y\M. \z\M. \xy\M. \yz\M. + pair(##M,x,z,xz) & pair(##M,x,y,xy) & pair(##M,y,z,yz) & xy\s & yz\r)" + unfolding separation_def by simp + with \S\M\ \R\M\ show ?thesis by simp +qed + + +schematic_goal pred_fm_auto: + assumes + "nth(i,env) = y" "nth(j,env) = R" "nth(h,env) = X" + "i \ nat" "j \ nat" "h \ nat" "env \ list(A)" + shows + "(\p\A. p\R & pair(##A,y,X,p)) \ sats(A,?pfm(i,j,h),env)" + by (insert assms ; (rule sep_rules | simp)+) + + +lemma pred_sep_intf: + assumes + "R\M" + and + "X\M" + shows + "separation(##M, \y. \p\M. p\R & pair(##M,y,X,p))" +proof - + obtain pfm where + fmsats:"\env. env\list(M) \ + (\p\M. p\nth(1,env) & pair(##M,nth(0,env),nth(2,env),p)) \ sats(M,pfm(0,1,2),env)" + and + "pfm(0,1,2) \ formula" + and + "arity(pfm(0,1,2)) = 3" + using pred_fm_auto by (simp del:FOL_sats_iff pair_abs add: fm_defs nat_simp_union) + then + have "\x\M. \r\M. separation(##M, \y. sats(M,pfm(0,1,2) , [y,r,x]))" + using separation_ax by simp + moreover + have "(\p\M. 
p\r & pair(##M,y,x,p)) + \ sats(M,pfm(0,1,2) , [y,r,x])" + if "y\M" "r\M" "x\M" for y x r + using that fmsats[of "[y,r,x]"] by simp + ultimately + have "\x\M. \r\M. separation(##M, \ y . \p\M. p\r & pair(##M,y,x,p))" + unfolding separation_def by simp + with \X\M\ \R\M\ show ?thesis by simp +qed + +(* Memrel_separation: + "separation(M, \z. \x[M]. \y[M]. pair(M,x,y,z) & x \ y)" +*) +schematic_goal mem_fm_auto: + assumes + "nth(i,env) = z" "i \ nat" "env \ list(A)" + shows + "(\x\A. \y\A. pair(##A,x,y,z) & x \ y) \ sats(A,?mfm(i),env)" + by (insert assms ; (rule sep_rules | simp)+) + +lemma memrel_sep_intf: + "separation(##M, \z. \x\M. \y\M. pair(##M,x,y,z) & x \ y)" +proof - + obtain mfm where + fmsats:"\env. env\list(M) \ + (\x\M. \y\M. pair(##M,x,y,nth(0,env)) & x \ y) \ sats(M,mfm(0),env)" + and + "mfm(0) \ formula" + and + "arity(mfm(0)) = 1" + using mem_fm_auto by (simp del:FOL_sats_iff pair_abs add: fm_defs nat_simp_union) + then + have "separation(##M, \z. sats(M,mfm(0) , [z]))" + using separation_ax by simp + moreover + have "(\x\M. \y\M. pair(##M,x,y,z) & x \ y) \ sats(M,mfm(0),[z])" + if "z\M" for z + using that fmsats[of "[z]"] by simp + ultimately + have "separation(##M, \z . \x\M. \y\M. pair(##M,x,y,z) & x \ y)" + unfolding separation_def by simp + then show ?thesis by simp +qed + +schematic_goal recfun_fm_auto: + assumes + "nth(i1,env) = x" "nth(i2,env) = r" "nth(i3,env) = f" "nth(i4,env) = g" "nth(i5,env) = a" + "nth(i6,env) = b" "i1\nat" "i2\nat" "i3\nat" "i4\nat" "i5\nat" "i6\nat" "env \ list(A)" + shows + "(\xa\A. \xb\A. pair(##A,x,a,xa) & xa \ r & pair(##A,x,b,xb) & xb \ r & + (\fx\A. \gx\A. fun_apply(##A,f,x,fx) & fun_apply(##A,g,x,gx) & fx \ gx)) + \ sats(A,?rffm(i1,i2,i3,i4,i5,i6),env)" + by (insert assms ; (rule sep_rules | simp)+) + + +lemma is_recfun_sep_intf : + assumes + "r\M" "f\M" "g\M" "a\M" "b\M" + shows + "separation(##M,\x. \xa\M. \xb\M. + pair(##M,x,a,xa) & xa \ r & pair(##M,x,b,xb) & xb \ r & + (\fx\M. \gx\M. fun_apply(##M,f,x,fx) & fun_apply(##M,g,x,gx) & + fx \ gx))" +proof - + obtain rffm where + fmsats:"\env. env\list(M) \ + (\xa\M. \xb\M. pair(##M,nth(0,env),nth(4,env),xa) & xa \ nth(1,env) & + pair(##M,nth(0,env),nth(5,env),xb) & xb \ nth(1,env) & (\fx\M. \gx\M. + fun_apply(##M,nth(2,env),nth(0,env),fx) & fun_apply(##M,nth(3,env),nth(0,env),gx) & fx \ gx)) + \ sats(M,rffm(0,1,2,3,4,5),env)" + and + "rffm(0,1,2,3,4,5) \ formula" + and + "arity(rffm(0,1,2,3,4,5)) = 6" + using recfun_fm_auto by (simp del:FOL_sats_iff pair_abs add: fm_defs nat_simp_union) + then + have "\a1\M. \a2\M. \a3\M. \a4\M. \a5\M. + separation(##M, \x. sats(M,rffm(0,1,2,3,4,5) , [x,a1,a2,a3,a4,a5]))" + using separation_ax by simp + moreover + have "(\xa\M. \xb\M. pair(##M,x,a4,xa) & xa \ a1 & pair(##M,x,a5,xb) & xb \ a1 & + (\fx\M. \gx\M. fun_apply(##M,a2,x,fx) & fun_apply(##M,a3,x,gx) & fx \ gx)) + \ sats(M,rffm(0,1,2,3,4,5) , [x,a1,a2,a3,a4,a5])" + if "x\M" "a1\M" "a2\M" "a3\M" "a4\M" "a5\M" for x a1 a2 a3 a4 a5 + using that fmsats[of "[x,a1,a2,a3,a4,a5]"] by simp + ultimately + have "\a1\M. \a2\M. \a3\M. \a4\M. \a5\M. separation(##M, \ x . + \xa\M. \xb\M. pair(##M,x,a4,xa) & xa \ a1 & pair(##M,x,a5,xb) & xb \ a1 & + (\fx\M. \gx\M. 
fun_apply(##M,a2,x,fx) & fun_apply(##M,a3,x,gx) & fx \ gx))" + unfolding separation_def by simp + with \r\M\ \f\M\ \g\M\ \a\M\ \b\M\ show ?thesis by simp +qed + + +(* Instance of Replacement for M_basic *) + +schematic_goal funsp_fm_auto: + assumes + "nth(i,env) = p" "nth(j,env) = z" "nth(h,env) = n" + "i \ nat" "j \ nat" "h \ nat" "env \ list(A)" + shows + "(\f\A. \b\A. \nb\A. \cnbf\A. pair(##A,f,b,p) & pair(##A,n,b,nb) & is_cons(##A,nb,f,cnbf) & + upair(##A,cnbf,cnbf,z)) \ sats(A,?fsfm(i,j,h),env)" + by (insert assms ; (rule sep_rules | simp)+) + + +lemma funspace_succ_rep_intf : + assumes + "n\M" + shows + "strong_replacement(##M, + \p z. \f\M. \b\M. \nb\M. \cnbf\M. + pair(##M,f,b,p) & pair(##M,n,b,nb) & is_cons(##M,nb,f,cnbf) & + upair(##M,cnbf,cnbf,z))" +proof - + obtain fsfm where + fmsats:"env\list(M) \ + (\f\M. \b\M. \nb\M. \cnbf\M. pair(##M,f,b,nth(0,env)) & pair(##M,nth(2,env),b,nb) + & is_cons(##M,nb,f,cnbf) & upair(##M,cnbf,cnbf,nth(1,env))) + \ sats(M,fsfm(0,1,2),env)" + and "fsfm(0,1,2) \ formula" and "arity(fsfm(0,1,2)) = 3" for env + using funsp_fm_auto[of concl:M] by (simp del:FOL_sats_iff pair_abs add: fm_defs nat_simp_union) + then + have "\n0\M. strong_replacement(##M, \p z. sats(M,fsfm(0,1,2) , [p,z,n0]))" + using replacement_ax by simp + moreover + have "(\f\M. \b\M. \nb\M. \cnbf\M. pair(##M,f,b,p) & pair(##M,n0,b,nb) & + is_cons(##M,nb,f,cnbf) & upair(##M,cnbf,cnbf,z)) + \ sats(M,fsfm(0,1,2) , [p,z,n0])" + if "p\M" "z\M" "n0\M" for p z n0 + using that fmsats[of "[p,z,n0]"] by simp + ultimately + have "\n0\M. strong_replacement(##M, \ p z. + \f\M. \b\M. \nb\M. \cnbf\M. pair(##M,f,b,p) & pair(##M,n0,b,nb) & + is_cons(##M,nb,f,cnbf) & upair(##M,cnbf,cnbf,z))" + unfolding strong_replacement_def univalent_def by simp + with \n\M\ show ?thesis by simp +qed + + +(* Interface with M_basic *) + +lemmas M_basic_sep_instances = + inter_sep_intf diff_sep_intf cartprod_sep_intf + image_sep_intf converse_sep_intf restrict_sep_intf + pred_sep_intf memrel_sep_intf comp_sep_intf is_recfun_sep_intf + +lemma mbasic : "M_basic(##M)" + using trans_M zero_in_M power_ax M_basic_sep_instances funspace_succ_rep_intf mtriv + by unfold_locales auto + +end + +sublocale M_ZF_trans \ M_basic "##M" + by (rule mbasic) + +subsection\Interface with \<^term>\M_trancl\\ + +(* rtran_closure_mem *) +schematic_goal rtran_closure_mem_auto: + assumes + "nth(i,env) = p" "nth(j,env) = r" "nth(k,env) = B" + "i \ nat" "j \ nat" "k \ nat" "env \ list(A)" + shows + "rtran_closure_mem(##A,B,r,p) \ sats(A,?rcfm(i,j,k),env)" + unfolding rtran_closure_mem_def + by (insert assms ; (rule sep_rules | simp)+) + + +lemma (in M_ZF_trans) rtrancl_separation_intf: + assumes + "r\M" + and + "A\M" + shows + "separation (##M, rtran_closure_mem(##M,A,r))" +proof - + obtain rcfm where + fmsats:"\env. env\list(M) \ + (rtran_closure_mem(##M,nth(2,env),nth(1,env),nth(0,env))) \ sats(M,rcfm(0,1,2),env)" + and + "rcfm(0,1,2) \ formula" + and + "arity(rcfm(0,1,2)) = 3" + using rtran_closure_mem_auto by (simp del:FOL_sats_iff pair_abs add: fm_defs nat_simp_union) + then + have "\x\M. \a\M. separation(##M, \y. sats(M,rcfm(0,1,2) , [y,x,a]))" + using separation_ax by simp + moreover + have "(rtran_closure_mem(##M,a,x,y)) + \ sats(M,rcfm(0,1,2) , [y,x,a])" + if "y\M" "x\M" "a\M" for y x a + using that fmsats[of "[y,x,a]"] by simp + ultimately + have "\x\M. \a\M. 
separation(##M, rtran_closure_mem(##M,a,x))" + unfolding separation_def by simp + with \r\M\ \A\M\ show ?thesis by simp +qed + +schematic_goal rtran_closure_fm_auto: + assumes + "nth(i,env) = r" "nth(j,env) = rp" + "i \ nat" "j \ nat" "env \ list(A)" + shows + "rtran_closure(##A,r,rp) \ sats(A,?rtc(i,j),env)" + unfolding rtran_closure_def + by (insert assms ; (rule sep_rules rtran_closure_mem_auto | simp)+) + +schematic_goal trans_closure_fm_auto: + assumes + "nth(i,env) = r" "nth(j,env) = rp" + "i \ nat" "j \ nat" "env \ list(A)" + shows + "tran_closure(##A,r,rp) \ sats(A,?tc(i,j),env)" + unfolding tran_closure_def + by (insert assms ; (rule sep_rules rtran_closure_fm_auto | simp))+ + +synthesize "trans_closure_fm" from_schematic trans_closure_fm_auto + +schematic_goal wellfounded_trancl_fm_auto: + assumes + "nth(i,env) = p" "nth(j,env) = r" "nth(k,env) = B" + "i \ nat" "j \ nat" "k \ nat" "env \ list(A)" + shows + "wellfounded_trancl(##A,B,r,p) \ sats(A,?wtf(i,j,k),env)" + unfolding wellfounded_trancl_def + by (insert assms ; (rule sep_rules trans_closure_fm_iff_sats | simp)+) + +lemma (in M_ZF_trans) wftrancl_separation_intf: + assumes + "r\M" + and + "Z\M" + shows + "separation (##M, wellfounded_trancl(##M,Z,r))" +proof - + obtain rcfm where + fmsats:"\env. env\list(M) \ + (wellfounded_trancl(##M,nth(2,env),nth(1,env),nth(0,env))) \ sats(M,rcfm(0,1,2),env)" + and + "rcfm(0,1,2) \ formula" + and + "arity(rcfm(0,1,2)) = 3" + using wellfounded_trancl_fm_auto[of concl:M "nth(2,_)"] unfolding fm_defs trans_closure_fm_def + by (simp del:FOL_sats_iff pair_abs add: fm_defs nat_simp_union) + then + have "\x\M. \z\M. separation(##M, \y. sats(M,rcfm(0,1,2) , [y,x,z]))" + using separation_ax by simp + moreover + have "(wellfounded_trancl(##M,z,x,y)) + \ sats(M,rcfm(0,1,2) , [y,x,z])" + if "y\M" "x\M" "z\M" for y x z + using that fmsats[of "[y,x,z]"] by simp + ultimately + have "\x\M. \z\M. separation(##M, wellfounded_trancl(##M,z,x))" + unfolding separation_def by simp + with \r\M\ \Z\M\ show ?thesis by simp +qed + +(* nat \ M *) + +lemma (in M_ZF_trans) finite_sep_intf: + "separation(##M, \x. x\nat)" +proof - + have "arity(finite_ordinal_fm(0)) = 1 " + unfolding finite_ordinal_fm_def limit_ordinal_fm_def empty_fm_def succ_fm_def cons_fm_def + union_fm_def upair_fm_def + by (simp add: nat_union_abs1 Un_commute) + with separation_ax + have "(\v\M. separation(##M,\x. sats(M,finite_ordinal_fm(0),[x,v])))" + by simp + then have "(\v\M. separation(##M,finite_ordinal(##M)))" + unfolding separation_def by simp + then have "separation(##M,finite_ordinal(##M))" + using zero_in_M by auto + then show ?thesis unfolding separation_def by simp +qed + + +lemma (in M_ZF_trans) nat_subset_I' : + "\ I\M ; 0\I ; \x. x\I \ succ(x)\I \ \ nat \ I" + by (rule subsetI,induct_tac x,simp+) + + +lemma (in M_ZF_trans) nat_subset_I : + "\I\M. nat \ I" +proof - + have "\I\M. 0\I \ (\x\M. x\I \ succ(x)\I)" + using infinity_ax unfolding infinity_ax_def by auto + then obtain I where + "I\M" "0\I" "(\x\M. x\I \ succ(x)\I)" + by auto + then have "\x. x\I \ succ(x)\I" + using Transset_intf[OF trans_M] by simp + then have "nat\I" + using \I\M\ \0\I\ nat_subset_I' by simp + then show ?thesis using \I\M\ by auto +qed + +lemma (in M_ZF_trans) nat_in_M : + "nat \ M" +proof - + have 1:"{x\B . x\A}=A" if "A\B" for A B + using that by auto + obtain I where + "I\M" "nat\I" + using nat_subset_I by auto + then have "{x\I . x\nat} \ M" + using finite_sep_intf separation_closed[of "\x . 
x\nat"] by simp + then show ?thesis + using \nat\I\ 1 by simp +qed + (* end nat \ M *) + + +lemma (in M_ZF_trans) mtrancl : "M_trancl(##M)" + using mbasic rtrancl_separation_intf wftrancl_separation_intf nat_in_M + wellfounded_trancl_def + by unfold_locales auto + +sublocale M_ZF_trans \ M_trancl "##M" + by (rule mtrancl) + +subsection\Interface with \<^term>\M_eclose\\ + +lemma repl_sats: + assumes + sat:"\x z. x\M \ z\M \ sats(M,\,Cons(x,Cons(z,env))) \ P(x,z)" + shows + "strong_replacement(##M,\x z. sats(M,\,Cons(x,Cons(z,env)))) \ + strong_replacement(##M,P)" + by (rule strong_replacement_cong,simp add:sat) + +lemma (in M_ZF_trans) nat_trans_M : + "n\M" if "n\nat" for n + using that nat_in_M Transset_intf[OF trans_M] by simp + +lemma (in M_ZF_trans) list_repl1_intf: + assumes + "A\M" + shows + "iterates_replacement(##M, is_list_functor(##M,A), 0)" +proof - + { + fix n + assume "n\nat" + have "succ(n)\M" + using \n\nat\ nat_trans_M by simp + then have 1:"Memrel(succ(n))\M" + using \n\nat\ Memrel_closed by simp + have "0\M" + using nat_0I nat_trans_M by simp + then have "is_list_functor(##M, A, a, b) + \ sats(M, list_functor_fm(13,1,0), [b,a,c,d,a0,a1,a2,a3,a4,y,x,z,Memrel(succ(n)),A,0])" + if "a\M" "b\M" "c\M" "d\M" "a0\M" "a1\M" "a2\M" "a3\M" "a4\M" "y\M" "x\M" "z\M" + for a b c d a0 a1 a2 a3 a4 y x z + using that 1 \A\M\ list_functor_iff_sats by simp + then have "sats(M, iterates_MH_fm(list_functor_fm(13,1,0),10,2,1,0), [a0,a1,a2,a3,a4,y,x,z,Memrel(succ(n)),A,0]) + \ iterates_MH(##M,is_list_functor(##M,A),0,a2, a1, a0)" + if "a0\M" "a1\M" "a2\M" "a3\M" "a4\M" "y\M" "x\M" "z\M" + for a0 a1 a2 a3 a4 y x z + using that sats_iterates_MH_fm[of M "is_list_functor(##M,A)" _] 1 \0\M\ \A\M\ by simp + then have 2:"sats(M, is_wfrec_fm(iterates_MH_fm(list_functor_fm(13,1,0),10,2,1,0),3,1,0), + [y,x,z,Memrel(succ(n)),A,0]) + \ + is_wfrec(##M, iterates_MH(##M,is_list_functor(##M,A),0) , Memrel(succ(n)), x, y)" + if "y\M" "x\M" "z\M" for y x z + using that sats_is_wfrec_fm 1 \0\M\ \A\M\ by simp + let + ?f="Exists(And(pair_fm(1,0,2), + is_wfrec_fm(iterates_MH_fm(list_functor_fm(13,1,0),10,2,1,0),3,1,0)))" + have satsf:"sats(M, ?f, [x,z,Memrel(succ(n)),A,0]) + \ + (\y\M. pair(##M,x,y,z) & + is_wfrec(##M, iterates_MH(##M,is_list_functor(##M,A),0) , Memrel(succ(n)), x, y))" + if "x\M" "z\M" for x z + using that 2 1 \0\M\ \A\M\ by (simp del:pair_abs) + have "arity(?f) = 5" + unfolding iterates_MH_fm_def is_wfrec_fm_def is_recfun_fm_def is_nat_case_fm_def + restriction_fm_def list_functor_fm_def number1_fm_def cartprod_fm_def + sum_fm_def quasinat_fm_def pre_image_fm_def fm_defs + by (simp add:nat_simp_union) + then + have "strong_replacement(##M,\x z. sats(M,?f,[x,z,Memrel(succ(n)),A,0]))" + using replacement_ax 1 \A\M\ \0\M\ by simp + then + have "strong_replacement(##M,\x z. + \y\M. pair(##M,x,y,z) & is_wfrec(##M, iterates_MH(##M,is_list_functor(##M,A),0) , + Memrel(succ(n)), x, y))" + using repl_sats[of M ?f "[Memrel(succ(n)),A,0]"] satsf by (simp del:pair_abs) + } + then + show ?thesis unfolding iterates_replacement_def wfrec_replacement_def by simp +qed + + + +(* Iterates_replacement para predicados sin parámetros *) +lemma (in M_ZF_trans) iterates_repl_intf : + assumes + "v\M" and + isfm:"is_F_fm \ formula" and + arty:"arity(is_F_fm)=2" and + satsf: "\a b env'. 
\ a\M ; b\M ; env'\list(M) \ + \ is_F(a,b) \ sats(M, is_F_fm, [b,a]@env')" + shows + "iterates_replacement(##M,is_F,v)" +proof - + { + fix n + assume "n\nat" + have "succ(n)\M" + using \n\nat\ nat_trans_M by simp + then have 1:"Memrel(succ(n))\M" + using \n\nat\ Memrel_closed by simp + { + fix a0 a1 a2 a3 a4 y x z + assume as:"a0\M" "a1\M" "a2\M" "a3\M" "a4\M" "y\M" "x\M" "z\M" + have "sats(M, is_F_fm, Cons(b,Cons(a,Cons(c,Cons(d,[a0,a1,a2,a3,a4,y,x,z,Memrel(succ(n)),v]))))) + \ is_F(a,b)" + if "a\M" "b\M" "c\M" "d\M" for a b c d + using as that 1 satsf[of a b "[c,d,a0,a1,a2,a3,a4,y,x,z,Memrel(succ(n)),v]"] \v\M\ by simp + then + have "sats(M, iterates_MH_fm(is_F_fm,9,2,1,0), [a0,a1,a2,a3,a4,y,x,z,Memrel(succ(n)),v]) + \ iterates_MH(##M,is_F,v,a2, a1, a0)" + using as + sats_iterates_MH_fm[of M "is_F" "is_F_fm"] 1 \v\M\ by simp + } + then have 2:"sats(M, is_wfrec_fm(iterates_MH_fm(is_F_fm,9,2,1,0),3,1,0), + [y,x,z,Memrel(succ(n)),v]) + \ + is_wfrec(##M, iterates_MH(##M,is_F,v),Memrel(succ(n)), x, y)" + if "y\M" "x\M" "z\M" for y x z + using that sats_is_wfrec_fm 1 \v\M\ by simp + let + ?f="Exists(And(pair_fm(1,0,2), + is_wfrec_fm(iterates_MH_fm(is_F_fm,9,2,1,0),3,1,0)))" + have satsf:"sats(M, ?f, [x,z,Memrel(succ(n)),v]) + \ + (\y\M. pair(##M,x,y,z) & + is_wfrec(##M, iterates_MH(##M,is_F,v) , Memrel(succ(n)), x, y))" + if "x\M" "z\M" for x z + using that 2 1 \v\M\ by (simp del:pair_abs) + have "arity(?f) = 4" + unfolding iterates_MH_fm_def is_wfrec_fm_def is_recfun_fm_def is_nat_case_fm_def + restriction_fm_def pre_image_fm_def quasinat_fm_def fm_defs + using arty by (simp add:nat_simp_union) + then + have "strong_replacement(##M,\x z. sats(M,?f,[x,z,Memrel(succ(n)),v]))" + using replacement_ax 1 \v\M\ \is_F_fm\formula\ by simp + then + have "strong_replacement(##M,\x z. + \y\M. pair(##M,x,y,z) & is_wfrec(##M, iterates_MH(##M,is_F,v) , + Memrel(succ(n)), x, y))" + using repl_sats[of M ?f "[Memrel(succ(n)),v]"] satsf by (simp del:pair_abs) + } + then + show ?thesis unfolding iterates_replacement_def wfrec_replacement_def by simp +qed + +lemma (in M_ZF_trans) formula_repl1_intf : + "iterates_replacement(##M, is_formula_functor(##M), 0)" +proof - + have "0\M" + using nat_0I nat_trans_M by simp + have 1:"arity(formula_functor_fm(1,0)) = 2" + unfolding formula_functor_fm_def fm_defs sum_fm_def cartprod_fm_def number1_fm_def + by (simp add:nat_simp_union) + have 2:"formula_functor_fm(1,0)\formula" by simp + have "is_formula_functor(##M,a,b) \ + sats(M, formula_functor_fm(1,0), [b,a])" + if "a\M" "b\M" for a b + using that by simp + then show ?thesis using \0\M\ 1 2 iterates_repl_intf by simp +qed + +lemma (in M_ZF_trans) nth_repl_intf: + assumes + "l \ M" + shows + "iterates_replacement(##M,\l' t. 
is_tl(##M,l',t),l)" +proof - + have 1:"arity(tl_fm(1,0)) = 2" + unfolding tl_fm_def fm_defs quasilist_fm_def Cons_fm_def Nil_fm_def Inr_fm_def number1_fm_def + Inl_fm_def by (simp add:nat_simp_union) + have 2:"tl_fm(1,0)\formula" by simp + have "is_tl(##M,a,b) \ sats(M, tl_fm(1,0), [b,a])" + if "a\M" "b\M" for a b + using that by simp + then show ?thesis using \l\M\ 1 2 iterates_repl_intf by simp +qed + + +lemma (in M_ZF_trans) eclose_repl1_intf: + assumes + "A\M" + shows + "iterates_replacement(##M, big_union(##M), A)" +proof - + have 1:"arity(big_union_fm(1,0)) = 2" + unfolding big_union_fm_def fm_defs by (simp add:nat_simp_union) + have 2:"big_union_fm(1,0)\formula" by simp + have "big_union(##M,a,b) \ sats(M, big_union_fm(1,0), [b,a])" + if "a\M" "b\M" for a b + using that by simp + then show ?thesis using \A\M\ 1 2 iterates_repl_intf by simp +qed + +(* + and list_replacement2: + "M(A) \ strong_replacement(M, + \n y. n\nat & is_iterates(M, is_list_functor(M,A), 0, n, y))" + +*) +lemma (in M_ZF_trans) list_repl2_intf: + assumes + "A\M" + shows + "strong_replacement(##M,\n y. n\nat & is_iterates(##M, is_list_functor(##M,A), 0, n, y))" +proof - + have "0\M" + using nat_0I nat_trans_M by simp + have "is_list_functor(##M,A,a,b) \ + sats(M,list_functor_fm(13,1,0),[b,a,c,d,e,f,g,h,i,j,k,n,y,A,0,nat])" + if "a\M" "b\M" "c\M" "d\M" "e\M" "f\M""g\M""h\M""i\M""j\M" "k\M" "n\M" "y\M" + for a b c d e f g h i j k n y + using that \0\M\ nat_in_M \A\M\ by simp + then + have 1:"sats(M, is_iterates_fm(list_functor_fm(13,1,0),3,0,1),[n,y,A,0,nat] ) \ + is_iterates(##M, is_list_functor(##M,A), 0, n , y)" + if "n\M" "y\M" for n y + using that \0\M\ \A\M\ nat_in_M + sats_is_iterates_fm[of M "is_list_functor(##M,A)"] by simp + let ?f = "And(Member(0,4),is_iterates_fm(list_functor_fm(13,1,0),3,0,1))" + have satsf:"sats(M, ?f,[n,y,A,0,nat] ) \ + n\nat & is_iterates(##M, is_list_functor(##M,A), 0, n, y)" + if "n\M" "y\M" for n y + using that \0\M\ \A\M\ nat_in_M 1 by simp + have "arity(?f) = 5" + unfolding is_iterates_fm_def restriction_fm_def list_functor_fm_def number1_fm_def Memrel_fm_def + cartprod_fm_def sum_fm_def quasinat_fm_def pre_image_fm_def fm_defs is_wfrec_fm_def + is_recfun_fm_def iterates_MH_fm_def is_nat_case_fm_def + by (simp add:nat_simp_union) + then + have "strong_replacement(##M,\n y. sats(M,?f,[n,y,A,0,nat]))" + using replacement_ax 1 nat_in_M \A\M\ \0\M\ by simp + then + show ?thesis using repl_sats[of M ?f "[A,0,nat]"] satsf by simp +qed + +lemma (in M_ZF_trans) formula_repl2_intf: + "strong_replacement(##M,\n y. 
n\nat & is_iterates(##M, is_formula_functor(##M), 0, n, y))" +proof - + have "0\M" + using nat_0I nat_trans_M by simp + have "is_formula_functor(##M,a,b) \ + sats(M,formula_functor_fm(1,0),[b,a,c,d,e,f,g,h,i,j,k,n,y,0,nat])" + if "a\M" "b\M" "c\M" "d\M" "e\M" "f\M""g\M""h\M""i\M""j\M" "k\M" "n\M" "y\M" + for a b c d e f g h i j k n y + using that \0\M\ nat_in_M by simp + then + have 1:"sats(M, is_iterates_fm(formula_functor_fm(1,0),2,0,1),[n,y,0,nat] ) \ + is_iterates(##M, is_formula_functor(##M), 0, n , y)" + if "n\M" "y\M" for n y + using that \0\M\ nat_in_M + sats_is_iterates_fm[of M "is_formula_functor(##M)"] by simp + let ?f = "And(Member(0,3),is_iterates_fm(formula_functor_fm(1,0),2,0,1))" + have satsf:"sats(M, ?f,[n,y,0,nat] ) \ + n\nat & is_iterates(##M, is_formula_functor(##M), 0, n, y)" + if "n\M" "y\M" for n y + using that \0\M\ nat_in_M 1 by simp + have artyf:"arity(?f) = 4" + unfolding is_iterates_fm_def formula_functor_fm_def fm_defs sum_fm_def quasinat_fm_def + cartprod_fm_def number1_fm_def Memrel_fm_def ordinal_fm_def transset_fm_def + is_wfrec_fm_def is_recfun_fm_def iterates_MH_fm_def is_nat_case_fm_def subset_fm_def + pre_image_fm_def restriction_fm_def + by (simp add:nat_simp_union) + then + have "strong_replacement(##M,\n y. sats(M,?f,[n,y,0,nat]))" + using replacement_ax 1 artyf \0\M\ nat_in_M by simp + then + show ?thesis using repl_sats[of M ?f "[0,nat]"] satsf by simp +qed + + +(* + "M(A) \ strong_replacement(M, + \n y. n\nat & is_iterates(M, big_union(M), A, n, y))" +*) + +lemma (in M_ZF_trans) eclose_repl2_intf: + assumes + "A\M" + shows + "strong_replacement(##M,\n y. n\nat & is_iterates(##M, big_union(##M), A, n, y))" +proof - + have "big_union(##M,a,b) \ + sats(M,big_union_fm(1,0),[b,a,c,d,e,f,g,h,i,j,k,n,y,A,nat])" + if "a\M" "b\M" "c\M" "d\M" "e\M" "f\M""g\M""h\M""i\M""j\M" "k\M" "n\M" "y\M" + for a b c d e f g h i j k n y + using that \A\M\ nat_in_M by simp + then + have 1:"sats(M, is_iterates_fm(big_union_fm(1,0),2,0,1),[n,y,A,nat] ) \ + is_iterates(##M, big_union(##M), A, n , y)" + if "n\M" "y\M" for n y + using that \A\M\ nat_in_M + sats_is_iterates_fm[of M "big_union(##M)"] by simp + let ?f = "And(Member(0,3),is_iterates_fm(big_union_fm(1,0),2,0,1))" + have satsf:"sats(M, ?f,[n,y,A,nat] ) \ + n\nat & is_iterates(##M, big_union(##M), A, n, y)" + if "n\M" "y\M" for n y + using that \A\M\ nat_in_M 1 by simp + have artyf:"arity(?f) = 4" + unfolding is_iterates_fm_def formula_functor_fm_def fm_defs sum_fm_def quasinat_fm_def + cartprod_fm_def number1_fm_def Memrel_fm_def ordinal_fm_def transset_fm_def + is_wfrec_fm_def is_recfun_fm_def iterates_MH_fm_def is_nat_case_fm_def subset_fm_def + pre_image_fm_def restriction_fm_def + by (simp add:nat_simp_union) + then + have "strong_replacement(##M,\n y. sats(M,?f,[n,y,A,nat]))" + using replacement_ax 1 artyf \A\M\ nat_in_M by simp + then + show ?thesis using repl_sats[of M ?f "[A,nat]"] satsf by simp +qed + +lemma (in M_ZF_trans) mdatatypes : "M_datatypes(##M)" + using mtrancl list_repl1_intf list_repl2_intf formula_repl1_intf + formula_repl2_intf nth_repl_intf + by unfold_locales auto + +sublocale M_ZF_trans \ M_datatypes "##M" + by (rule mdatatypes) + +lemma (in M_ZF_trans) meclose : "M_eclose(##M)" + using mdatatypes eclose_repl1_intf eclose_repl2_intf + by unfold_locales auto + +sublocale M_ZF_trans \ M_eclose "##M" + by (rule meclose) + +(* Interface with locale M_eclose_pow *) + +(* "powerset(M,A,z) \ \x[M]. 
x \ z \ subset(M,x,A)" *) +definition + powerset_fm :: "[i,i] \ i" where + "powerset_fm(A,z) \ Forall(Iff(Member(0,succ(z)),subset_fm(0,succ(A))))" + +lemma powerset_type [TC]: + "\ x \ nat; y \ nat \ \ powerset_fm(x,y) \ formula" + by (simp add:powerset_fm_def) + +definition + is_powapply_fm :: "[i,i,i] \ i" where + "is_powapply_fm(f,y,z) \ + Exists(And(fun_apply_fm(succ(f), succ(y), 0), + Forall(Iff(Member(0, succ(succ(z))), + Forall(Implies(Member(0, 1), Member(0, 2)))))))" + +lemma is_powapply_type [TC] : + "\f\nat ; y\nat; z\nat\ \ is_powapply_fm(f,y,z)\formula" + unfolding is_powapply_fm_def by simp + +lemma sats_is_powapply_fm : + assumes + "f\nat" "y\nat" "z\nat" "env\list(A)" "0\A" + shows + "is_powapply(##A,nth(f, env),nth(y, env),nth(z, env)) + \ sats(A,is_powapply_fm(f,y,z),env)" + unfolding is_powapply_def is_powapply_fm_def is_Collect_def powerset_def subset_def + using nth_closed assms by simp + + +lemma (in M_ZF_trans) powapply_repl : + assumes + "f\M" + shows + "strong_replacement(##M,is_powapply(##M,f))" +proof - + have "arity(is_powapply_fm(2,0,1)) = 3" + unfolding is_powapply_fm_def + by (simp add: fm_defs nat_simp_union) + then + have "\f0\M. strong_replacement(##M, \p z. sats(M,is_powapply_fm(2,0,1) , [p,z,f0]))" + using replacement_ax by simp + moreover + have "is_powapply(##M,f0,p,z) \ sats(M,is_powapply_fm(2,0,1) , [p,z,f0])" + if "p\M" "z\M" "f0\M" for p z f0 + using that zero_in_M sats_is_powapply_fm[of 2 0 1 "[p,z,f0]" M] by simp + ultimately + have "\f0\M. strong_replacement(##M, is_powapply(##M,f0))" + unfolding strong_replacement_def univalent_def by simp + with \f\M\ show ?thesis by simp +qed + + +(*"PHrank(M,f,y,z) \ M(z) \ (\fy[M]. fun_apply(M,f,y,fy) \ successor(M,fy,z))"*) +definition + PHrank_fm :: "[i,i,i] \ i" where + "PHrank_fm(f,y,z) \ Exists(And(fun_apply_fm(succ(f),succ(y),0) + ,succ_fm(0,succ(z))))" + +lemma PHrank_type [TC]: + "\ x \ nat; y \ nat; z \ nat \ \ PHrank_fm(x,y,z) \ formula" + by (simp add:PHrank_fm_def) + + +lemma (in M_ZF_trans) sats_PHrank_fm [simp]: + "\ x \ nat; y \ nat; z \ nat; env \ list(M) \ + \ sats(M,PHrank_fm(x,y,z),env) \ + PHrank(##M,nth(x,env),nth(y,env),nth(z,env))" + using zero_in_M Internalizations.nth_closed by (simp add: PHrank_def PHrank_fm_def) + + +lemma (in M_ZF_trans) phrank_repl : + assumes + "f\M" + shows + "strong_replacement(##M,PHrank(##M,f))" +proof - + have "arity(PHrank_fm(2,0,1)) = 3" + unfolding PHrank_fm_def + by (simp add: fm_defs nat_simp_union) + then + have "\f0\M. strong_replacement(##M, \p z. sats(M,PHrank_fm(2,0,1) , [p,z,f0]))" + using replacement_ax by simp + then + have "\f0\M. strong_replacement(##M, PHrank(##M,f0))" + unfolding strong_replacement_def univalent_def by simp + with \f\M\ show ?thesis by simp +qed + + +(*"is_Hrank(M,x,f,hc) \ (\R[M]. 
big_union(M,R,hc) \is_Replace(M,x,PHrank(M,f),R)) "*) +definition + is_Hrank_fm :: "[i,i,i] \ i" where + "is_Hrank_fm(x,f,hc) \ Exists(And(big_union_fm(0,succ(hc)), + Replace_fm(succ(x),PHrank_fm(succ(succ(succ(f))),0,1),0)))" + +lemma is_Hrank_type [TC]: + "\ x \ nat; y \ nat; z \ nat \ \ is_Hrank_fm(x,y,z) \ formula" + by (simp add:is_Hrank_fm_def) + +lemma (in M_ZF_trans) sats_is_Hrank_fm [simp]: + "\ x \ nat; y \ nat; z \ nat; env \ list(M)\ + \ sats(M,is_Hrank_fm(x,y,z),env) \ + is_Hrank(##M,nth(x,env),nth(y,env),nth(z,env))" + using zero_in_M is_Hrank_def is_Hrank_fm_def sats_Replace_fm + by simp + +(* M(x) \ wfrec_replacement(M,is_Hrank(M),rrank(x)) *) +lemma (in M_ZF_trans) wfrec_rank : + assumes + "X\M" + shows + "wfrec_replacement(##M,is_Hrank(##M),rrank(X))" +proof - + have + "is_Hrank(##M,a2, a1, a0) \ + sats(M, is_Hrank_fm(2,1,0), [a0,a1,a2,a3,a4,y,x,z,rrank(X)])" + if "a4\M" "a3\M" "a2\M" "a1\M" "a0\M" "y\M" "x\M" "z\M" for a4 a3 a2 a1 a0 y x z + using that rrank_in_M \X\M\ by simp + then + have + 1:"sats(M, is_wfrec_fm(is_Hrank_fm(2,1,0),3,1,0),[y,x,z,rrank(X)]) + \ is_wfrec(##M, is_Hrank(##M) ,rrank(X), x, y)" + if "y\M" "x\M" "z\M" for y x z + using that \X\M\ rrank_in_M sats_is_wfrec_fm by simp + let + ?f="Exists(And(pair_fm(1,0,2),is_wfrec_fm(is_Hrank_fm(2,1,0),3,1,0)))" + have satsf:"sats(M, ?f, [x,z,rrank(X)]) + \ (\y\M. pair(##M,x,y,z) & is_wfrec(##M, is_Hrank(##M) , rrank(X), x, y))" + if "x\M" "z\M" for x z + using that 1 \X\M\ rrank_in_M by (simp del:pair_abs) + have "arity(?f) = 3" + unfolding is_wfrec_fm_def is_recfun_fm_def is_nat_case_fm_def is_Hrank_fm_def PHrank_fm_def + restriction_fm_def list_functor_fm_def number1_fm_def cartprod_fm_def + sum_fm_def quasinat_fm_def pre_image_fm_def fm_defs + by (simp add:nat_simp_union) + then + have "strong_replacement(##M,\x z. sats(M,?f,[x,z,rrank(X)]))" + using replacement_ax 1 \X\M\ rrank_in_M by simp + then + have "strong_replacement(##M,\x z. + \y\M. pair(##M,x,y,z) & is_wfrec(##M, is_Hrank(##M) , rrank(X), x, y))" + using repl_sats[of M ?f "[rrank(X)]"] satsf by (simp del:pair_abs) + then + show ?thesis unfolding wfrec_replacement_def by simp +qed + +(*"is_HVfrom(M,A,x,f,h) \ \U[M]. \R[M]. 
union(M,A,U,h) + \ big_union(M,R,U) \ is_Replace(M,x,is_powapply(M,f),R)"*) +definition + is_HVfrom_fm :: "[i,i,i,i] \ i" where + "is_HVfrom_fm(A,x,f,h) \ Exists(Exists(And(union_fm(A #+ 2,1,h #+ 2), + And(big_union_fm(0,1), + Replace_fm(x #+ 2,is_powapply_fm(f #+ 4,0,1),0)))))" + +lemma is_HVfrom_type [TC]: + "\ A\nat; x \ nat; f \ nat; h \ nat \ \ is_HVfrom_fm(A,x,f,h) \ formula" + by (simp add:is_HVfrom_fm_def) + +lemma sats_is_HVfrom_fm : + "\ a\nat; x \ nat; f \ nat; h \ nat; env \ list(A); 0\A\ + \ sats(A,is_HVfrom_fm(a,x,f,h),env) \ + is_HVfrom(##A,nth(a,env),nth(x,env),nth(f,env),nth(h,env))" + using is_HVfrom_def is_HVfrom_fm_def sats_Replace_fm[OF sats_is_powapply_fm] + by simp + +lemma is_HVfrom_iff_sats: + assumes + "nth(a,env) = aa" "nth(x,env) = xx" "nth(f,env) = ff" "nth(h,env) = hh" + "a\nat" "x\nat" "f\nat" "h\nat" "env\list(A)" "0\A" + shows + "is_HVfrom(##A,aa,xx,ff,hh) \ sats(A, is_HVfrom_fm(a,x,f,h), env)" + using assms sats_is_HVfrom_fm by simp + +(* FIX US *) +schematic_goal sats_is_Vset_fm_auto: + assumes + "i\nat" "v\nat" "env\list(A)" "0\A" + "i < length(env)" "v < length(env)" + shows + "is_Vset(##A,nth(i, env),nth(v, env)) + \ sats(A,?ivs_fm(i,v),env)" + unfolding is_Vset_def is_Vfrom_def + by (insert assms; (rule sep_rules is_HVfrom_iff_sats is_transrec_iff_sats | simp)+) + +schematic_goal is_Vset_iff_sats: + assumes + "nth(i,env) = ii" "nth(v,env) = vv" + "i\nat" "v\nat" "env\list(A)" "0\A" + "i < length(env)" "v < length(env)" + shows + "is_Vset(##A,ii,vv) \ sats(A, ?ivs_fm(i,v), env)" + unfolding \nth(i,env) = ii\[symmetric] \nth(v,env) = vv\[symmetric] + by (rule sats_is_Vset_fm_auto(1); simp add:assms) + + +lemma (in M_ZF_trans) memrel_eclose_sing : + "a\M \ \sa\M. \esa\M. \mesa\M. + upair(##M,a,a,sa) & is_eclose(##M,sa,esa) & membership(##M,esa,mesa)" + using upair_ax eclose_closed Memrel_closed unfolding upair_ax_def + by (simp del:upair_abs) + + +lemma (in M_ZF_trans) trans_repl_HVFrom : + assumes + "A\M" "i\M" + shows + "transrec_replacement(##M,is_HVfrom(##M,A),i)" +proof - + { fix mesa + assume "mesa\M" + have + 0:"is_HVfrom(##M,A,a2, a1, a0) \ + sats(M, is_HVfrom_fm(8,2,1,0), [a0,a1,a2,a3,a4,y,x,z,A,mesa])" + if "a4\M" "a3\M" "a2\M" "a1\M" "a0\M" "y\M" "x\M" "z\M" for a4 a3 a2 a1 a0 y x z + using that zero_in_M sats_is_HVfrom_fm \mesa\M\ \A\M\ by simp + have + 1:"sats(M, is_wfrec_fm(is_HVfrom_fm(8,2,1,0),4,1,0),[y,x,z,A,mesa]) + \ is_wfrec(##M, is_HVfrom(##M,A),mesa, x, y)" + if "y\M" "x\M" "z\M" for y x z + using that \A\M\ \mesa\M\ sats_is_wfrec_fm[OF 0] by simp + let + ?f="Exists(And(pair_fm(1,0,2),is_wfrec_fm(is_HVfrom_fm(8,2,1,0),4,1,0)))" + have satsf:"sats(M, ?f, [x,z,A,mesa]) + \ (\y\M. pair(##M,x,y,z) & is_wfrec(##M, is_HVfrom(##M,A) , mesa, x, y))" + if "x\M" "z\M" for x z + using that 1 \A\M\ \mesa\M\ by (simp del:pair_abs) + have "arity(?f) = 4" + unfolding is_HVfrom_fm_def is_wfrec_fm_def is_recfun_fm_def is_nat_case_fm_def + restriction_fm_def list_functor_fm_def number1_fm_def cartprod_fm_def + is_powapply_fm_def sum_fm_def quasinat_fm_def pre_image_fm_def fm_defs + by (simp add:nat_simp_union) + then + have "strong_replacement(##M,\x z. sats(M,?f,[x,z,A,mesa]))" + using replacement_ax 1 \A\M\ \mesa\M\ by simp + then + have "strong_replacement(##M,\x z. + \y\M. 
pair(##M,x,y,z) & is_wfrec(##M, is_HVfrom(##M,A) , mesa, x, y))" + using repl_sats[of M ?f "[A,mesa]"] satsf by (simp del:pair_abs) + then + have "wfrec_replacement(##M,is_HVfrom(##M,A),mesa)" + unfolding wfrec_replacement_def by simp + } + then show ?thesis unfolding transrec_replacement_def + using \i\M\ memrel_eclose_sing by simp +qed + + +lemma (in M_ZF_trans) meclose_pow : "M_eclose_pow(##M)" + using meclose power_ax powapply_repl phrank_repl trans_repl_HVFrom wfrec_rank + by unfold_locales auto + +sublocale M_ZF_trans \ M_eclose_pow "##M" + by (rule meclose_pow) + +lemma (in M_ZF_trans) repl_gen : + assumes + f_abs: "\x y. \ x\M; y\M \ \ is_F(##M,x,y) \ y = f(x)" + and + f_sats: "\x y. \x\M ; y\M \ \ + sats(M,f_fm,Cons(x,Cons(y,env))) \ is_F(##M,x,y)" + and + f_form: "f_fm \ formula" + and + f_arty: "arity(f_fm) = 2" + and + "env\list(M)" + shows + "strong_replacement(##M, \x y. y = f(x))" +proof - + have "sats(M,f_fm,[x,y]@env) \ is_F(##M,x,y)" if "x\M" "y\M" for x y + using that f_sats[of x y] by simp + moreover + from f_form f_arty + have "strong_replacement(##M, \x y. sats(M,f_fm,[x,y]@env))" + using \env\list(M)\ replacement_ax by simp + ultimately + have "strong_replacement(##M, is_F(##M))" + using strong_replacement_cong[of "##M" "\x y. sats(M,f_fm,[x,y]@env)" "is_F(##M)"] by simp + with f_abs show ?thesis + using strong_replacement_cong[of "##M" "is_F(##M)" "\x y. y = f(x)"] by simp +qed + +(* Proof Scheme for instances of separation *) +lemma (in M_ZF_trans) sep_in_M : + assumes + "\ \ formula" "env\list(M)" + "arity(\) \ 1 #+ length(env)" "A\M" and + satsQ: "\x. x\M \ sats(M,\,[x]@env) \ Q(x)" + shows + "{y\A . Q(y)}\M" +proof - + have "separation(##M,\x. sats(M,\,[x] @ env))" + using assms separation_ax by simp + then show ?thesis using + \A\M\ satsQ trans_M + separation_cong[of "##M" "\y. 
sats(M,\,[y]@env)" "Q"] + separation_closed by simp +qed + +end \ No newline at end of file diff --git a/thys/Forcing/Internal_ZFC_Axioms.thy b/thys/Forcing/Internal_ZFC_Axioms.thy new file mode 100644 --- /dev/null +++ b/thys/Forcing/Internal_ZFC_Axioms.thy @@ -0,0 +1,523 @@ +section\The ZFC axioms, internalized\ +theory Internal_ZFC_Axioms + imports + Forcing_Data + +begin + +schematic_goal ZF_union_auto: + "Union_ax(##A) \ (A, [] \ ?zfunion)" + unfolding Union_ax_def + by ((rule sep_rules | simp)+) + +synthesize "ZF_union_fm" from_schematic ZF_union_auto + +schematic_goal ZF_power_auto: + "power_ax(##A) \ (A, [] \ ?zfpow)" + unfolding power_ax_def powerset_def subset_def + by ((rule sep_rules | simp)+) + +synthesize "ZF_power_fm" from_schematic ZF_power_auto + +schematic_goal ZF_pairing_auto: + "upair_ax(##A) \ (A, [] \ ?zfpair)" + unfolding upair_ax_def + by ((rule sep_rules | simp)+) + +synthesize "ZF_pairing_fm" from_schematic ZF_pairing_auto + +schematic_goal ZF_foundation_auto: + "foundation_ax(##A) \ (A, [] \ ?zfpow)" + unfolding foundation_ax_def + by ((rule sep_rules | simp)+) + +synthesize "ZF_foundation_fm" from_schematic ZF_foundation_auto + +schematic_goal ZF_extensionality_auto: + "extensionality(##A) \ (A, [] \ ?zfpow)" + unfolding extensionality_def + by ((rule sep_rules | simp)+) + +synthesize "ZF_extensionality_fm" from_schematic ZF_extensionality_auto + +schematic_goal ZF_infinity_auto: + "infinity_ax(##A) \ (A, [] \ (?\(i,j,h)))" + unfolding infinity_ax_def + by ((rule sep_rules | simp)+) + +synthesize "ZF_infinity_fm" from_schematic ZF_infinity_auto + +schematic_goal ZF_choice_auto: + "choice_ax(##A) \ (A, [] \ (?\(i,j,h)))" + unfolding choice_ax_def + by ((rule sep_rules | simp)+) + +synthesize "ZF_choice_fm" from_schematic ZF_choice_auto + +syntax + "_choice" :: "i" ("AC") +translations + "AC" \ "CONST ZF_choice_fm" + +lemmas ZFC_fm_defs = ZF_extensionality_fm_def ZF_foundation_fm_def ZF_pairing_fm_def + ZF_union_fm_def ZF_infinity_fm_def ZF_power_fm_def ZF_choice_fm_def + +lemmas ZFC_fm_sats = ZF_extensionality_auto ZF_foundation_auto ZF_pairing_auto + ZF_union_auto ZF_infinity_auto ZF_power_auto ZF_choice_auto + +definition + ZF_fin :: "i" where + "ZF_fin \ { ZF_extensionality_fm, ZF_foundation_fm, ZF_pairing_fm, + ZF_union_fm, ZF_infinity_fm, ZF_power_fm }" + +definition + ZFC_fin :: "i" where + "ZFC_fin \ ZF_fin \ {ZF_choice_fm}" + +lemma ZFC_fin_type : "ZFC_fin \ formula" + unfolding ZFC_fin_def ZF_fin_def ZFC_fm_defs by (auto) + +subsection\The Axiom of Separation, internalized\ +lemma iterates_Forall_type [TC]: + "\ n \ nat; p \ formula \ \ Forall^n(p) \ formula" + by (induct set:nat, auto) + +lemma last_init_eq : + assumes "l \ list(A)" "length(l) = succ(n)" + shows "\ a\A. \l'\list(A). l = l'@[a]" +proof- + from \l\_\ \length(_) = _\ + have "rev(l) \ list(A)" "length(rev(l)) = succ(n)" + by simp_all + then + obtain a l' where "a\A" "l'\list(A)" "rev(l) = Cons(a,l')" + by (cases;simp) + then + have "l = rev(l') @ [a]" "rev(l') \ list(A)" + using rev_rev_ident[OF \l\_\] by auto + with \a\_\ + show ?thesis by blast +qed + +lemma take_drop_eq : + assumes "l\list(M)" + shows "\ n . n < succ(length(l)) \ l = take(n,l) @ drop(n,l)" + using \l\list(M)\ +proof induct + case Nil + then show ?case by auto +next + case (Cons a l) + then show ?case + proof - + { + fix i + assume "il\list(M)\ + consider (lt) "i = 0" | (eq) "\k\nat. 
i = succ(k) \ k < succ(length(l))" + using \l\list(M)\ le_natI nat_imp_quasinat + by (cases rule:nat_cases[of i];auto) + then + have "take(i,Cons(a,l)) @ drop(i,Cons(a,l)) = Cons(a,l)" + using Cons + by (cases;auto) + } + then show ?thesis using Cons by auto + qed +qed + +lemma list_split : +assumes "n \ succ(length(rest))" "rest \ list(M)" +shows "\re\list(M). \st\list(M). rest = re @ st \ length(re) = pred(n)" +proof - + from assms + have "pred(n) \ length(rest)" + using pred_mono[OF _ \n\_\] pred_succ_eq by auto + with \rest\_\ + have "pred(n)\nat" "rest = take(pred(n),rest) @ drop(pred(n),rest)" (is "_ = ?re @ ?st") + using take_drop_eq[OF \rest\_\] le_natI by auto + then + have "length(?re) = pred(n)" "?re\list(M)" "?st\list(M)" + using length_take[rule_format,OF _ \pred(n)\_\] \pred(n) \ _\ \rest\_\ + unfolding min_def + by auto + then + show ?thesis + using rev_bexI[of _ _ "\ re. \st\list(M). rest = re @ st \ length(re) = pred(n)"] + \length(?re) = _\ \rest = _\ + by auto +qed + +lemma sats_nForall: + assumes + "\ \ formula" + shows + "n\nat \ ms \ list(M) \ + M, ms \ (Forall^n(\)) \ + (\rest \ list(M). length(rest) = n \ M, rest @ ms \ \)" +proof (induct n arbitrary:ms set:nat) + case 0 + with assms + show ?case by simp +next + case (succ n) + have "(\rest\list(M). length(rest) = succ(n) \ P(rest,n)) \ + (\t\M. \res\list(M). length(res) = n \ P(res @ [t],n))" + if "n\nat" for n P + using that last_init_eq by force + from this[of _ "\rest _. (M, rest @ ms \ \)"] \n\nat\ + have "(\rest\list(M). length(rest) = succ(n) \ M, rest @ ms \ \) \ + (\t\M. \res\list(M). length(res) = n \ M, (res @ [t]) @ ms \ \)" + by simp + with assms succ(1,3) succ(2)[of "Cons(_,ms)"] + show ?case + using arity_sats_iff[of \ _ M "Cons(_, ms @ _)"] app_assoc + by (simp) +qed + +definition + sep_body_fm :: "i \ i" where + "sep_body_fm(p) \ Forall(Exists(Forall( + Iff(Member(0,1),And(Member(0,2), + incr_bv1^2(p))))))" + +lemma sep_body_fm_type [TC]: "p \ formula \ sep_body_fm(p) \ formula" + by (simp add: sep_body_fm_def) + +lemma sats_sep_body_fm: + assumes + "\ \ formula" "ms\list(M)" "rest\list(M)" + shows + "M, rest @ ms \ sep_body_fm(\) \ + separation(##M,\x. M, [x] @ rest @ ms \ \)" + using assms formula_add_params1[of _ 2 _ _ "[_,_]" ] + unfolding sep_body_fm_def separation_def by simp + +definition + ZF_separation_fm :: "i \ i" where + "ZF_separation_fm(p) \ Forall^(pred(arity(p)))(sep_body_fm(p))" + +lemma ZF_separation_fm_type [TC]: "p \ formula \ ZF_separation_fm(p) \ formula" + by (simp add: ZF_separation_fm_def) + +lemma sats_ZF_separation_fm_iff: + assumes + "\\formula" + shows + "(M, [] \ (ZF_separation_fm(\))) + \ + (\env\list(M). arity(\) \ 1 #+ length(env) \ + separation(##M,\x. M, [x] @ env \ \))" +proof (intro iffI ballI impI) + let ?n="Arith.pred(arity(\))" + fix env + assume "M, [] \ ZF_separation_fm(\)" + assume "arity(\) \ 1 #+ length(env)" "env\list(M)" + moreover from this + have "arity(\) \ succ(length(env))" by simp + then + obtain some rest where "some\list(M)" "rest\list(M)" + "env = some @ rest" "length(some) = Arith.pred(arity(\))" + using list_split[OF \arity(\) \ succ(_)\ \env\_\] by force + moreover from \\\_\ + have "arity(\) \ succ(Arith.pred(arity(\)))" + using succpred_leI by simp + moreover + note assms + moreover + assume "M, [] \ ZF_separation_fm(\)" + moreover from calculation + have "M, some \ sep_body_fm(\)" + using sats_nForall[of "sep_body_fm(\)" ?n] + unfolding ZF_separation_fm_def by simp + ultimately + show "separation(##M, \x. 
M, [x] @ env \ \)" + unfolding ZF_separation_fm_def + using sats_sep_body_fm[of \ "[]" M some] + arity_sats_iff[of \ rest M "[_] @ some"] + separation_cong[of "##M" "\x. M, Cons(x, some @ rest) \ \" _ ] + by simp +next \ \almost equal to the previous implication\ + let ?n="Arith.pred(arity(\))" + assume asm:"\env\list(M). arity(\) \ 1 #+ length(env) \ + separation(##M, \x. M, [x] @ env \ \)" + { + fix some + assume "some\list(M)" "length(some) = Arith.pred(arity(\))" + moreover + note \\\_\ + moreover from calculation + have "arity(\) \ 1 #+ length(some)" + using le_trans[OF succpred_leI] succpred_leI by simp + moreover from calculation and asm + have "separation(##M, \x. M, [x] @ some \ \)" by blast + ultimately + have "M, some \ sep_body_fm(\)" + using sats_sep_body_fm[of \ "[]" M some] + arity_sats_iff[of \ _ M "[_,_] @ some"] + strong_replacement_cong[of "##M" "\x y. M, Cons(x, Cons(y, some @ _)) \ \" _ ] + by simp + } + with \\\_\ + show "M, [] \ ZF_separation_fm(\)" + using sats_nForall[of "sep_body_fm(\)" ?n] + unfolding ZF_separation_fm_def + by simp +qed + +subsection\The Axiom of Replacement, internalized\ +schematic_goal sats_univalent_fm_auto: + assumes + (* Q_iff_sats:"\a b z env aa bb. nth(a,Cons(z,env)) = aa \ nth(b,Cons(z,env)) = bb \ z\A + \ aa \ A \ bb \ A \ env\ list(A) \ + Q(aa,bb) \ (A, Cons(z,env) \ (Q_fm(a,b)))" \ \using only one formula\ *) + Q_iff_sats:"\x y z. x \ A \ y \ A \ z\A \ + Q(x,z) \ (A,Cons(z,Cons(y,Cons(x,env))) \ Q1_fm)" + "\x y z. x \ A \ y \ A \ z\A \ + Q(x,y) \ (A,Cons(z,Cons(y,Cons(x,env))) \ Q2_fm)" + and + asms: "nth(i,env) = B" "i \ nat" "env \ list(A)" + shows + "univalent(##A,B,Q) \ A,env \ ?ufm(i)" + unfolding univalent_def + by (insert asms; (rule sep_rules Q_iff_sats | simp)+) + +synthesize_notc "univalent_fm" from_schematic sats_univalent_fm_auto + +lemma univalent_fm_type [TC]: "q1\ formula \ q2\formula \ i\nat \ + univalent_fm(q2,q1,i) \formula" + by (simp add:univalent_fm_def) + +lemma sats_univalent_fm : + assumes + Q_iff_sats:"\x y z. x \ A \ y \ A \ z\A \ + Q(x,z) \ (A,Cons(z,Cons(y,Cons(x,env))) \ Q1_fm)" + "\x y z. x \ A \ y \ A \ z\A \ + Q(x,y) \ (A,Cons(z,Cons(y,Cons(x,env))) \ Q2_fm)" + and + asms: "nth(i,env) = B" "i \ nat" "env \ list(A)" + shows + "A,env \ univalent_fm(Q1_fm,Q2_fm,i) \ univalent(##A,B,Q)" + unfolding univalent_fm_def using asms sats_univalent_fm_auto[OF Q_iff_sats] by simp + +definition + swap_vars :: "i\i" where + "swap_vars(\) \ + Exists(Exists(And(Equal(0,3),And(Equal(1,2),iterates(\p. 
incr_bv(p)`2 , 2, \)))))" + +lemma swap_vars_type[TC] : + "\\formula \ swap_vars(\) \formula" + unfolding swap_vars_def by simp + +lemma sats_swap_vars : + "[x,y] @ env \ list(M) \ \\formula \ + M, [x,y] @ env \ swap_vars(\)\ M,[y,x] @ env \ \" + unfolding swap_vars_def + using sats_incr_bv_iff [of _ _ M _ "[y,x]"] by simp + +definition + univalent_Q1 :: "i \ i" where + "univalent_Q1(\) \ incr_bv1(swap_vars(\))" + +definition + univalent_Q2 :: "i \ i" where + "univalent_Q2(\) \ incr_bv(swap_vars(\))`0" + +lemma univalent_Qs_type [TC]: + assumes "\\formula" + shows "univalent_Q1(\) \ formula" "univalent_Q2(\) \ formula" + unfolding univalent_Q1_def univalent_Q2_def using assms by simp_all + +lemma sats_univalent_fm_assm: + assumes + "x \ A" "y \ A" "z\A" "env\ list(A)" "\ \ formula" + shows + "(A, ([x,z] @ env) \ \) \ (A, Cons(z,Cons(y,Cons(x,env))) \ (univalent_Q1(\)))" + "(A, ([x,y] @ env) \ \) \ (A, Cons(z,Cons(y,Cons(x,env))) \ (univalent_Q2(\)))" + unfolding univalent_Q1_def univalent_Q2_def + using + sats_incr_bv_iff[of _ _ A _ "[]"] \ \simplifies iterates of \<^term>\\x. incr_bv(x)`0\\ + sats_incr_bv1_iff[of _ "Cons(x,env)" A z y] + sats_swap_vars assms + by simp_all + +definition + rep_body_fm :: "i \ i" where + "rep_body_fm(p) \ Forall(Implies( + univalent_fm(univalent_Q1(incr_bv(p)`2),univalent_Q2(incr_bv(p)`2),0), + Exists(Forall( + Iff(Member(0,1),Exists(And(Member(0,3),incr_bv(incr_bv(p)`2)`2)))))))" + +lemma rep_body_fm_type [TC]: "p \ formula \ rep_body_fm(p) \ formula" + by (simp add: rep_body_fm_def) + +lemmas ZF_replacement_simps = formula_add_params1[of \ 2 _ M "[_,_]" ] + sats_incr_bv_iff[of _ _ M _ "[]"] \ \simplifies iterates of \<^term>\\x. incr_bv(x)`0\\ + sats_incr_bv_iff[of _ _ M _ "[_,_]"]\ \simplifies \<^term>\\x. incr_bv(x)`2\\ + sats_incr_bv1_iff[of _ _ M] sats_swap_vars for \ M + +lemma sats_rep_body_fm: + assumes + "\ \ formula" "ms\list(M)" "rest\list(M)" + shows + "M, rest @ ms \ rep_body_fm(\) \ + strong_replacement(##M,\x y. M, [x,y] @ rest @ ms \ \)" + using assms ZF_replacement_simps + unfolding rep_body_fm_def strong_replacement_def univalent_def + unfolding univalent_fm_def univalent_Q1_def univalent_Q2_def + by simp + +definition + ZF_replacement_fm :: "i \ i" where + "ZF_replacement_fm(p) \ Forall^(pred(pred(arity(p))))(rep_body_fm(p))" + +lemma ZF_replacement_fm_type [TC]: "p \ formula \ ZF_replacement_fm(p) \ formula" + by (simp add: ZF_replacement_fm_def) + +lemma sats_ZF_replacement_fm_iff: + assumes + "\\formula" + shows + "(M, [] \ (ZF_replacement_fm(\))) + \ + (\env\list(M). arity(\) \ 2 #+ length(env) \ + strong_replacement(##M,\x y. 
M,[x,y] @ env \ \))" +proof (intro iffI ballI impI) + let ?n="Arith.pred(Arith.pred(arity(\)))" + fix env + assume "M, [] \ ZF_replacement_fm(\)" "arity(\) \ 2 #+ length(env)" "env\list(M)" + moreover from this + have "arity(\) \ succ(succ(length(env)))" by (simp) + moreover from calculation + have "pred(arity(\)) \ succ(length(env))" + using pred_mono[OF _ \arity(\)\succ(_)\] pred_succ_eq by simp + moreover from calculation + obtain some rest where "some\list(M)" "rest\list(M)" + "env = some @ rest" "length(some) = Arith.pred(Arith.pred(arity(\)))" + using list_split[OF \pred(_) \ _\ \env\_\] by auto + moreover + note \\\_\ + moreover from this + have "arity(\) \ succ(succ(Arith.pred(Arith.pred(arity(\)))))" + using le_trans[OF succpred_leI] succpred_leI by simp + moreover from calculation + have "M, some \ rep_body_fm(\)" + using sats_nForall[of "rep_body_fm(\)" ?n] + unfolding ZF_replacement_fm_def + by simp + ultimately + show "strong_replacement(##M, \x y. M, [x, y] @ env \ \)" + using sats_rep_body_fm[of \ "[]" M some] + arity_sats_iff[of \ rest M "[_,_] @ some"] + strong_replacement_cong[of "##M" "\x y. M, Cons(x, Cons(y, some @ rest)) \ \" _ ] + by simp +next \ \almost equal to the previous implication\ + let ?n="Arith.pred(Arith.pred(arity(\)))" + assume asm:"\env\list(M). arity(\) \ 2 #+ length(env) \ + strong_replacement(##M, \x y. M, [x, y] @ env \ \)" + { + fix some + assume "some\list(M)" "length(some) = Arith.pred(Arith.pred(arity(\)))" + moreover + note \\\_\ + moreover from calculation + have "arity(\) \ 2 #+ length(some)" + using le_trans[OF succpred_leI] succpred_leI by simp + moreover from calculation and asm + have "strong_replacement(##M, \x y. M, [x, y] @ some \ \)" by blast + ultimately + have "M, some \ rep_body_fm(\)" + using sats_rep_body_fm[of \ "[]" M some] + arity_sats_iff[of \ _ M "[_,_] @ some"] + strong_replacement_cong[of "##M" "\x y. M, Cons(x, Cons(y, some @ _)) \ \" _ ] + by simp + } + with \\\_\ + show "M, [] \ ZF_replacement_fm(\)" + using sats_nForall[of "rep_body_fm(\)" ?n] + unfolding ZF_replacement_fm_def + by simp +qed + +definition + ZF_inf :: "i" where + "ZF_inf \ {ZF_separation_fm(p) . p \ formula } \ {ZF_replacement_fm(p) . p \ formula }" + +lemma Un_subset_formula: "A\formula \ B\formula \ A\B \ formula" + by auto + +lemma ZF_inf_subset_formula : "ZF_inf \ formula" + unfolding ZF_inf_def by auto + +definition + ZFC :: "i" where + "ZFC \ ZF_inf \ ZFC_fin" + +definition + ZF :: "i" where + "ZF \ ZF_inf \ ZF_fin" + +definition + ZF_minus_P :: "i" where + "ZF_minus_P \ ZF - { ZF_power_fm }" + +lemma ZFC_subset_formula: "ZFC \ formula" + by (simp add:ZFC_def Un_subset_formula ZF_inf_subset_formula ZFC_fin_type) + +txt\Satisfaction of a set of sentences\ +definition + satT :: "[i,i] \ o" ("_ \ _" [36,36] 60) where + "A \ \ \ \\\\. (A,[] \ \)" + +lemma satTI [intro!]: + assumes "\\. \\\ \ A,[] \ \" + shows "A \ \" + using assms unfolding satT_def by simp + +lemma satTD [dest] :"A \ \ \ \\\ \ A,[] \ \" + unfolding satT_def by simp + +lemma sats_ZFC_iff_sats_ZF_AC: + "(N \ ZFC) \ (N \ ZF) \ (N, [] \ AC)" + unfolding ZFC_def ZFC_fin_def ZF_def by auto + +lemma M_ZF_iff_M_satT: "M_ZF(M) \ (M \ ZF)" +proof + assume "M \ ZF" + then + have fin: "upair_ax(##M)" "Union_ax(##M)" "power_ax(##M)" + "extensionality(##M)" "foundation_ax(##M)" "infinity_ax(##M)" + unfolding ZF_def ZF_fin_def ZFC_fm_defs satT_def + using ZFC_fm_sats[of M] by simp_all + { + fix \ env + assume "\ \ formula" "env\list(M)" + moreover from \M \ ZF\ + have "\p\formula. 
(M, [] \ (ZF_separation_fm(p)))" + "\p\formula. (M, [] \ (ZF_replacement_fm(p)))" + unfolding ZF_def ZF_inf_def by auto + moreover from calculation + have "arity(\) \ succ(length(env)) \ separation(##M, \x. (M, Cons(x, env) \ \))" + "arity(\) \ succ(succ(length(env))) \ strong_replacement(##M,\x y. sats(M,\,Cons(x,Cons(y, env))))" + using sats_ZF_separation_fm_iff sats_ZF_replacement_fm_iff by simp_all + } + with fin + show "M_ZF(M)" + unfolding M_ZF_def by simp +next + assume \M_ZF(M)\ + then + have "M \ ZF_fin" + unfolding M_ZF_def ZF_fin_def ZFC_fm_defs satT_def + using ZFC_fm_sats[of M] by blast + moreover from \M_ZF(M)\ + have "\p\formula. (M, [] \ (ZF_separation_fm(p)))" + "\p\formula. (M, [] \ (ZF_replacement_fm(p)))" + unfolding M_ZF_def using sats_ZF_separation_fm_iff + sats_ZF_replacement_fm_iff by simp_all + ultimately + show "M \ ZF" + unfolding ZF_def ZF_inf_def by blast +qed + +end diff --git a/thys/Forcing/Internalizations.thy b/thys/Forcing/Internalizations.thy new file mode 100644 --- /dev/null +++ b/thys/Forcing/Internalizations.thy @@ -0,0 +1,34 @@ +section\Aids to internalize formulas\ +theory Internalizations + imports + "ZF-Constructible.DPow_absolute" +begin + +text\We found it useful to have slightly different versions of some +results in ZF-Constructible:\ +lemma nth_closed : + assumes "0\A" "env\list(A)" + shows "nth(n,env)\A" + using assms(2,1) unfolding nth_def by (induct env; simp) + +lemmas FOL_sats_iff = sats_Nand_iff sats_Forall_iff sats_Neg_iff sats_And_iff + sats_Or_iff sats_Implies_iff sats_Iff_iff sats_Exists_iff + +lemma nth_ConsI: "\nth(n,l) = x; n \ nat\ \ nth(succ(n), Cons(a,l)) = x" +by simp + +lemmas nth_rules = nth_0 nth_ConsI nat_0I nat_succI +lemmas sep_rules = nth_0 nth_ConsI FOL_iff_sats function_iff_sats + fun_plus_iff_sats successor_iff_sats + omega_iff_sats FOL_sats_iff Replace_iff_sats + +text\Also a different compilation of lemmas (term\sep_rules\) used in formula + synthesis\ +lemmas fm_defs = omega_fm_def limit_ordinal_fm_def empty_fm_def typed_function_fm_def + pair_fm_def upair_fm_def domain_fm_def function_fm_def succ_fm_def + cons_fm_def fun_apply_fm_def image_fm_def big_union_fm_def union_fm_def + relation_fm_def composition_fm_def field_fm_def ordinal_fm_def range_fm_def + transset_fm_def subset_fm_def Replace_fm_def + + +end diff --git a/thys/Forcing/Least.thy b/thys/Forcing/Least.thy new file mode 100644 --- /dev/null +++ b/thys/Forcing/Least.thy @@ -0,0 +1,133 @@ +section\The binder \<^term>\Least\\ +theory Least + imports + Names + +begin + +text\We have some basic results on the least ordinal satisfying +a predicate.\ + +lemma Least_Ord: "(\ \. R(\)) = (\ \. Ord(\) \ R(\))" + unfolding Least_def by (simp add:lt_Ord) + +lemma Ord_Least_cong: + assumes "\y. Ord(y) \ R(y) \ Q(y)" + shows "(\ \. R(\)) = (\ \. Q(\))" +proof - + from assms + have "(\ \. Ord(\) \ R(\)) = (\ \. Ord(\) \ Q(\))" + by simp + then + show ?thesis using Least_Ord by simp +qed + +definition + least :: "[i\o,i\o,i] \ o" where + "least(M,Q,i) \ ordinal(M,i) \ ( + (empty(M,i) \ (\b[M]. ordinal(M,b) \ \Q(b))) + \ (Q(i) \ (\b[M]. 
ordinal(M,b) \ b\i\ \Q(b))))" + +definition + least_fm :: "[i,i] \ i" where + "least_fm(q,i) \ And(ordinal_fm(i), + Or(And(empty_fm(i),Forall(Implies(ordinal_fm(0),Neg(q)))), + And(Exists(And(q,Equal(0,succ(i)))), + Forall(Implies(And(ordinal_fm(0),Member(0,succ(i))),Neg(q))))))" + +lemma least_fm_type[TC] :"i \ nat \ q\formula \ least_fm(q,i) \ formula" + unfolding least_fm_def + by simp + +(* Refactorize Formula and Relative to include the following three lemmas *) +lemmas basic_fm_simps = sats_subset_fm' sats_transset_fm' sats_ordinal_fm' + +lemma sats_least_fm : + assumes p_iff_sats: + "\a. a \ A \ P(a) \ sats(A, p, Cons(a, env))" + shows + "\y \ nat; env \ list(A) ; 0\A\ + \ sats(A, least_fm(p,y), env) \ + least(##A, P, nth(y,env))" + using nth_closed p_iff_sats unfolding least_def least_fm_def + by (simp add:basic_fm_simps) + +lemma least_iff_sats: + assumes is_Q_iff_sats: + "\a. a \ A \ is_Q(a) \ sats(A, q, Cons(a,env))" + shows + "\nth(j,env) = y; j \ nat; env \ list(A); 0\A\ + \ least(##A, is_Q, y) \ sats(A, least_fm(q,j), env)" + using sats_least_fm [OF is_Q_iff_sats, of j , symmetric] + by simp + +lemma least_conj: "a\M \ least(##M, \x. x\M \ Q(x),a) \ least(##M,Q,a)" + unfolding least_def by simp + +(* Better to have this in M_basic or similar *) +lemma (in M_ctm) unique_least: "a\M \ b\M \ least(##M,Q,a) \ least(##M,Q,b) \ a=b" + unfolding least_def + by (auto, erule_tac i=a and j=b in Ord_linear_lt; (drule ltD | simp); auto intro:Ord_in_Ord) + +context M_trivial +begin + +subsection\Absoluteness and closure under \<^term>\Least\\ + +lemma least_abs: + assumes "\x. Q(x) \ M(x)" "M(a)" + shows "least(M,Q,a) \ a = (\ x. Q(x))" + unfolding least_def +proof (cases "\b[M]. Ord(b) \ \ Q(b)"; intro iffI; simp add:assms) + case True + with \\x. Q(x) \ M(x)\ + have "\ (\i. Ord(i) \ Q(i)) " by blast + then + show "0 =(\ x. Q(x))" using Least_0 by simp + then + show "ordinal(M, \ x. Q(x)) \ (empty(M, Least(Q)) \ Q(Least(Q)))" + by simp +next + assume "\b[M]. Ord(b) \ Q(b)" + then + obtain i where "M(i)" "Ord(i)" "Q(i)" by blast + assume "a = (\ x. Q(x))" + moreover + note \M(a)\ + moreover from \Q(i)\ \Ord(i)\ + have "Q(\ x. Q(x))" (is ?G) + by (blast intro:LeastI) + moreover + have "(\b[M]. Ord(b) \ b \ (\ x. Q(x)) \ \ Q(b))" (is "?H") + using less_LeastE[of Q _ False] + by (auto, drule_tac ltI, simp, blast) + ultimately + show "ordinal(M, \ x. Q(x)) \ (empty(M, \ x. Q(x)) \ (\b[M]. Ord(b) \ \ Q(b)) \ ?G \ ?H)" + by simp +next + assume 1:"\b[M]. Ord(b) \ Q(b)" + then + obtain i where "M(i)" "Ord(i)" "Q(i)" by blast + assume "Ord(a) \ (a = 0 \ (\b[M]. Ord(b) \ \ Q(b)) \ Q(a) \ (\b[M]. Ord(b) \ b \ a \ \ Q(b)))" + with 1 + have "Ord(a)" "Q(a)" "\b[M]. Ord(b) \ b \ a \ \ Q(b)" + by blast+ + moreover from this and \\x. Q(x) \ M(x)\ + have "Ord(b) \ b \ a \ \ Q(b)" for b + by blast + moreover from this and \Ord(a)\ + have "b < a \ \ Q(b)" for b + unfolding lt_def using Ord_in_Ord by blast + ultimately + show "a = (\ x. Q(x))" + using Least_equality by simp +qed + +lemma Least_closed: + assumes "\x. Q(x) \ M(x)" + shows "M(\ x. Q(x))" + using assms LeastI[of Q] Least_0 by (cases "(\i. 
Ord(i) \ Q(i))", auto) + +end (* M_trivial *) + +end \ No newline at end of file diff --git a/thys/Forcing/Names.thy b/thys/Forcing/Names.thy new file mode 100644 --- /dev/null +++ b/thys/Forcing/Names.thy @@ -0,0 +1,1060 @@ +section\Names and generic extensions\ + +theory Names + imports + Forcing_Data + Interface + Recursion_Thms + Synthetic_Definition +begin + +definition + SepReplace :: "[i, i\i, i\ o] \ i" where + "SepReplace(A,b,Q) \ {y . x\A, y=b(x) \ Q(x)}" + +syntax + "_SepReplace" :: "[i, pttrn, i, o] \ i" ("(1{_ ../ _ \ _, _})") +translations + "{b .. x\A, Q}" => "CONST SepReplace(A, \x. b, \x. Q)" + +lemma Sep_and_Replace: "{b(x) .. x\A, P(x) } = {b(x) . x\{y\A. P(y)}}" + by (auto simp add:SepReplace_def) + +lemma SepReplace_subset : "A\A'\ {b .. x\A, Q}\{b .. x\A', Q}" + by (auto simp add:SepReplace_def) + +lemma SepReplace_iff [simp]: "y\{b(x) .. x\A, P(x)} \ (\x\A. y=b(x) & P(x))" + by (auto simp add:SepReplace_def) + +lemma SepReplace_dom_implies : + "(\ x . x \A \ b(x) = b'(x))\ {b(x) .. x\A, Q(x)}={b'(x) .. x\A, Q(x)}" + by (simp add:SepReplace_def) + +lemma SepReplace_pred_implies : + "\x. Q(x)\ b(x) = b'(x)\ {b(x) .. x\A, Q(x)}={b'(x) .. x\A, Q(x)}" + by (force simp add:SepReplace_def) + +subsection\The well-founded relation \<^term>\ed\\ + +lemma eclose_sing : "x \ eclose(a) \ x \ eclose({a})" + by(rule subsetD[OF mem_eclose_subset],simp+) + +lemma ecloseE : + assumes "x \ eclose(A)" + shows "x \ A \ (\ B \ A . x \ eclose(B))" + using assms +proof (induct rule:eclose_induct_down) + case (1 y) + then + show ?case + using arg_into_eclose by auto +next + case (2 y z) + from \y \ A \ (\B\A. y \ eclose(B))\ + consider (inA) "y \ A" | (exB) "(\B\A. y \ eclose(B))" + by auto + then show ?case + proof (cases) + case inA + then + show ?thesis using 2 arg_into_eclose by auto + next + case exB + then obtain B where "y \ eclose(B)" "B\A" + by auto + then + show ?thesis using 2 ecloseD[of y B z] by auto + qed +qed + +lemma eclose_singE : "x \ eclose({a}) \ x = a \ x \ eclose(a)" + by(blast dest: ecloseE) + +lemma in_eclose_sing : + assumes "x \ eclose({a})" "a \ eclose(z)" + shows "x \ eclose({z})" +proof - + from \x\eclose({a})\ + consider (eq) "x=a" | (lt) "x\eclose(a)" + using eclose_singE by auto + then + show ?thesis + using eclose_sing mem_eclose_trans assms + by (cases, auto) +qed + +lemma in_dom_in_eclose : + assumes "x \ domain(z)" + shows "x \ eclose(z)" +proof - + from assms + obtain y where "\x,y\ \ z" + unfolding domain_def by auto + then + show ?thesis + unfolding Pair_def + using ecloseD[of "{x,x}"] ecloseD[of "{{x,x},{x,y}}"] arg_into_eclose + by auto +qed + +text\term\ed\ is the well-founded relation on which \<^term>\val\ is defined.\ +definition + ed :: "[i,i] \ o" where + "ed(x,y) \ x \ domain(y)" + +definition + edrel :: "i \ i" where + "edrel(A) \ Rrel(ed,A)" + + +lemma edI[intro!]: "t\domain(x) \ ed(t,x)" + unfolding ed_def . + +lemma edD[dest!]: "ed(t,x) \ t\domain(x)" + unfolding ed_def . + + +lemma rank_ed: + assumes "ed(y,x)" + shows "succ(rank(y)) \ rank(x)" +proof + from assms + obtain p where "\y,p\\x" by auto + moreover + obtain s where "y\s" "s\\y,p\" unfolding Pair_def by auto + ultimately + have "rank(y) < rank(s)" "rank(s) < rank(\y,p\)" "rank(\y,p\) < rank(x)" + using rank_lt by blast+ + then + show "rank(y) < rank(x)" + using lt_trans by blast +qed + +lemma edrel_dest [dest]: "x \ edrel(A) \ \ a\ A. \ b \ A. x =\a,b\" + by(auto simp add:ed_def edrel_def Rrel_def) + +lemma edrelD : "x \ edrel(A) \ \ a\ A. \ b \ A. 
x =\a,b\ \ a \ domain(b)" + by(auto simp add:ed_def edrel_def Rrel_def) + +lemma edrelI [intro!]: "x\A \ y\A \ x \ domain(y) \ \x,y\\edrel(A)" + by (simp add:ed_def edrel_def Rrel_def) + +lemma edrel_trans: "Transset(A) \ y\A \ x \ domain(y) \ \x,y\\edrel(A)" + by (rule edrelI, auto simp add:Transset_def domain_def Pair_def) + +lemma domain_trans: "Transset(A) \ y\A \ x \ domain(y) \ x\A" + by (auto simp add: Transset_def domain_def Pair_def) + +lemma relation_edrel : "relation(edrel(A))" + by(auto simp add: relation_def) + +lemma field_edrel : "field(edrel(A))\A" + by blast + +lemma edrel_sub_memrel: "edrel(A) \ trancl(Memrel(eclose(A)))" +proof + fix z + assume + "z\edrel(A)" + then obtain x y where + Eq1: "x\A" "y\A" "z=\x,y\" "x\domain(y)" + using edrelD + by blast + then obtain u v where + Eq2: "x\u" "u\v" "v\y" + unfolding domain_def Pair_def by auto + with Eq1 have + Eq3: "x\eclose(A)" "y\eclose(A)" "u\eclose(A)" "v\eclose(A)" + by (auto, rule_tac [3-4] ecloseD, rule_tac [3] ecloseD, simp_all add:arg_into_eclose) + let + ?r="trancl(Memrel(eclose(A)))" + from Eq2 and Eq3 have + "\x,u\\?r" "\u,v\\?r" "\v,y\\?r" + by (auto simp add: r_into_trancl) + then have + "\x,y\\?r" + by (rule_tac trancl_trans, rule_tac [2] trancl_trans, simp) + with Eq1 show "z\?r" by simp +qed + +lemma wf_edrel : "wf(edrel(A))" + using wf_subset [of "trancl(Memrel(eclose(A)))"] edrel_sub_memrel + wf_trancl wf_Memrel + by auto + +lemma ed_induction: + assumes "\x. \\y. ed(y,x) \ Q(y) \ \ Q(x)" + shows "Q(a)" +proof(induct rule: wf_induct2[OF wf_edrel[of "eclose({a})"] ,of a "eclose({a})"]) + case 1 + then show ?case using arg_into_eclose by simp +next + case 2 + then show ?case using field_edrel . +next + case (3 x) + then + show ?case + using assms[of x] edrelI domain_trans[OF Transset_eclose 3(1)] by blast +qed + +lemma dom_under_edrel_eclose: "edrel(eclose({x})) -`` {x} = domain(x)" +proof + show "edrel(eclose({x})) -`` {x} \ domain(x)" + unfolding edrel_def Rrel_def ed_def + by auto +next + show "domain(x) \ edrel(eclose({x})) -`` {x}" + unfolding edrel_def Rrel_def + using in_dom_in_eclose eclose_sing arg_into_eclose + by blast +qed + +lemma ed_eclose : "\y,z\ \ edrel(A) \ y \ eclose(z)" + by(drule edrelD,auto simp add:domain_def in_dom_in_eclose) + +lemma tr_edrel_eclose : "\y,z\ \ edrel(eclose({x}))^+ \ y \ eclose(z)" + by(rule trancl_induct,(simp add: ed_eclose mem_eclose_trans)+) + + +lemma restrict_edrel_eq : + assumes "z \ domain(x)" + shows "edrel(eclose({x})) \ eclose({z})\eclose({z}) = edrel(eclose({z}))" +proof(intro equalityI subsetI) + let ?ec="\ y . edrel(eclose({y}))" + let ?ez="eclose({z})" + let ?rr="?ec(x) \ ?ez \ ?ez" + fix y + assume yr:"y \ ?rr" + with yr obtain a b where 1:"\a,b\ \ ?rr" "a \ ?ez" "b \ ?ez" "\a,b\ \ ?ec(x)" "y=\a,b\" + by blast + moreover + from this + have "a \ domain(b)" using edrelD by blast + ultimately + show "y \ edrel(eclose({z}))" by blast +next + let ?ec="\ y . 
edrel(eclose({y}))" + let ?ez="eclose({z})" + let ?rr="?ec(x) \ ?ez \ ?ez" + fix y + assume yr:"y \ edrel(?ez)" + then obtain a b where "a \ ?ez" "b \ ?ez" "y=\a,b\" "a \ domain(b)" + using edrelD by blast + moreover + from this assms + have "z \ eclose(x)" using in_dom_in_eclose by simp + moreover + from assms calculation + have "a \ eclose({x})" "b \ eclose({x})" using in_eclose_sing by simp_all + moreover + from this \a\domain(b)\ + have "\a,b\ \ edrel(eclose({x}))" by blast + ultimately + show "y \ ?rr" by simp +qed + +lemma tr_edrel_subset : + assumes "z \ domain(x)" + shows "tr_down(edrel(eclose({x})),z) \ eclose({z})" +proof(intro subsetI) + let ?r="\ x . edrel(eclose({x}))" + fix y + assume "y \ tr_down(?r(x),z)" + then + have "\y,z\ \ ?r(x)^+" using tr_downD by simp + with assms + show "y \ eclose({z})" using tr_edrel_eclose eclose_sing by simp +qed + + +context M_ctm +begin + +lemma upairM : "x \ M \ y \ M \ {x,y} \ M" + by (simp flip: setclass_iff) + +lemma singletonM : "a \ M \ {a} \ M" + by (simp flip: setclass_iff) + +lemma Rep_simp : "Replace(u,\ y z . z = f(y)) = { f(y) . y \ u}" + by(auto) + +end (* M_ctm *) + +subsection\Values and check-names\ +context forcing_data +begin + +definition + Hcheck :: "[i,i] \ i" where + "Hcheck(z,f) \ { \f`y,one\ . y \ z}" + +definition + check :: "i \ i" where + "check(x) \ transrec(x , Hcheck)" + +lemma checkD: + "check(x) = wfrec(Memrel(eclose({x})), x, Hcheck)" + unfolding check_def transrec_def .. + +definition + rcheck :: "i \ i" where + "rcheck(x) \ Memrel(eclose({x}))^+" + + +lemma Hcheck_trancl:"Hcheck(y, restrict(f,Memrel(eclose({x}))-``{y})) + = Hcheck(y, restrict(f,(Memrel(eclose({x}))^+)-``{y}))" + unfolding Hcheck_def + using restrict_trans_eq by simp + +lemma check_trancl: "check(x) = wfrec(rcheck(x), x, Hcheck)" + using checkD wf_eq_trancl Hcheck_trancl unfolding rcheck_def by simp + +(* relation of check is in M *) +lemma rcheck_in_M : + "x \ M \ rcheck(x) \ M" + unfolding rcheck_def by (simp flip: setclass_iff) + + +lemma aux_def_check: "x \ y \ + wfrec(Memrel(eclose({y})), x, Hcheck) = + wfrec(Memrel(eclose({x})), x, Hcheck)" + by (rule wfrec_eclose_eq,auto simp add: arg_into_eclose eclose_sing) + +lemma def_check : "check(y) = { \check(w),one\ . w \ y}" +proof - + let + ?r="\y. Memrel(eclose({y}))" + have wfr: "\w . wf(?r(w))" + using wf_Memrel .. + then + have "check(y)= Hcheck( y, \x\?r(y) -`` {y}. wfrec(?r(y), x, Hcheck))" + using wfrec[of "?r(y)" y "Hcheck"] checkD by simp + also + have " ... = Hcheck( y, \x\y. wfrec(?r(y), x, Hcheck))" + using under_Memrel_eclose arg_into_eclose by simp + also + have " ... = Hcheck( y, \x\y. check(x))" + using aux_def_check checkD by simp + finally show ?thesis using Hcheck_def by simp +qed + + +lemma def_checkS : + fixes n + assumes "n \ nat" + shows "check(succ(n)) = check(n) \ {\check(n),one\}" +proof - + have "check(succ(n)) = {\check(i),one\ . i \ succ(n)} " + using def_check by blast + also have "... = {\check(i),one\ . i \ n} \ {\check(n),one\}" + by blast + also have "... = check(n) \ {\check(n),one\}" + using def_check[of n,symmetric] by simp + finally show ?thesis . +qed + +lemma field_Memrel2 : + assumes "x \ M" + shows "field(Memrel(eclose({x}))) \ M" +proof - + have "field(Memrel(eclose({x}))) \ eclose({x})" "eclose({x}) \ M" + using Ordinal.Memrel_type field_rel_subset assms eclose_least[OF trans_M] by auto + then + show ?thesis using subset_trans by simp +qed + +definition + Hv :: "i\i\i\i" where + "Hv(G,x,f) \ { f`y .. y\ domain(x), \p\P. 
\y,p\ \ x \ p \ G }" + +text\The funcion \<^term>\val\ interprets a name in \<^term>\M\ +according to a (generic) filter \<^term>\G\. Note the definition +in terms of the well-founded recursor.\ + +definition + val :: "i\i\i" where + "val(G,\) \ wfrec(edrel(eclose({\})), \ ,Hv(G))" + +lemma aux_def_val: + assumes "z \ domain(x)" + shows "wfrec(edrel(eclose({x})),z,Hv(G)) = wfrec(edrel(eclose({z})),z,Hv(G))" +proof - + let ?r="\x . edrel(eclose({x}))" + have "z\eclose({z})" using arg_in_eclose_sing . + moreover + have "relation(?r(x))" using relation_edrel . + moreover + have "wf(?r(x))" using wf_edrel . + moreover from assms + have "tr_down(?r(x),z) \ eclose({z})" using tr_edrel_subset by simp + ultimately + have "wfrec(?r(x),z,Hv(G)) = wfrec[eclose({z})](?r(x),z,Hv(G))" + using wfrec_restr by simp + also from \z\domain(x)\ + have "... = wfrec(?r(z),z,Hv(G))" + using restrict_edrel_eq wfrec_restr_eq by simp + finally show ?thesis . +qed + +text\The next lemma provides the usual recursive expresion for the definition of term\val\.\ + +lemma def_val: "val(G,x) = {val(G,t) .. t\domain(x) , \p\P . \t,p\\x \ p \ G }" +proof - + let + ?r="\\ . edrel(eclose({\}))" + let + ?f="\z\?r(x)-``{x}. wfrec(?r(x),z,Hv(G))" + have "\\. wf(?r(\))" using wf_edrel by simp + with wfrec [of _ x] + have "val(G,x) = Hv(G,x,?f)" using val_def by simp + also + have " ... = Hv(G,x,\z\domain(x). wfrec(?r(x),z,Hv(G)))" + using dom_under_edrel_eclose by simp + also + have " ... = Hv(G,x,\z\domain(x). val(G,z))" + using aux_def_val val_def by simp + finally + show ?thesis using Hv_def SepReplace_def by simp +qed + +lemma val_mono : "x\y \ val(G,x) \ val(G,y)" + by (subst (1 2) def_val, force) + +text\Check-names are the canonical names for elements of the +ground model. Here we show that this is the case.\ + +lemma valcheck : "one \ G \ one \ P \ val(G,check(y)) = y" +proof (induct rule:eps_induct) + case (1 y) + then show ?case + proof - + have "check(y) = { \check(w), one\ . w \ y}" (is "_ = ?C") + using def_check . + then + have "val(G,check(y)) = val(G, {\check(w), one\ . w \ y})" + by simp + also + have " ... = {val(G,t) .. t\domain(?C) , \p\P . \t, p\\?C \ p \ G }" + using def_val by blast + also + have " ... = {val(G,t) .. t\domain(?C) , \w\y. t=check(w) }" + using 1 by simp + also + have " ... = {val(G,check(w)) . w\y }" + by force + finally + show "val(G,check(y)) = y" + using 1 by simp + qed +qed + +lemma val_of_name : + "val(G,{x\A\P. Q(x)}) = {val(G,t) .. t\A , \p\P . Q(\t,p\) \ p \ G }" +proof - + let + ?n="{x\A\P. Q(x)}" and + ?r="\\ . edrel(eclose({\}))" + let + ?f="\z\?r(?n)-``{?n}. val(G,z)" + have + wfR : "wf(?r(\))" for \ + by (simp add: wf_edrel) + have "domain(?n) \ A" by auto + { fix t + assume H:"t \ domain({x \ A \ P . Q(x)})" + then have "?f ` t = (if t \ ?r(?n)-``{?n} then val(G,t) else 0)" + by simp + moreover have "... = val(G,t)" + using dom_under_edrel_eclose H if_P by auto + } + then + have Eq1: "t \ domain({x \ A \ P . Q(x)}) \ val(G,t) = ?f` t" for t + by simp + have "val(G,?n) = {val(G,t) .. t\domain(?n), \p \ P . \t,p\ \ ?n \ p \ G}" + by (subst def_val,simp) + also + have "... = {?f`t .. t\domain(?n), \p\P . \t,p\\?n \ p\G}" + unfolding Hv_def + by (subst SepReplace_dom_implies,auto simp add:Eq1) + also + have "... = { (if t\?r(?n)-``{?n} then val(G,t) else 0) .. t\domain(?n), \p\P . \t,p\\?n \ p\G}" + by (simp) + also + have Eq2: "... = { val(G,t) .. t\domain(?n), \p\P . 
\t,p\\?n \ p\G}" + proof - + have "domain(?n) \ ?r(?n)-``{?n}" + using dom_under_edrel_eclose by simp + then + have "\t\domain(?n). (if t\?r(?n)-``{?n} then val(G,t) else 0) = val(G,t)" + by auto + then + show "{ (if t\?r(?n)-``{?n} then val(G,t) else 0) .. t\domain(?n), \p\P . \t,p\\?n \ p\G} = + { val(G,t) .. t\domain(?n), \p\P . \t,p\\?n \ p\G}" + by auto + qed + also + have " ... = { val(G,t) .. t\A, \p\P . \t,p\\?n \ p\G}" + by force + finally + show " val(G,?n) = { val(G,t) .. t\A, \p\P . Q(\t,p\) \ p\G}" + by auto +qed + +lemma val_of_name_alt : + "val(G,{x\A\P. Q(x)}) = {val(G,t) .. t\A , \p\P\G . Q(\t,p\) }" + using val_of_name by force + +lemma val_only_names: "val(F,\) = val(F,{x\\. \t\domain(\). \p\P. x=\t,p\})" + (is "_ = val(F,?name)") +proof - + have "val(F,?name) = {val(F, t).. t\domain(?name), \p\P. \t, p\ \ ?name \ p \ F}" + using def_val by blast + also + have " ... = {val(F, t). t\{y\domain(?name). \p\P. \y, p\ \ ?name \ p \ F}}" + using Sep_and_Replace by simp + also + have " ... = {val(F, t). t\{y\domain(\). \p\P. \y, p\ \ \ \ p \ F}}" + by blast + also + have " ... = {val(F, t).. t\domain(\), \p\P. \t, p\ \ \ \ p \ F}" + using Sep_and_Replace by simp + also + have " ... = val(F, \)" + using def_val[symmetric] by blast + finally + show ?thesis .. +qed + +lemma val_only_pairs: "val(F,\) = val(F,{x\\. \t p. x=\t,p\})" +proof + have "val(F,\) = val(F,{x\\. \t\domain(\). \p\P. x=\t,p\})" + (is "_ = val(F,?name)") + using val_only_names . + also + have "... \ val(F,{x\\. \t p. x=\t,p\})" + using val_mono[of ?name "{x\\. \t p. x=\t,p\}"] by auto + finally + show "val(F,\) \ val(F,{x\\. \t p. x=\t,p\})" by simp +next + show "val(F,{x\\. \t p. x=\t,p\}) \ val(F,\)" + using val_mono[of "{x\\. \t p. x=\t,p\}"] by auto +qed + +lemma val_subset_domain_times_range: "val(F,\) \ val(F,domain(\)\range(\))" + using val_only_pairs[THEN equalityD1] + val_mono[of "{x \ \ . \t p. x = \t, p\}" "domain(\)\range(\)"] by blast + +lemma val_subset_domain_times_P: "val(F,\) \ val(F,domain(\)\P)" + using val_only_names[of F \] val_mono[of "{x\\. \t\domain(\). \p\P. x=\t,p\}" "domain(\)\P" F] + by auto + +definition + GenExt :: "i\i" ("M[_]") + where "GenExt(G)\ {val(G,\). \ \ M}" + + +lemma val_of_elem: "\\,p\ \ \ \ p\G \ p\P \ val(G,\) \ val(G,\)" +proof - + assume + "\\,p\ \ \" + then + have "\\domain(\)" by auto + assume "p\G" "p\P" + with \\\domain(\)\ \\\,p\ \ \\ + have "val(G,\) \ {val(G,t) .. t\domain(\) , \p\P . \t, p\\\ \ p \ G }" + by auto + then + show ?thesis by (subst def_val) +qed + +lemma elem_of_val: "x\val(G,\) \ \\\domain(\). val(G,\) = x" + by (subst (asm) def_val,auto) + +lemma elem_of_val_pair: "x\val(G,\) \ \\. \p\G. \\,p\\\ \ val(G,\) = x" + by (subst (asm) def_val,auto) + +lemma elem_of_val_pair': + assumes "\\M" "x\val(G,\)" + shows "\\\M. \p\G. \\,p\\\ \ val(G,\) = x" +proof - + from assms + obtain \ p where "p\G" "\\,p\\\" "val(G,\) = x" + using elem_of_val_pair by blast + moreover from this \\\M\ + have "\\M" + using pair_in_M_iff[THEN iffD1, THEN conjunct1, simplified] + transitivity by blast + ultimately + show ?thesis by blast +qed + + +lemma GenExtD: + "x \ M[G] \ \\\M. 
x = val(G,\)" + by (simp add:GenExt_def) + +lemma GenExtI: + "x \ M \ val(G,x) \ M[G]" + by (auto simp add: GenExt_def) + +lemma Transset_MG : "Transset(M[G])" +proof - + { fix vc y + assume "vc \ M[G]" and "y \ vc" + then obtain c where "c\M" "val(G,c)\M[G]" "y \ val(G,c)" + using GenExtD by auto + from \y \ val(G,c)\ + obtain \ where "\\domain(c)" "val(G,\) = y" + using elem_of_val by blast + with trans_M \c\M\ + have "y \ M[G]" + using domain_trans GenExtI by blast + } + then + show ?thesis using Transset_def by auto +qed + +lemmas transitivity_MG = Transset_intf[OF Transset_MG] + +lemma check_n_M : + fixes n + assumes "n \ nat" + shows "check(n) \ M" + using \n\nat\ +proof (induct n) + case 0 + then show ?case using zero_in_M by (subst def_check,simp) +next + case (succ x) + have "one \ M" using one_in_P P_sub_M subsetD by simp + with \check(x)\M\ + have "\check(x),one\ \ M" + using tuples_in_M by simp + then + have "{\check(x),one\} \ M" + using singletonM by simp + with \check(x)\M\ + have "check(x) \ {\check(x),one\} \ M" + using Un_closed by simp + then show ?case using \x\nat\ def_checkS by simp +qed + + +definition + PHcheck :: "[i,i,i,i] \ o" where + "PHcheck(o,f,y,p) \ p\M \ (\fy[##M]. fun_apply(##M,f,y,fy) \ pair(##M,fy,o,p))" + +definition + is_Hcheck :: "[i,i,i,i] \ o" where + "is_Hcheck(o,z,f,hc) \ is_Replace(##M,z,PHcheck(o,f),hc)" + +lemma one_in_M: "one \ M" + by (insert one_in_P P_in_M, simp add: transitivity) + +lemma def_PHcheck: + assumes + "z\M" "f\M" + shows + "Hcheck(z,f) = Replace(z,PHcheck(one,f))" +proof - + from assms + have "\f`x,one\ \ M" "f`x\M" if "x\z" for x + using tuples_in_M one_in_M transitivity that apply_closed by simp_all + then + have "{y . x \ z, y = \f ` x, one\} = {y . x \ z, y = \f ` x, one\ \ y\M \ f`x\M}" + by simp + then + show ?thesis + using \z\M\ \f\M\ transitivity + unfolding Hcheck_def PHcheck_def RepFun_def + by auto +qed + +(* + "PHcheck(o,f,y,p) \ \fy[##M]. 
fun_apply(##M,f,y,fy) \ pair(##M,fy,o,p)" +*) +definition + PHcheck_fm :: "[i,i,i,i] \ i" where + "PHcheck_fm(o,f,y,p) \ Exists(And(fun_apply_fm(succ(f),succ(y),0) + ,pair_fm(0,succ(o),succ(p))))" + +lemma PHcheck_type [TC]: + "\ x \ nat; y \ nat; z \ nat; u \ nat \ \ PHcheck_fm(x,y,z,u) \ formula" + by (simp add:PHcheck_fm_def) + +lemma sats_PHcheck_fm [simp]: + "\ x \ nat; y \ nat; z \ nat; u \ nat ; env \ list(M)\ + \ sats(M,PHcheck_fm(x,y,z,u),env) \ + PHcheck(nth(x,env),nth(y,env),nth(z,env),nth(u,env))" + using zero_in_M Internalizations.nth_closed by (simp add: PHcheck_def PHcheck_fm_def) + +(* + "is_Hcheck(o,z,f,hc) \ is_Replace(##M,z,PHcheck(o,f),hc)" +*) +definition + is_Hcheck_fm :: "[i,i,i,i] \ i" where + "is_Hcheck_fm(o,z,f,hc) \ Replace_fm(z,PHcheck_fm(succ(succ(o)),succ(succ(f)),0,1),hc)" + +lemma is_Hcheck_type [TC]: + "\ x \ nat; y \ nat; z \ nat; u \ nat \ \ is_Hcheck_fm(x,y,z,u) \ formula" + by (simp add:is_Hcheck_fm_def) + +lemma sats_is_Hcheck_fm [simp]: + "\ x \ nat; y \ nat; z \ nat; u \ nat ; env \ list(M)\ + \ sats(M,is_Hcheck_fm(x,y,z,u),env) \ + is_Hcheck(nth(x,env),nth(y,env),nth(z,env),nth(u,env))" + using sats_Replace_fm unfolding is_Hcheck_def is_Hcheck_fm_def + by simp + + +(* instance of replacement for hcheck *) +lemma wfrec_Hcheck : + assumes + "X\M" + shows + "wfrec_replacement(##M,is_Hcheck(one),rcheck(X))" +proof - + have "is_Hcheck(one,a,b,c) \ + sats(M,is_Hcheck_fm(8,2,1,0),[c,b,a,d,e,y,x,z,one,rcheck(x)])" + if "a\M" "b\M" "c\M" "d\M" "e\M" "y\M" "x\M" "z\M" + for a b c d e y x z + using that one_in_M \X\M\ rcheck_in_M by simp + then have 1:"sats(M,is_wfrec_fm(is_Hcheck_fm(8,2,1,0),4,1,0), + [y,x,z,one,rcheck(X)]) \ + is_wfrec(##M, is_Hcheck(one),rcheck(X), x, y)" + if "x\M" "y\M" "z\M" for x y z + using that sats_is_wfrec_fm \X\M\ rcheck_in_M one_in_M by simp + let + ?f="Exists(And(pair_fm(1,0,2), + is_wfrec_fm(is_Hcheck_fm(8,2,1,0),4,1,0)))" + have satsf:"sats(M, ?f, [x,z,one,rcheck(X)]) \ + (\y\M. pair(##M,x,y,z) & is_wfrec(##M, is_Hcheck(one),rcheck(X), x, y))" + if "x\M" "z\M" for x z + using that 1 \X\M\ rcheck_in_M one_in_M by (simp del:pair_abs) + have artyf:"arity(?f) = 4" + unfolding is_wfrec_fm_def is_Hcheck_fm_def Replace_fm_def PHcheck_fm_def + pair_fm_def upair_fm_def is_recfun_fm_def fun_apply_fm_def big_union_fm_def + pre_image_fm_def restriction_fm_def image_fm_def + by (simp add:nat_simp_union) + then + have "strong_replacement(##M,\x z. sats(M,?f,[x,z,one,rcheck(X)]))" + using replacement_ax 1 artyf \X\M\ rcheck_in_M one_in_M by simp + then + have "strong_replacement(##M,\x z. + \y\M. pair(##M,x,y,z) & is_wfrec(##M, is_Hcheck(one),rcheck(X), x, y))" + using repl_sats[of M ?f "[one,rcheck(X)]"] satsf by (simp del:pair_abs) + then + show ?thesis unfolding wfrec_replacement_def by simp +qed + +lemma repl_PHcheck : + assumes + "f\M" + shows + "strong_replacement(##M,PHcheck(one,f))" +proof - + have "arity(PHcheck_fm(2,3,0,1)) = 4" + unfolding PHcheck_fm_def fun_apply_fm_def big_union_fm_def pair_fm_def image_fm_def + upair_fm_def + by (simp add:nat_simp_union) + with \f\M\ + have "strong_replacement(##M,\x y. 
sats(M,PHcheck_fm(2,3,0,1),[x,y,one,f]))" + using replacement_ax one_in_M by simp + with \f\M\ + show ?thesis using one_in_M unfolding strong_replacement_def univalent_def by simp +qed + +lemma univ_PHcheck : "\ z\M ; f\M \ \ univalent(##M,z,PHcheck(one,f))" + unfolding univalent_def PHcheck_def by simp + +lemma relation2_Hcheck : + "relation2(##M,is_Hcheck(one),Hcheck)" +proof - + have 1:"\x\z; PHcheck(one,f,x,y) \ \ (##M)(y)" + if "z\M" "f\M" for z f x y + using that unfolding PHcheck_def by simp + have "is_Replace(##M,z,PHcheck(one,f),hc) \ hc = Replace(z,PHcheck(one,f))" + if "z\M" "f\M" "hc\M" for z f hc + using that Replace_abs[OF _ _ univ_PHcheck 1] by simp + with def_PHcheck + show ?thesis + unfolding relation2_def is_Hcheck_def Hcheck_def by simp +qed + +lemma PHcheck_closed : + "\z\M ; f\M ; x\z; PHcheck(one,f,x,y) \ \ (##M)(y)" + unfolding PHcheck_def by simp + +lemma Hcheck_closed : + "\y\M. \g\M. function(g) \ Hcheck(y,g)\M" +proof - + have "Replace(y,PHcheck(one,f))\M" if "f\M" "y\M" for f y + using that repl_PHcheck PHcheck_closed[of y f] univ_PHcheck + strong_replacement_closed + by (simp flip: setclass_iff) + then show ?thesis using def_PHcheck by auto +qed + +lemma wf_rcheck : "x\M \ wf(rcheck(x))" + unfolding rcheck_def using wf_trancl[OF wf_Memrel] . + +lemma trans_rcheck : "x\M \ trans(rcheck(x))" + unfolding rcheck_def using trans_trancl . + +lemma relation_rcheck : "x\M \ relation(rcheck(x))" + unfolding rcheck_def using relation_trancl . + +lemma check_in_M : "x\M \ check(x) \ M" + unfolding transrec_def + using wfrec_Hcheck[of x] check_trancl wf_rcheck trans_rcheck relation_rcheck rcheck_in_M + Hcheck_closed relation2_Hcheck trans_wfrec_closed[of "rcheck(x)" x "is_Hcheck(one)" Hcheck] + by (simp flip: setclass_iff) + +end (* forcing_data *) + +(* check if this should go to Relative! *) +definition + is_singleton :: "[i\o,i,i] \ o" where + "is_singleton(A,x,z) \ \c[A]. empty(A,c) \ is_cons(A,x,c,z)" + +lemma (in M_trivial) singleton_abs[simp] : "\ M(x) ; M(s) \ \ is_singleton(M,x,s) \ s = {x}" + unfolding is_singleton_def using nonempty by simp + +definition + singleton_fm :: "[i,i] \ i" where + "singleton_fm(i,j) \ Exists(And(empty_fm(0), cons_fm(succ(i),0,succ(j))))" + +lemma singleton_type[TC] : "\ x \ nat; y \ nat \ \ singleton_fm(x,y) \ formula" + unfolding singleton_fm_def by simp + +lemma is_singleton_iff_sats: + "\ nth(i,env) = x; nth(j,env) = y; + i \ nat; j\nat ; env \ list(A)\ + \ is_singleton(##A,x,y) \ sats(A, singleton_fm(i,j), env)" + unfolding is_singleton_def singleton_fm_def by simp + +context forcing_data begin + +(* Internalization and absoluteness of rcheck *) +definition + is_rcheck :: "[i,i] \ o" where + "is_rcheck(x,z) \ \r\M. tran_closure(##M,r,z) \ (\ec\M. membership(##M,ec,r) \ + (\s\M. is_singleton(##M,x,s) \ is_eclose(##M,s,ec)))" + +lemma rcheck_abs : + "\ x\M ; r\M \ \ is_rcheck(x,r) \ r = rcheck(x)" + unfolding rcheck_def is_rcheck_def + using singletonM trancl_closed Memrel_closed eclose_closed by simp + +schematic_goal rcheck_fm_auto: + assumes + "i \ nat" "j \ nat" "env \ list(M)" + shows + "is_rcheck(nth(i,env),nth(j,env)) \ sats(M,?rch(i,j),env)" + unfolding is_rcheck_def + by (insert assms ; (rule sep_rules is_singleton_iff_sats is_eclose_iff_sats + trans_closure_fm_iff_sats | simp)+) + +synthesize "rcheck_fm" from_schematic rcheck_fm_auto + +definition + is_check :: "[i,i] \ o" where + "is_check(x,z) \ \rch\M. 
is_rcheck(x,rch) \ is_wfrec(##M,is_Hcheck(one),rch,x,z)" + +lemma check_abs : + assumes + "x\M" "z\M" + shows + "is_check(x,z) \ z = check(x)" +proof - + have + "is_check(x,z) \ is_wfrec(##M,is_Hcheck(one),rcheck(x),x,z)" + unfolding is_check_def using assms rcheck_abs rcheck_in_M + unfolding check_trancl is_check_def by simp + then show ?thesis + unfolding check_trancl + using assms wfrec_Hcheck[of x] wf_rcheck trans_rcheck relation_rcheck rcheck_in_M + Hcheck_closed relation2_Hcheck trans_wfrec_abs[of "rcheck(x)" x z "is_Hcheck(one)" Hcheck] + by (simp flip: setclass_iff) +qed + +(* \rch\M. is_rcheck(x,rch) \ is_wfrec(##M,is_Hcheck(one),rch,x,z) *) +definition + check_fm :: "[i,i,i] \ i" where + "check_fm(x,o,z) \ Exists(And(rcheck_fm(1#+x,0), + is_wfrec_fm(is_Hcheck_fm(6#+o,2,1,0),0,1#+x,1#+z)))" + +lemma check_fm_type[TC] : + "\x\nat;o\nat;z\nat\ \ check_fm(x,o,z)\formula" + unfolding check_fm_def by simp + +lemma sats_check_fm : + assumes + "nth(o,env) = one" "x\nat" "z\nat" "o\nat" "env\list(M)" "x < length(env)" "z < length(env)" + shows + "sats(M, check_fm(x,o,z), env) \ is_check(nth(x,env),nth(z,env))" +proof - + have sats_is_Hcheck_fm: + "\a0 a1 a2 a3 a4. \ a0\M; a1\M; a2\M; a3\M; a4\M \ \ + is_Hcheck(one,a2, a1, a0) \ + sats(M, is_Hcheck_fm(6#+o,2,1,0), [a0,a1,a2,a3,a4,r]@env)" if "r\M" for r + using that one_in_M assms by simp + then + have "sats(M, is_wfrec_fm(is_Hcheck_fm(6#+o,2,1,0),0,1#+x,1#+z),Cons(r,env)) + \ is_wfrec(##M,is_Hcheck(one),r,nth(x,env),nth(z,env))" if "r\M" for r + using that assms one_in_M sats_is_wfrec_fm by simp + then + show ?thesis unfolding is_check_def check_fm_def + using assms rcheck_in_M one_in_M rcheck_fm_iff_sats[symmetric] by simp +qed + + +lemma check_replacement: + "{check(x). x\P} \ M" +proof - + have "arity(check_fm(0,2,1)) = 3" + unfolding check_fm_def rcheck_fm_def trans_closure_fm_def is_eclose_fm_def mem_eclose_fm_def + is_Hcheck_fm_def Replace_fm_def PHcheck_fm_def finite_ordinal_fm_def is_iterates_fm_def + is_wfrec_fm_def is_recfun_fm_def restriction_fm_def pre_image_fm_def eclose_n_fm_def + is_nat_case_fm_def quasinat_fm_def Memrel_fm_def singleton_fm_def fm_defs iterates_MH_fm_def + by (simp add:nat_simp_union) + moreover + have "check(x)\M" if "x\P" for x + using that Transset_intf[of M x P] trans_M check_in_M P_in_M by simp + ultimately + show ?thesis using sats_check_fm check_abs P_in_M check_in_M one_in_M + Repl_in_M[of "check_fm(0,2,1)" "[one]" is_check check] by simp +qed + +lemma pair_check : "\ p\M ; y\M \ \ (\c\M. is_check(p,c) \ pair(##M,c,p,y)) \ y = \check(p),p\" + using check_abs check_in_M tuples_in_M by simp + + +lemma M_subset_MG : "one \ G \ M \ M[G]" + using check_in_M one_in_P GenExtI + by (intro subsetI, subst valcheck [of G,symmetric], auto) + +text\The name for the generic filter\ +definition + G_dot :: "i" where + "G_dot \ {\check(p),p\ . p\P}" + +lemma G_dot_in_M : + "G_dot \ M" +proof - + let ?is_pcheck = "\x y. \ch\M. 
is_check(x,ch) \ pair(##M,ch,x,y)" + let ?pcheck_fm = "Exists(And(check_fm(1,3,0),pair_fm(0,1,2)))" + have "sats(M,?pcheck_fm,[x,y,one]) \ ?is_pcheck(x,y)" if "x\M" "y\M" for x y + using sats_check_fm that one_in_M by simp + moreover + have "?is_pcheck(x,y) \ y = \check(x),x\" if "x\M" "y\M" for x y + using that check_abs check_in_M by simp + moreover + have "?pcheck_fm\formula" by simp + moreover + have "arity(?pcheck_fm)=3" + unfolding check_fm_def rcheck_fm_def trans_closure_fm_def is_eclose_fm_def mem_eclose_fm_def + is_Hcheck_fm_def Replace_fm_def PHcheck_fm_def finite_ordinal_fm_def is_iterates_fm_def + is_wfrec_fm_def is_recfun_fm_def restriction_fm_def pre_image_fm_def eclose_n_fm_def + is_nat_case_fm_def quasinat_fm_def Memrel_fm_def singleton_fm_def fm_defs iterates_MH_fm_def + by (simp add:nat_simp_union) + moreover + from P_in_M check_in_M tuples_in_M P_sub_M + have "\check(p),p\ \ M" if "p\P" for p + using that by auto + ultimately + show ?thesis + unfolding G_dot_def + using one_in_M P_in_M Repl_in_M[of ?pcheck_fm "[one]"] + by simp +qed + + +lemma val_G_dot : + assumes "G \ P" + "one \ G" + shows "val(G,G_dot) = G" +proof (intro equalityI subsetI) + fix x + assume "x\val(G,G_dot)" + then obtain \ p where "p\G" "\\,p\ \ G_dot" "val(G,\) = x" "\ = check(p)" + unfolding G_dot_def using elem_of_val_pair G_dot_in_M + by force + with \one\G\ \G\P\ show + "x \ G" + using valcheck P_sub_M by auto +next + fix p + assume "p\G" + have "\check(q),q\ \ G_dot" if "q\P" for q + unfolding G_dot_def using that by simp + with \p\G\ \G\P\ + have "val(G,check(p)) \ val(G,G_dot)" + using val_of_elem G_dot_in_M by blast + with \p\G\ \G\P\ \one\G\ + show "p \ val(G,G_dot)" + using P_sub_M valcheck by auto +qed + + +lemma G_in_Gen_Ext : + assumes "G \ P" and "one \ G" + shows "G \ M[G]" + using assms val_G_dot GenExtI[of _ G] G_dot_in_M + by force + +(* Move this to M_trivial *) +lemma fst_snd_closed: "p\M \ fst(p) \ M \ snd(p)\ M" +proof (cases "\a. \b. p = \a, b\") + case False + then + show "fst(p) \ M \ snd(p) \ M" unfolding fst_def snd_def using zero_in_M by auto +next + case True + then + obtain a b where "p = \a, b\" by blast + with True + have "fst(p) = a" "snd(p) = b" unfolding fst_def snd_def by simp_all + moreover + assume "p\M" + moreover from this + have "a\M" + unfolding \p = _\ Pair_def by (force intro:Transset_M[OF trans_M]) + moreover from \p\M\ + have "b\M" + using Transset_M[OF trans_M, of "{a,b}" p] Transset_M[OF trans_M, of "b" "{a,b}"] + unfolding \p = _\ Pair_def by (simp) + ultimately + show ?thesis by simp +qed + +end (* forcing_data *) + +locale G_generic = forcing_data + + fixes G :: "i" + assumes generic : "M_generic(G)" +begin + +lemma zero_in_MG : + "0 \ M[G]" +proof - + have "0 = val(G,0)" + using zero_in_M elem_of_val by auto + also + have "... \ M[G]" + using GenExtI zero_in_M by simp + finally show ?thesis . +qed + +lemma G_nonempty: "G\0" +proof - + have "P\P" .. 
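+    (* P is a dense subset of itself lying in M, so genericity forces G to meet it; hence G is nonempty *)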
+ with P_in_M P_dense \P\P\ + show "G \ 0" + using generic unfolding M_generic_def by auto +qed + +end (* context G_generic *) +end \ No newline at end of file diff --git a/thys/Forcing/Nat_Miscellanea.thy b/thys/Forcing/Nat_Miscellanea.thy new file mode 100644 --- /dev/null +++ b/thys/Forcing/Nat_Miscellanea.thy @@ -0,0 +1,277 @@ +section\Auxiliary results on arithmetic\ +theory Nat_Miscellanea imports ZF begin + +text\Most of these results will get used at some point for the +calculation of arities.\ +lemmas nat_succI = Ord_succ_mem_iff [THEN iffD2,OF nat_into_Ord] + +lemma nat_succD : "m \ nat \ succ(n) \ succ(m) \ n \ m" + by (drule_tac j="succ(m)" in ltI,auto elim:ltD) + +lemmas zero_in = ltD [OF nat_0_le] + +lemma in_n_in_nat : "m \ nat \ n \ m \ n \ nat" + by(drule ltI[of "n"],auto simp add: lt_nat_in_nat) + +lemma in_succ_in_nat : "m \ nat \ n \ succ(m) \ n \ nat" + by(auto simp add:in_n_in_nat) + +lemma ltI_neg : "x \ nat \ j \ x \ j \ x \ j < x" + by (simp add: le_iff) + +lemma succ_pred_eq : "m \ nat \ m \ 0 \ succ(pred(m)) = m" + by (auto elim: natE) + +lemma succ_ltI : "succ(j) < n \ j < n" + by (simp add: succ_leE[OF leI]) + +lemma succ_In : "n \ nat \ succ(j) \ n \ j \ n" + by (rule succ_ltI[THEN ltD], auto intro: ltI) + +lemmas succ_leD = succ_leE[OF leI] + +lemma succpred_leI : "n \ nat \ n \ succ(pred(n))" + by (auto elim: natE) + +lemma succpred_n0 : "succ(n) \ p \ p\0" + by (auto) + + +lemma funcI : "f \ A \ B \ a \ A \ b= f ` a \ \a, b\ \ f" + by(simp_all add: apply_Pair) + +lemmas natEin = natE [OF lt_nat_in_nat] + +lemma succ_in : "succ(x) \ y \ x \ y" + by (auto dest:ltD) + +lemmas Un_least_lt_iffn = Un_least_lt_iff [OF nat_into_Ord nat_into_Ord] + +lemma pred_le2 : "n\ nat \ m \ nat \ pred(n) \ m \ n \ succ(m)" + by(subgoal_tac "n\nat",rule_tac n="n" in natE,auto) + +lemma pred_le : "n\ nat \ m \ nat \ n \ succ(m) \ pred(n) \ m" + by(subgoal_tac "pred(n)\nat",rule_tac n="n" in natE,auto) + +lemma Un_leD1 : "Ord(i)\ Ord(j)\ Ord(k)\ i \ j \ k \ i \ k" + by (rule Un_least_lt_iff[THEN iffD1[THEN conjunct1]],simp_all) + +lemma Un_leD2 : "Ord(i)\ Ord(j)\ Ord(k)\ i \ j \k \ j \ k" + by (rule Un_least_lt_iff[THEN iffD1[THEN conjunct2]],simp_all) + +lemma gt1 : "n \ nat \ i \ n \ i \ 0 \ i \ 1 \ 1 nat \ n \ m \ pred(n) \ pred(m)" + by(rule_tac n="n" in natE,auto simp add:le_in_nat,erule_tac n="m" in natE,auto) + +lemma succ_mono : "m \ nat \ n \ m \ succ(n) \ succ(m)" + by auto + +lemma pred2_Un: + assumes "j \ nat" "m \ j" "n \ j" + shows "pred(pred(m \ n)) \ pred(pred(j))" + using assms pred_mono[of "j"] le_in_nat Un_least_lt pred_mono by simp + +lemma nat_union_abs1 : + "\ Ord(i) ; Ord(j) ; i \ j \ \ i \ j = j" + by (rule Un_absorb1,erule le_imp_subset) + +lemma nat_union_abs2 : + "\ Ord(i) ; Ord(j) ; i \ j \ \ j \ i = j" + by (rule Un_absorb2,erule le_imp_subset) + +lemma nat_un_max : "Ord(i) \ Ord(j) \ i \ j = max(i,j)" + using max_def nat_union_abs1 not_lt_iff_le leI nat_union_abs2 + by auto + +lemma nat_max_ty : "Ord(i) \Ord(j) \ Ord(max(i,j))" + unfolding max_def by simp + +lemma le_not_lt_nat : "Ord(p) \ Ord(q) \ \ p\ q \ q \ p" + by (rule ltE,rule not_le_iff_lt[THEN iffD1],auto,drule ltI[of q p],auto,erule leI) + +lemmas nat_simp_union = nat_un_max nat_max_ty max_def + +lemma le_succ : "x\nat \ x\succ(x)" by simp +lemma le_pred : "x\nat \ pred(x)\x" + using pred_le[OF _ _ le_succ] pred_succ_eq + by simp + +lemma Un_le_compat : "o \ p \ q \ r \ Ord(o) \ Ord(p) \ Ord(q) \ Ord(r) \ o \ q \ p \ r" + using le_trans[of q r "p\r",OF _ Un_upper2_le] le_trans[of o p 
"p\r",OF _ Un_upper1_le] + nat_simp_union + by auto + +lemma Un_le : "p \ r \ q \ r \ + Ord(p) \ Ord(q) \ Ord(r) \ + p \ q \ r" + using nat_simp_union by auto + +lemma Un_leI3 : "o \ r \ p \ r \ q \ r \ + Ord(o) \ Ord(p) \ Ord(q) \ Ord(r) \ + o \ p \ q \ r" + using nat_simp_union by auto + +lemma diff_mono : + assumes "m \ nat" "n\nat" "p \ nat" "m < n" "p\m" + shows "m#-p < n#-p" +proof - + from assms + have "m#-p \ nat" "m#-p #+p = m" + using add_diff_inverse2 by simp_all + with assms + show ?thesis + using less_diff_conv[of n p "m #- p",THEN iffD2] by simp +qed + +lemma pred_Un: + "x \ nat \ y \ nat \ Arith.pred(succ(x) \ y) = x \ Arith.pred(y)" + "x \ nat \ y \ nat \ Arith.pred(x \ succ(y)) = Arith.pred(x) \ y" + using pred_Un_distrib pred_succ_eq by simp_all + +lemma le_natI : "j \ n \ n \ nat \ j\nat" + by(drule ltD,rule in_n_in_nat,rule nat_succ_iff[THEN iffD2,of n],simp_all) + +lemma le_natE : "n\nat \ j < n \ j\n" + by(rule ltE[of j n],simp+) + +lemma diff_cancel : + assumes "m \ nat" "n\nat" "m < n" + shows "m#-n = 0" + using assms diff_is_0_lemma leI by simp + +lemma leD : assumes "n\nat" "j \ n" + shows "j < n | j = n" + using leE[OF \j\n\,of "jSome results in ordinal arithmetic\ +text\The following results are auxiliary to the proof of +wellfoundedness of the relation \<^term>\frecR\\ + +lemma max_cong : + assumes "x \ y" "Ord(y)" "Ord(z)" shows "max(x,y) \ max(y,z)" + using assms +proof (cases "y \ z") + case True + then show ?thesis + unfolding max_def using assms by simp +next + case False + then have "z \ y" using assms not_le_iff_lt leI by simp + then show ?thesis + unfolding max_def using assms by simp +qed + +lemma max_commutes : + assumes "Ord(x)" "Ord(y)" + shows "max(x,y) = max(y,x)" + using assms Un_commute nat_simp_union(1) nat_simp_union(1)[symmetric] by auto + +lemma max_cong2 : + assumes "x \ y" "Ord(y)" "Ord(z)" "Ord(x)" + shows "max(x,z) \ max(y,z)" +proof - + from assms + have " x \ z \ y \ z" + using lt_Ord Ord_Un Un_mono[OF le_imp_subset[OF \x\y\]] subset_imp_le by auto + then show ?thesis + using nat_simp_union \Ord(x)\ \Ord(z)\ \Ord(y)\ by simp +qed + +lemma max_D1 : + assumes "x = y" "w < z" "Ord(x)" "Ord(w)" "Ord(z)" "max(x,w) = max(y,z)" + shows "z\y" +proof - + from assms + have "w < x \ w" using Un_upper2_lt[OF \w] assms nat_simp_union by simp + then + have "w < x" using assms lt_Un_iff[of x w w] lt_not_refl by auto + then + have "y = y \ z" using assms max_commutes nat_simp_union assms leI by simp + then + show ?thesis using Un_leD2 assms by simp +qed + +lemma max_D2 : + assumes "w = y \ w = z" "x < y" "Ord(x)" "Ord(w)" "Ord(y)" "Ord(z)" "max(x,w) = max(y,z)" + shows "x y" using Un_upper2_lt[OF \x] by simp + then + consider (a) "x < y" | (b) "x < w" + using assms nat_simp_union by simp + then show ?thesis proof (cases) + case a + consider (c) "w = y" | (d) "w = z" + using assms by auto + then show ?thesis proof (cases) + case c + with a show ?thesis by simp + next + case d + with a + show ?thesis + proof (cases "y x] by simp + next + case False + then + have "w \ y" + using not_lt_iff_le[OF assms(5) assms(4)] by simp + with \w=z\ + have "max(z,y) = y" unfolding max_def using assms by simp + with assms + have "... = x \ w" using nat_simp_union max_commutes by simp + then show ?thesis using le_Un_iff assms by blast + qed + qed + next + case b + then show ?thesis . 
+ qed +qed + +lemma oadd_lt_mono2 : + assumes "Ord(n)" "Ord(\)" "Ord(\)" "\ < \" "x < n" "y < n" "0 ++ x < n **\ ++ y" +proof - + consider (0) "\=0" | (s) \ where "Ord(\)" "\ = succ(\)" | (l) "Limit(\)" + using Ord_cases[OF \Ord(\)\,of ?thesis] by force + then show ?thesis + proof cases + case 0 + then show ?thesis using \\<\\ by auto + next + case s + then + have "\\\" using \\<\\ using leI by auto + then + have "n ** \ \ n ** \" using omult_le_mono[OF _ \\\\\] \Ord(n)\ by simp + then + have "n ** \ ++ x < n ** \ ++ n" using oadd_lt_mono[OF _ \x] by simp + also + have "... = n ** \" using \\=succ(_)\ omult_succ \Ord(\)\ \Ord(n)\ by simp + finally + have "n ** \ ++ x < n ** \" by auto + then + show ?thesis using oadd_le_self \Ord(\)\ lt_trans2 \Ord(n)\ by auto + next + case l + have "Ord(x)" using \x lt_Ord by simp + with l + have "succ(\) < \" using Limit_has_succ \\<\\ by simp + have "n ** \ ++ x < n ** \ ++ n" + using oadd_lt_mono[OF le_refl[OF Ord_omult[OF _ \Ord(\)\]] \x] \Ord(n)\ by simp + also + have "... = n ** succ(\)" using omult_succ \Ord(\)\ \Ord(n)\ by simp + finally + have "n ** \ ++ x < n ** succ(\)" by simp + with \succ(\) < \\ + have "n ** \ ++ x < n ** \" using lt_trans omult_lt_mono \Ord(n)\ \0 by auto + then show ?thesis using oadd_le_self \Ord(\)\ lt_trans2 \Ord(n)\ by auto + qed +qed +end diff --git a/thys/Forcing/Ordinals_In_MG.thy b/thys/Forcing/Ordinals_In_MG.thy new file mode 100644 --- /dev/null +++ b/thys/Forcing/Ordinals_In_MG.thy @@ -0,0 +1,55 @@ +section\Ordinals in generic extensions\ +theory Ordinals_In_MG + imports + Forcing_Theorems Relative_Univ +begin + +context G_generic +begin + +lemma rank_val: "rank(val(G,x)) \ rank(x)" (is "?Q(x)") +proof (induct rule:ed_induction[of ?Q]) + case (1 x) + have "val(G,x) = {val(G,u). u\{t\domain(x). \p\P . \t,p\\x \ p \ G }}" + using def_val unfolding Sep_and_Replace by blast + then + have "rank(val(G,x)) = (\u\{t\domain(x). \p\P . \t,p\\x \ p \ G }. succ(rank(val(G,u))))" + using rank[of "val(G,x)"] by simp + moreover + have "succ(rank(val(G, y))) \ rank(x)" if "ed(y, x)" for y + using 1[OF that] rank_ed[OF that] by (auto intro:lt_trans1) + moreover from this + have "(\u\{t\domain(x). \p\P . \t,p\\x \ p \ G }. succ(rank(val(G,u)))) \ rank(x)" + by (rule_tac UN_least_le) (auto) + ultimately + show ?case by simp +qed + +lemma Ord_MG_iff: + assumes "Ord(\)" + shows "\ \ M \ \ \ M[G]" +proof + show "\ \ M \ \ \ M[G]" + using generic[THEN one_in_G, THEN M_subset_MG] .. 
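+    (* forward direction: M is included in M[G] because the generic filter contains one, so ordinals of M are ordinals of M[G] *)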
+next + assume "\ \ M[G]" + then + obtain x where "x\M" "val(G,x) = \" + using GenExtD by auto + then + have "rank(\) \ rank(x)" + using rank_val by blast + with assms + have "\ \ rank(x)" + using rank_of_Ord by simp + then + have "\ \ succ(rank(x))" using ltD by simp + with \x\M\ + show "\ \ M" + using cons_closed transitivity[of \ "succ(rank(x))"] + rank_closed unfolding succ_def by simp +qed + +end (* G_generic *) + +end \ No newline at end of file diff --git a/thys/Forcing/Pairing_Axiom.thy b/thys/Forcing/Pairing_Axiom.thy new file mode 100644 --- /dev/null +++ b/thys/Forcing/Pairing_Axiom.thy @@ -0,0 +1,44 @@ +section\The Axiom of Pairing in $M[G]$\ +theory Pairing_Axiom imports Names begin + +context forcing_data +begin + +lemma val_Upair : + "one \ G \ val(G,{\\,one\,\\,one\}) = {val(G,\),val(G,\)}" + by (insert one_in_P, rule trans, subst def_val,auto simp add: Sep_and_Replace) + +lemma pairing_in_MG : + assumes "M_generic(G)" + shows "upair_ax(##M[G])" +proof - + { + fix x y + have "one\G" using assms one_in_G by simp + from assms + have "G\P" unfolding M_generic_def and filter_def by simp + with \one\G\ + have "one\P" using subsetD by simp + then + have "one\M" using transitivity[OF _ P_in_M] by simp + assume "x \ M[G]" "y \ M[G]" + then + obtain \ \ where + 0 : "val(G,\) = x" "val(G,\) = y" "\ \ M" "\ \ M" + using GenExtD by blast + with \one\M\ + have "\\,one\ \ M" "\\,one\\M" using pair_in_M_iff by auto + then + have 1: "{\\,one\,\\,one\} \ M" (is "?\ \ _") using upair_in_M_iff by simp + then + have "val(G,?\) \ M[G]" using GenExtI by simp + with 1 + have "{val(G,\),val(G,\)} \ M[G]" using val_Upair assms one_in_G by simp + with 0 + have "{x,y} \ M[G]" by simp + } + then show ?thesis unfolding upair_ax_def upair_def by auto +qed + +end (* context forcing_data *) +end \ No newline at end of file diff --git a/thys/Forcing/Pointed_DC.thy b/thys/Forcing/Pointed_DC.thy new file mode 100644 --- /dev/null +++ b/thys/Forcing/Pointed_DC.thy @@ -0,0 +1,148 @@ +section\A pointed version of DC\ +theory Pointed_DC imports ZF.AC + +begin +txt\This proof of DC is from Moschovakis "Notes on Set Theory"\ + +consts dc_witness :: "i \ i \ i \ i \ i \ i" +primrec + wit0 : "dc_witness(0,A,a,s,R) = a" + witrec :"dc_witness(succ(n),A,a,s,R) = s`{x\A. \dc_witness(n,A,a,s,R),x\\R }" + +lemma witness_into_A [TC]: + assumes "a\A" + "(\X . X\0 \ X\A \ s`X\X)" + "\y\A. {x\A. \y,x\\R } \ 0" "n\nat" + shows "dc_witness(n, A, a, s, R)\A" + using \n\nat\ +proof(induct n) + case 0 + then show ?case using \a\A\ by simp +next + case (succ x) + then + show ?case using assms by auto +qed + +lemma witness_related : + assumes "a\A" + "(\X . X\0 \ X\A \ s`X\X)" + "\y\A. {x\A. \y,x\\R } \ 0" "n\nat" + shows "\dc_witness(n, A, a, s, R),dc_witness(succ(n), A, a, s, R)\\R" +proof - + from assms + have "dc_witness(n, A, a, s, R)\A" (is "?x \ A") + using witness_into_A[of _ _ s R n] by simp + with assms + show ?thesis by auto +qed + +lemma witness_funtype: + assumes "a\A" + "(\X . X\0 \ X\A \ s`X\X)" + "\y\A. {x\A. \y,x\\R } \ 0" + shows "(\n\nat. dc_witness(n, A, a, s, R)) \ nat \ A" (is "?f \ _ \ _") +proof - + have "?f \ nat \ {dc_witness(n, A, a, s, R). n\nat}" (is "_ \ _ \ ?B") + using lam_funtype assms by simp + then + have "?B \ A" + using witness_into_A assms by auto + with \?f \ _\ + show ?thesis + using fun_weaken_type + by simp +qed + +lemma witness_to_fun: assumes "a\A" + "(\X . X\0 \ X\A \ s`X\X)" + "\y\A. {x\A. \y,x\\R } \ 0" +shows "\f \ nat\A. \n\nat. 
f`n =dc_witness(n,A,a,s,R)" + using assms bexI[of _ "\n\nat. dc_witness(n,A,a,s,R)"] witness_funtype + by simp + +theorem pointed_DC : + assumes "(\x\A. \y\A. \x,y\\ R)" + shows "\a\A. (\f \ nat\A. f`0 = a \ (\n \ nat. \f`n,f`succ(n)\\R))" +proof - + have 0:"\y\A. {x \ A . \y, x\ \ R} \ 0" + using assms by auto + from AC_func_Pow[of A] + obtain g + where 1: "g \ Pow(A) - {0} \ A" + "\X. X \ 0 \ X \ A \ g ` X \ X" + by auto + let ?f ="\a.\n\nat. dc_witness(n,A,a,g,R)" + { + fix a + assume "a\A" + from \a\A\ + have f0: "?f(a)`0 = a" by simp + with \a\A\ + have "\?f(a) ` n, ?f(a) ` succ(n)\ \ R" if "n\nat" for n + using witness_related[OF \a\A\ 1(2) 0] beta that by simp + then + have "\f\nat \ A. f ` 0 = a \ (\n\nat. \f ` n, f ` succ(n)\ \ R)" (is "\x\_ .?P(x)") + using f0 witness_funtype 0 1 \a\_\ by blast + } + then show ?thesis by auto +qed + +lemma aux_DC_on_AxNat2 : "\x\A\nat. \y\A. \x,\y,succ(snd(x))\\ \ R \ + \x\A\nat. \y\A\nat. \x,y\ \ {\a,b\\R. snd(b) = succ(snd(a))}" + by (rule ballI, erule_tac x="x" in ballE, simp_all) + +lemma infer_snd : "c\ A\B \ snd(c) = k \ c=\fst(c),k\" + by auto + +corollary DC_on_A_x_nat : + assumes "(\x\A\nat. \y\A. \x,\y,succ(snd(x))\\ \ R)" "a\A" + shows "\f \ nat\A. f`0 = a \ (\n \ nat. \\f`n,n\,\f`succ(n),succ(n)\\\R)" (is "\x\_.?P(x)") +proof - + let ?R'="{\a,b\\R. snd(b) = succ(snd(a))}" + from assms(1) + have "\x\A\nat. \y\A\nat. \x,y\ \ ?R'" + using aux_DC_on_AxNat2 by simp + with \a\_\ + obtain f where + F:"f\nat\A\nat" "f ` 0 = \a,0\" "\n\nat. \f ` n, f ` succ(n)\ \ ?R'" + using pointed_DC[of "A\nat" ?R'] by blast + let ?f="\x\nat. fst(f`x)" + from F + have "?f\nat\A" "?f ` 0 = a" by auto + have 1:"n\ nat \ f`n= \?f`n, n\" for n + proof(induct n set:nat) + case 0 + then show ?case using F by simp + next + case (succ x) + then + have "\f`x, f`succ(x)\ \ ?R'" "f`x \ A\nat" "f`succ(x)\A\nat" + using F by simp_all + then + have "snd(f`succ(x)) = succ(snd(f`x))" by simp + with succ \f`x\_\ + show ?case using infer_snd[OF \f`succ(_)\_\] by auto + qed + have "\\?f`n,n\,\?f`succ(n),succ(n)\\ \ R" if "n\nat" for n + using that 1[of "succ(n)"] 1[OF \n\_\] F(3) by simp + with \f`0=\a,0\\ + show ?thesis using rev_bexI[OF \?f\_\] by simp +qed + +lemma aux_sequence_DC : + assumes "\x\A. \n\nat. \y\A. \x,y\ \ S`n" + "R={\\x,n\,\y,m\\ \ (A\nat)\(A\nat). \x,y\\S`m }" + shows "\ x\A\nat . \y\A. \x,\y,succ(snd(x))\\ \ R" + using assms Pair_fst_snd_eq by auto + +lemma aux_sequence_DC2 : "\x\A. \n\nat. \y\A. \x,y\ \ S`n \ + \x\A\nat. \y\A. \x,\y,succ(snd(x))\\ \ {\\x,n\,\y,m\\\(A\nat)\(A\nat). \x,y\\S`m }" + by auto + +lemma sequence_DC: + assumes "\x\A. \n\nat. \y\A. \x,y\ \ S`n" + shows "\a\A. (\f \ nat\A. f`0 = a \ (\n \ nat. \f`n,f`succ(n)\\S`succ(n)))" + by (rule ballI,insert assms,drule aux_sequence_DC2, drule DC_on_A_x_nat, auto) + +end \ No newline at end of file diff --git a/thys/Forcing/Powerset_Axiom.thy b/thys/Forcing/Powerset_Axiom.thy new file mode 100644 --- /dev/null +++ b/thys/Forcing/Powerset_Axiom.thy @@ -0,0 +1,303 @@ +section\The Powerset Axiom in $M[G]$\ +theory Powerset_Axiom + imports Renaming_Auto Separation_Axiom Pairing_Axiom Union_Axiom +begin + +simple_rename "perm_pow" src "[ss,p,l,o,fs,\]" tgt "[fs,ss,sp,p,l,o,\]" + +lemma Collect_inter_Transset: + assumes + "Transset(M)" "b \ M" + shows + "{x\b . P(x)} = {x\b . 
P(x)} \ M" + using assms unfolding Transset_def + by (auto) + +context G_generic begin + +lemma name_components_in_M: + assumes "<\,p>\\" "\ \ M" + shows "\\M" "p\M" +proof - + from assms obtain a where + "\ \ a" "p \ a" "a\<\,p>" + unfolding Pair_def by auto + moreover from assms + have "<\,p>\M" + using transitivity by simp + moreover from calculation + have "a\M" + using transitivity by simp + ultimately + show "\\M" "p\M" + using transitivity by simp_all +qed + +lemma sats_fst_snd_in_M: + assumes + "A\M" "B\M" "\ \ formula" "p\M" "l\M" "o\M" "\\M" + "arity(\) \ 6" + shows + "{sq \A\B . sats(M,\,[snd(sq),p,l,o,fst(sq),\])} \ M" + (is "?\ \ M") +proof - + have "6\nat" "7\nat" by simp_all + let ?\' = "ren(\)`6`7`perm_pow_fn" + from \A\M\ \B\M\ have + "A\B \ M" + using cartprod_closed by simp + from \arity(\) \ 6\ \\\ formula\ \6\_\ \7\_\ + have "?\' \ formula" "arity(?\')\7" + unfolding perm_pow_fn_def + using perm_pow_thm arity_ren ren_tc Nil_type + by auto + with \?\' \ formula\ + have 1: "arity(Exists(Exists(And(pair_fm(0,1,2),?\'))))\5" (is "arity(?\)\5") + unfolding pair_fm_def upair_fm_def + using nat_simp_union pred_le arity_type by auto + { + fix sp + note \A\B \ M\ + moreover + assume "sp \ A\B" + moreover from calculation + have "fst(sp) \ A" "snd(sp) \ B" + using fst_type snd_type by simp_all + ultimately + have "sp \ M" "fst(sp) \ M" "snd(sp) \ M" + using \A\M\ \B\M\ transitivity + by simp_all + note inM = \A\M\ \B\M\ \p\M\ \l\M\ \o\M\ \\\M\ + \sp\M\ \fst(sp)\M\ \snd(sp)\M\ + with 1 \sp \ M\ \?\' \ formula\ + have "M, [sp,p,l,o,\]@[p] \ ?\ \ M,[sp,p,l,o,\] \ ?\" (is "M,?env0@ _\_ \ _") + using arity_sats_iff[of ?\ "[p]" M ?env0] by auto + also from inM \sp \ A\B\ + have "... \ sats(M,?\',[fst(sp),snd(sp),sp,p,l,o,\])" + by auto + also from inM \\ \ formula\ \arity(\) \ 6\ + have "... \ sats(M,\,[snd(sp),p,l,o,fst(sp),\])" + (is "sats(_,_,?env1) \ sats(_,_,?env2)") + using sats_iff_sats_ren[of \ 6 7 ?env2 M ?env1 perm_pow_fn] perm_pow_thm + unfolding perm_pow_fn_def by simp + finally + have "sats(M,?\,[sp,p,l,o,\,p]) \ sats(M,\,[snd(sp),p,l,o,fst(sp),\])" + by simp + } + then have + "?\ = {sp\A\B . sats(M,?\,[sp,p,l,o,\,p])}" + by auto + also from assms \A\B\M\ have + " ... \ M" + proof - + from 1 + have "arity(?\) \ 6" + using leI by simp + moreover from \?\' \ formula\ + have "?\ \ formula" + by simp + moreover note assms \A\B\M\ + ultimately + show "{x \ A\B . sats(M, ?\, [x, p, l, o, \, p])} \ M" + using separation_ax separation_iff + by simp + qed + finally show ?thesis . +qed + +lemma Pow_inter_MG: + assumes + "a\M[G]" + shows + "Pow(a) \ M[G] \ M[G]" +proof - + from assms obtain \ where + "\ \ M" "val(G, \) = a" + using GenExtD by auto + let ?Q="Pow(domain(\)\P) \ M" + from \\\M\ + have "domain(\)\P \ M" "domain(\) \ M" + using domain_closed cartprod_closed P_in_M + by simp_all + then + have "?Q \ M" + proof - + from power_ax \domain(\)\P \ M\ obtain Q where + "powerset(##M,domain(\)\P,Q)" "Q \ M" + unfolding power_ax_def by auto + moreover from calculation + have "z\Q \ z\M" for z + using transitivity by blast + ultimately + have "Q = {a\Pow(domain(\)\P) . a\M}" + using \domain(\)\P \ M\ powerset_abs[of "domain(\)\P" Q] + by (simp flip: setclass_iff) + also + have " ... 
= ?Q" + by auto + finally + show ?thesis using \Q\M\ by simp + qed + let + ?\="?Q\{one}" + let + ?b="val(G,?\)" + from \?Q\M\ + have "?\\M" + using one_in_P P_in_M transitivity + by (simp flip: setclass_iff) + from \?\\M\ + have "?b \ M[G]" + using GenExtI by simp + have "Pow(a) \ M[G] \ ?b" + proof + fix c + assume "c \ Pow(a) \ M[G]" + then obtain \ where + "c\M[G]" "\ \ M" "val(G,\) = c" + using GenExtD by auto + let ?\="{sp \domain(\)\P . snd(sp) \ (Member(0,1)) [fst(sp),\] }" + have "arity(forces(Member(0,1))) = 6" + using arity_forces_at by auto + with \domain(\) \ M\ \\ \ M\ + have "?\ \ M" + using P_in_M one_in_M leq_in_M sats_fst_snd_in_M + by simp + then + have "?\ \ ?Q" + by auto + then + have "val(G,?\) \ ?b" + using one_in_G one_in_P generic val_of_elem [of ?\ one ?\ G] + by auto + have "val(G,?\) = c" + proof(intro equalityI subsetI) + fix x + assume "x \ val(G,?\)" + then obtain \ p where + 1: "<\,p>\?\" "p\G" "val(G,\) = x" + using elem_of_val_pair + by blast + moreover from \<\,p>\?\\ \?\ \ M\ + have "\\M" + using name_components_in_M[of _ _ ?\] by auto + moreover from 1 + have "(p \ (Member(0,1)) [\,\])" "p\P" + by simp_all + moreover + note \val(G,\) = c\ + ultimately + have "sats(M[G],Member(0,1),[x,c])" + using \\ \ M\ generic definition_of_forcing nat_simp_union + by auto + moreover + have "x\M[G]" + using \val(G,\) = x\ \\\M\ \\\M\ GenExtI by blast + ultimately + show "x\c" + using \c\M[G]\ by simp + next + fix x + assume "x \ c" + with \c \ Pow(a) \ M[G]\ + have "x \ a" "c\M[G]" "x\M[G]" + using transitivity_MG + by auto + with \val(G, \) = a\ + obtain \ where + "\\domain(\)" "val(G,\) = x" + using elem_of_val + by blast + moreover note \x\c\ \val(G,\) = c\ + moreover from calculation + have "val(G,\) \ val(G,\)" + by simp + moreover note \c\M[G]\ \x\M[G]\ + moreover from calculation + have "sats(M[G],Member(0,1),[x,c])" + by simp + moreover + have "Member(0,1)\formula" by simp + moreover + have "\\M" + proof - + from \\\domain(\)\ + obtain p where "<\,p> \ \" + by auto + with \\\M\ + show ?thesis + using name_components_in_M by blast + qed + moreover note \\ \ M\ + ultimately + obtain p where "p\G" "(p \ Member(0,1) [\,\])" + using generic truth_lemma[of "Member(0,1)" "G" "[\,\]" ] nat_simp_union + by auto + moreover from \p\G\ + have "p\P" + using generic unfolding M_generic_def filter_def by blast + ultimately + have "<\,p>\?\" + using \\\domain(\)\ by simp + with \val(G,\) = x\ \p\G\ + show "x\val(G,?\)" + using val_of_elem [of _ _ "?\"] by auto + qed + with \val(G,?\) \ ?b\ + show "c\?b" by simp + qed + then + have "Pow(a) \ M[G] = {x\?b . x\a & x\M[G]}" + by auto + also from \a\M[G]\ + have " ... = {x\?b . sats(M[G],subset_fm(0,1),[x,a]) & x\M[G]}" + using Transset_MG by force + also + have " ... = {x\?b . sats(M[G],subset_fm(0,1),[x,a])} \ M[G]" + by auto + also from \?b\M[G]\ + have " ... = {x\?b . sats(M[G],subset_fm(0,1),[x,a])}" + using Collect_inter_Transset Transset_MG + by simp + also from \?b\M[G]\ \a\M[G]\ + have " ... \ M[G]" + using Collect_sats_in_MG GenExtI nat_simp_union by simp + finally show ?thesis . 
+qed +end (* context: G_generic *) + + +context G_generic begin + +interpretation mgtriv: M_trivial "##M[G]" + using generic Union_MG pairing_in_MG zero_in_MG transitivity_MG + unfolding M_trivial_def M_trans_def M_trivial_axioms_def by (simp; blast) + + +theorem power_in_MG : "power_ax(##(M[G]))" + unfolding power_ax_def +proof (intro rallI, simp only:setclass_iff rex_setclass_is_bex) + (* After simplification, we have to show that for every + a\M[G] there exists some x\M[G] with powerset(##M[G],a,x) + *) + fix a + assume "a \ M[G]" + then + have "(##M[G])(a)" by simp + have "{x\Pow(a) . x \ M[G]} = Pow(a) \ M[G]" + by auto + also from \a\M[G]\ + have " ... \ M[G]" + using Pow_inter_MG by simp + finally + have "{x\Pow(a) . x \ M[G]} \ M[G]" . + moreover from \a\M[G]\ \{x\Pow(a) . x \ M[G]} \ _\ + have "powerset(##M[G], a, {x\Pow(a) . x \ M[G]})" + using mgtriv.powerset_abs[OF \(##M[G])(a)\] + by simp + ultimately + show "\x\M[G] . powerset(##M[G], a, x)" + by auto +qed +end (* context: G_generic *) +end \ No newline at end of file diff --git a/thys/Forcing/Proper_Extension.thy b/thys/Forcing/Proper_Extension.thy new file mode 100644 --- /dev/null +++ b/thys/Forcing/Proper_Extension.thy @@ -0,0 +1,72 @@ +section\Separative notions and proper extensions\ +theory Proper_Extension + imports + Names + +begin + +text\The key ingredient to obtain a proper extension is to have +a \<^emph>\separative preorder\:\ + +locale separative_notion = forcing_notion + + assumes separative: "p\P \ \q\P. \r\P. q \ p \ r \ p \ q \ r" +begin + +text\For separative preorders, the complement of every filter is +dense. Hence an $M$-generic filter can't belong to the ground model.\ + +lemma filter_complement_dense: + assumes "filter(G)" shows "dense(P - G)" +proof + fix p + assume "p\P" + show "\d\P - G. d \ p" + proof (cases "p\G") + case True + note \p\P\ assms + moreover + obtain q r where "q \ p" "r \ p" "q \ r" "q\P" "r\P" + using separative[OF \p\P\] + by force + with \filter(G)\ + obtain s where "s \ p" "s \ G" "s \ P" + using filter_imp_compat[of G q r] + by auto + then + show ?thesis by blast + next + case False + with \p\P\ + show ?thesis using leq_reflI unfolding Diff_def by auto + qed +qed + +end (* separative_notion *) + +locale ctm_separative = forcing_data + separative_notion +begin + +lemma generic_not_in_M: assumes "M_generic(G)" shows "G \ M" +proof + assume "G\M" + then + have "P - G \ M" + using P_in_M Diff_closed by simp + moreover + have "\(\q\G. q \ P - G)" "(P - G) \ P" + unfolding Diff_def by auto + moreover + note assms + ultimately + show "False" + using filter_complement_dense[of G] M_generic_denseD[of G "P-G"] + M_generic_def by simp \ \need to put generic ==> filter in claset\ +qed + +theorem proper_extension: assumes "M_generic(G)" shows "M \ M[G]" + using assms G_in_Gen_Ext[of G] one_in_G[of G] generic_not_in_M + by force + +end (* ctm_separative *) + +end \ No newline at end of file diff --git a/thys/Forcing/ROOT b/thys/Forcing/ROOT new file mode 100644 --- /dev/null +++ b/thys/Forcing/ROOT @@ -0,0 +1,19 @@ +chapter AFP + +session Forcing (AFP) = "ZF-Constructible" + + description " + Formalization of Forcing in Isabelle/ZF + + We formalize the theory of forcing in the set theory framework of + Isabelle/ZF. Under the assumption of the existence of a countable + transitive model of ZFC, we construct a proper generic extension + and show that the latter also satisfies ZFC. 
+ " + options [timeout=300] + theories + "Rasiowa_Sikorski" + "Forcing_Main" + document_files + "root.tex" + "root.bib" + "root.bst" diff --git a/thys/Forcing/Rasiowa_Sikorski.thy b/thys/Forcing/Rasiowa_Sikorski.thy new file mode 100644 --- /dev/null +++ b/thys/Forcing/Rasiowa_Sikorski.thy @@ -0,0 +1,52 @@ +section\The general Rasiowa-Sikorski lemma\ +theory Rasiowa_Sikorski imports Forcing_Notions Pointed_DC begin + +context countable_generic +begin + +lemma RS_relation: + assumes "p\P" "n\nat" + shows "\y\P. \p,y\ \ (\m\nat. {\x,y\\P\P. y\x \ y\\`(pred(m))})`n" +proof - + from seq_of_denses \n\nat\ + have "dense(\ ` pred(n))" by simp + with \p\P\ + have "\d\\ ` Arith.pred(n). d\ p" + unfolding dense_def by simp + then obtain d where 3: "d \ \ ` Arith.pred(n) \ d\ p" + by blast + from countable_subs_of_P \n\nat\ + have "\ ` Arith.pred(n) \ Pow(P)" + by (blast dest:apply_funtype intro:pred_type) + then + have "\ ` Arith.pred(n) \ P" + by (rule PowD) + with 3 + have "d \ P \ d\ p \ d \ \ ` Arith.pred(n)" + by auto + with \p\P\ \n\nat\ + show ?thesis by auto +qed + +lemma DC_imp_RS_sequence: + assumes "p\P" + shows "\f. f: nat\P \ f ` 0 = p \ + (\n\nat. f ` succ(n)\ f ` n \ f ` succ(n) \ \ ` n)" +proof - + let ?S="(\m\nat. {\x,y\\P\P. y\x \ y\\`(pred(m))})" + have "\x\P. \n\nat. \y\P. \x,y\ \ ?S`n" + using RS_relation by (auto) + then + have "\a\P. (\f \ nat\P. f`0 = a \ (\n \ nat. \f`n,f`succ(n)\\?S`succ(n)))" + using sequence_DC by (blast) + with \p\P\ + show ?thesis by auto +qed + +theorem rasiowa_sikorski: + "p\P \ \G. p\G \ D_generic(G)" + using RS_sequence_imp_rasiowa_sikorski by (auto dest:DC_imp_RS_sequence) + +end (* countable_generic *) + +end \ No newline at end of file diff --git a/thys/Forcing/Recursion_Thms.thy b/thys/Forcing/Recursion_Thms.thy new file mode 100644 --- /dev/null +++ b/thys/Forcing/Recursion_Thms.thy @@ -0,0 +1,226 @@ +section\Some enhanced theorems on recursion\ + +theory Recursion_Thms imports ZF.Epsilon begin + +text\We prove results concerning definitions by well-founded +recursion on some relation \<^term>\R\ and its transitive closure +\<^term>\R^*\\ + (* Restrict the relation r to the field A*A *) + +lemma fld_restrict_eq : "a \ A \ (r \ A\A)-``{a} = (r-``{a} \ A)" + by(force) + +lemma fld_restrict_mono : "relation(r) \ A \ B \ r \ A\A \ r \ B\B" + by(auto) + +lemma fld_restrict_dom : + assumes "relation(r)" "domain(r) \ A" "range(r)\ A" + shows "r\ A\A = r" +proof (rule equalityI,blast,rule subsetI) + { fix x + assume xr: "x \ r" + from xr assms have "\ a b . x = \a,b\" by (simp add: relation_def) + then obtain a b where "\a,b\ \ r" "\a,b\ \ r\A\A" "x \ r\A\A" + using assms xr + by force + then have "x\ r \ A\A" by simp + } + then show "x \ r \ x\ r\A\A" for x . 
+qed + +definition tr_down :: "[i,i] \ i" + where "tr_down(r,a) = (r^+)-``{a}" + +lemma tr_downD : "x \ tr_down(r,a) \ \x,a\ \ r^+" + by (simp add: tr_down_def vimage_singleton_iff) + +lemma pred_down : "relation(r) \ r-``{a} \ tr_down(r,a)" + by(simp add: tr_down_def vimage_mono r_subset_trancl) + +lemma tr_down_mono : "relation(r) \ x \ r-``{a} \ tr_down(r,x) \ tr_down(r,a)" + by(rule subsetI,simp add:tr_down_def,auto dest: underD,force simp add: underI r_into_trancl trancl_trans) + +lemma rest_eq : + assumes "relation(r)" and "r-``{a} \ B" and "a \ B" + shows "r-``{a} = (r\B\B)-``{a}" +proof (intro equalityI subsetI) + fix x + assume "x \ r-``{a}" + then + have "x \ B" using assms by (simp add: subsetD) + from \x\ r-``{a}\ + have "\x,a\ \ r" using underD by simp + then + show "x \ (r\B\B)-``{a}" using \x\B\ \a\B\ underI by simp +next + from assms + show "x \ r -`` {a}" if "x \ (r \ B\B) -`` {a}" for x + using vimage_mono that by auto +qed + +lemma wfrec_restr_eq : "r' = r \ A\A \ wfrec[A](r,a,H) = wfrec(r',a,H)" + by(simp add:wfrec_on_def) + +lemma wfrec_restr : + assumes rr: "relation(r)" and wfr:"wf(r)" + shows "a \ A \ tr_down(r,a) \ A \ wfrec(r,a,H) = wfrec[A](r,a,H)" +proof (induct a arbitrary:A rule:wf_induct_raw[OF wfr] ) + case (1 a) + have wfRa : "wf[A](r)" + using wf_subset wfr wf_on_def Int_lower1 by simp + from pred_down rr + have "r -`` {a} \ tr_down(r, a)" . + with 1 + have "r-``{a} \ A" by (force simp add: subset_trans) + { + fix x + assume x_a : "x \ r-``{a}" + with \r-``{a} \ A\ + have "x \ A" .. + from pred_down rr + have b : "r -``{x} \ tr_down(r,x)" . + then + have "tr_down(r,x) \ tr_down(r,a)" + using tr_down_mono x_a rr by simp + with 1 + have "tr_down(r,x) \ A" using subset_trans by force + have "\x,a\ \ r" using x_a underD by simp + with 1 \tr_down(r,x) \ A\ \x \ A\ + have "wfrec(r,x,H) = wfrec[A](r,x,H)" by simp + } + then + have "x\ r-``{a} \ wfrec(r,x,H) = wfrec[A](r,x,H)" for x . + then + have Eq1 :"(\ x \ r-``{a} . wfrec(r,x,H)) = (\ x \ r-``{a} . wfrec[A](r,x,H))" + using lam_cong by simp + + from assms + have "wfrec(r,a,H) = H(a,\ x \ r-``{a} . wfrec(r,x,H))" by (simp add:wfrec) + also + have "... = H(a,\ x \ r-``{a} . wfrec[A](r,x,H))" + using assms Eq1 by simp + also from 1 \r-``{a} \ A\ + have "... = H(a,\ x \ (r\A\A)-``{a} . wfrec[A](r,x,H))" + using assms rest_eq by simp + also from \a\A\ + have "... = H(a,\ x \ (r-``{a})\A . wfrec[A](r,x,H))" + using fld_restrict_eq by simp + also from \a\A\ \wf[A](r)\ + have "... = wfrec[A](r,a,H)" using wfrec_on by simp + finally show ?case . +qed + +lemmas wfrec_tr_down = wfrec_restr[OF _ _ _ subset_refl] + +lemma wfrec_trans_restr : "relation(r) \ wf(r) \ trans(r) \ r-``{a}\A \ a \ A \ + wfrec(r, a, H) = wfrec[A](r, a, H)" + by(subgoal_tac "tr_down(r,a) \ A",auto simp add : wfrec_restr tr_down_def trancl_eq_r) + + +lemma field_trancl : "field(r^+) = field(r)" + by (blast intro: r_into_trancl dest!: trancl_type [THEN subsetD]) + +definition + Rrel :: "[i\i\o,i] \ i" where + "Rrel(R,A) \ {z\A\A. \x y. z = \x, y\ \ R(x,y)}" + +lemma RrelI : "x \ A \ y \ A \ R(x,y) \ \x,y\ \ Rrel(R,A)" + unfolding Rrel_def by simp + +lemma Rrel_mem: "Rrel(mem,x) = Memrel(x)" + unfolding Rrel_def Memrel_def .. 
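+(* Rrel(R,A) is the set relation on A induced by the predicate R; for R = mem it coincides with Memrel, as the previous lemma shows *)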
+ +lemma relation_Rrel: "relation(Rrel(R,d))" + unfolding Rrel_def relation_def by simp + +lemma field_Rrel: "field(Rrel(R,d)) \ d" + unfolding Rrel_def by auto + +lemma Rrel_mono : "A \ B \ Rrel(R,A) \ Rrel(R,B)" + unfolding Rrel_def by blast + +lemma Rrel_restr_eq : "Rrel(R,A) \ B\B = Rrel(R,A\B)" + unfolding Rrel_def by blast + +(* now a consequence of the previous lemmas *) +lemma field_Memrel : "field(Memrel(A)) \ A" + (* unfolding field_def using Ordinal.Memrel_type by blast *) + using Rrel_mem field_Rrel by blast + +lemma restrict_trancl_Rrel: + assumes "R(w,y)" + shows "restrict(f,Rrel(R,d)-``{y})`w + = restrict(f,(Rrel(R,d)^+)-``{y})`w" +proof (cases "y\d") + let ?r="Rrel(R,d)" and ?s="(Rrel(R,d))^+" + case True + show ?thesis + proof (cases "w\d") + case True + with \y\d\ assms + have "\w,y\\?r" + unfolding Rrel_def by blast + then + have "\w,y\\?s" + using r_subset_trancl[of ?r] relation_Rrel[of R d] by blast + with \\w,y\\?r\ + have "w\?r-``{y}" "w\?s-``{y}" + using vimage_singleton_iff by simp_all + then + show ?thesis by simp + next + case False + then + have "w\domain(restrict(f,?r-``{y}))" + using subsetD[OF field_Rrel[of R d]] by auto + moreover from \w\d\ + have "w\domain(restrict(f,?s-``{y}))" + using subsetD[OF field_Rrel[of R d], of w] field_trancl[of ?r] + fieldI1[of w y ?s] by auto + ultimately + have "restrict(f,?r-``{y})`w = 0" "restrict(f,?s-``{y})`w = 0" + unfolding apply_def by auto + then show ?thesis by simp + qed +next + let ?r="Rrel(R,d)" + let ?s="?r^+" + case False + then + have "?r-``{y}=0" + unfolding Rrel_def by blast + then + have "w\?r-``{y}" by simp + with \y\d\ assms + have "y\field(?s)" + using field_trancl subsetD[OF field_Rrel[of R d]] by force + then + have "w\?s-``{y}" + using vimage_singleton_iff by blast + with \w\?r-``{y}\ + show ?thesis by simp +qed + +lemma restrict_trans_eq: + assumes "w \ y" + shows "restrict(f,Memrel(eclose({x}))-``{y})`w + = restrict(f,(Memrel(eclose({x}))^+)-``{y})`w" + using assms restrict_trancl_Rrel[of mem ] Rrel_mem by (simp) + +lemma wf_eq_trancl: + assumes "\ f y . H(y,restrict(f,R-``{y})) = H(y,restrict(f,R^+-``{y}))" + shows "wfrec(R, x, H) = wfrec(R^+, x, H)" (is "wfrec(?r,_,_) = wfrec(?r',_,_)") +proof - + have "wfrec(R, x, H) = wftrec(?r^+, x, \y f. H(y, restrict(f,?r-``{y})))" + unfolding wfrec_def .. + also + have " ... = wftrec(?r^+, x, \y f. H(y, restrict(f,(?r^+)-``{y})))" + using assms by simp + also + have " ... = wfrec(?r^+, x, H)" + unfolding wfrec_def using trancl_eq_r[OF relation_trancl trans_trancl] by simp + finally + show ?thesis . +qed + +end diff --git a/thys/Forcing/Relative_Univ.thy b/thys/Forcing/Relative_Univ.thy new file mode 100644 --- /dev/null +++ b/thys/Forcing/Relative_Univ.thy @@ -0,0 +1,382 @@ +section\Relativization of the cumulative hierarchy\ +theory Relative_Univ + imports + "ZF-Constructible.Rank" + Internalizations + Recursion_Thms + +begin + +lemma (in M_trivial) powerset_abs' [simp]: + assumes + "M(x)" "M(y)" + shows + "powerset(M,x,y) \ y = {a\Pow(x) . M(a)}" + using powerset_abs assms by simp + +lemma Collect_inter_Transset: + assumes + "Transset(M)" "b \ M" + shows + "{x\b . P(x)} = {x\b . P(x)} \ M" + using assms unfolding Transset_def + by (auto) + +lemma (in M_trivial) family_union_closed: "\strong_replacement(M, \x y. y = f(x)); M(A); \x\A. M(f(x))\ + \ M(\x\A. f(x))" + using RepFun_closed .. + +(* "Vfrom(A,i) \ transrec(i, %x f. A \ (\y\x. Pow(f`y)))" *) +(* HVfrom is *not* the recursive step for Vfrom. 
It is the + relativized version *) +definition + HVfrom :: "[i\o,i,i,i] \ i" where + "HVfrom(M,A,x,f) \ A \ (\y\x. {a\Pow(f`y). M(a)})" + +(* z = Pow(f`y) *) +definition + is_powapply :: "[i\o,i,i,i] \ o" where + "is_powapply(M,f,y,z) \ M(z) \ (\fy[M]. fun_apply(M,f,y,fy) \ powerset(M,fy,z))" + +(* Trivial lemma *) +lemma is_powapply_closed: "is_powapply(M,f,y,z) \ M(z)" + unfolding is_powapply_def by simp + +(* is_Replace(M,A,P,z) \ \u[M]. u \ z \ (\x[M]. x\A & P(x,u)) *) +definition + is_HVfrom :: "[i\o,i,i,i,i] \ o" where + "is_HVfrom(M,A,x,f,h) \ \U[M]. \R[M]. union(M,A,U,h) + \ big_union(M,R,U) \ is_Replace(M,x,is_powapply(M,f),R)" + + +definition + is_Vfrom :: "[i\o,i,i,i] \ o" where + "is_Vfrom(M,A,i,V) \ is_transrec(M,is_HVfrom(M,A),i,V)" + +definition + is_Vset :: "[i\o,i,i] \ o" where + "is_Vset(M,i,V) \ \z[M]. empty(M,z) \ is_Vfrom(M,z,i,V)" + + +subsection\Formula synthesis\ + +schematic_goal sats_is_powapply_fm_auto: + assumes + "f\nat" "y\nat" "z\nat" "env\list(A)" "0\A" + shows + "is_powapply(##A,nth(f, env),nth(y, env),nth(z, env)) + \ sats(A,?ipa_fm(f,y,z),env)" + unfolding is_powapply_def is_Collect_def powerset_def subset_def + using nth_closed assms + by (simp) (rule sep_rules | simp)+ + +schematic_goal is_powapply_iff_sats: + assumes + "nth(f,env) = ff" "nth(y,env) = yy" "nth(z,env) = zz" "0\A" + "f \ nat" "y \ nat" "z \ nat" "env \ list(A)" + shows + "is_powapply(##A,ff,yy,zz) \ sats(A, ?is_one_fm(a,r), env)" + unfolding \nth(f,env) = ff\[symmetric] \nth(y,env) = yy\[symmetric] + \nth(z,env) = zz\[symmetric] + by (rule sats_is_powapply_fm_auto(1); simp add:assms) + +(* rank *) +definition + Hrank :: "[i,i] \ i" where + "Hrank(x,f) = (\y\x. succ(f`y))" + +definition + PHrank :: "[i\o,i,i,i] \ o" where + "PHrank(M,f,y,z) \ M(z) \ (\fy[M]. fun_apply(M,f,y,fy) \ successor(M,fy,z))" + +definition + is_Hrank :: "[i\o,i,i,i] \ o" where + "is_Hrank(M,x,f,hc) \ (\R[M]. big_union(M,R,hc) \is_Replace(M,x,PHrank(M,f),R)) " + +definition + rrank :: "i \ i" where + "rrank(a) \ Memrel(eclose({a}))^+" + +lemma (in M_eclose) wf_rrank : "M(x) \ wf(rrank(x))" + unfolding rrank_def using wf_trancl[OF wf_Memrel] . + +lemma (in M_eclose) trans_rrank : "M(x) \ trans(rrank(x))" + unfolding rrank_def using trans_trancl . + +lemma (in M_eclose) relation_rrank : "M(x) \ relation(rrank(x))" + unfolding rrank_def using relation_trancl . + +lemma (in M_eclose) rrank_in_M : "M(x) \ M(rrank(x))" + unfolding rrank_def by simp + + +subsection\Absoluteness results\ + +locale M_eclose_pow = M_eclose + + assumes + power_ax : "power_ax(M)" and + powapply_replacement : "M(f) \ strong_replacement(M,is_powapply(M,f))" and + HVfrom_replacement : "\ M(i) ; M(A) \ \ + transrec_replacement(M,is_HVfrom(M,A),i)" and + PHrank_replacement : "M(f) \ strong_replacement(M,PHrank(M,f))" and + is_Hrank_replacement : "M(x) \ wfrec_replacement(M,is_Hrank(M),rrank(x))" + +begin + +lemma is_powapply_abs: "\M(f); M(y)\ \ is_powapply(M,f,y,z) \ M(z) \ z = {x\Pow(f`y). M(x)}" + unfolding is_powapply_def by simp + +lemma "\M(A); M(x); M(f); M(h) \ \ + is_HVfrom(M,A,x,f,h) \ + (\R[M]. h = A \ \R \ is_Replace(M, x,\x y. y = {x \ Pow(f ` x) . M(x)}, R))" + using is_powapply_abs unfolding is_HVfrom_def by auto + +lemma Replace_is_powapply: + assumes + "M(R)" "M(A)" "M(f)" + shows + "is_Replace(M, A, is_powapply(M, f), R) \ R = Replace(A,is_powapply(M,f))" +proof - + have "univalent(M,A,is_powapply(M,f))" + using \M(A)\ \M(f)\ unfolding univalent_def is_powapply_def by simp + moreover + have "\x y. 
\ x\A; is_powapply(M,f,x,y) \ \ M(y)" + using \M(A)\ \M(f)\ unfolding is_powapply_def by simp + ultimately + show ?thesis using \M(A)\ \M(R)\ Replace_abs by simp +qed + +lemma powapply_closed: + "\ M(y) ; M(f) \ \ M({x \ Pow(f ` y) . M(x)})" + using apply_closed power_ax unfolding power_ax_def by simp + +lemma RepFun_is_powapply: + assumes + "M(R)" "M(A)" "M(f)" + shows + "Replace(A,is_powapply(M,f)) = RepFun(A,\y.{x\Pow(f`y). M(x)})" +proof - + have "{y . x \ A, M(y) \ y = {x \ Pow(f ` x) . M(x)}} = {y . x \ A, y = {x \ Pow(f ` x) . M(x)}}" + using assms powapply_closed transM[of _ A] by blast + also + have " ... = {{x \ Pow(f ` y) . M(x)} . y \ A}" by auto + finally + show ?thesis using assms is_powapply_abs transM[of _ A] by simp +qed + +lemma RepFun_powapply_closed: + assumes + "M(f)" "M(A)" + shows + "M(Replace(A,is_powapply(M,f)))" +proof - + have "univalent(M,A,is_powapply(M,f))" + using \M(A)\ \M(f)\ unfolding univalent_def is_powapply_def by simp + moreover + have "\ x\A ; is_powapply(M,f,x,y) \ \ M(y)" for x y + using assms unfolding is_powapply_def by simp + ultimately + show ?thesis using assms powapply_replacement by simp +qed + +lemma Union_powapply_closed: + assumes + "M(x)" "M(f)" + shows + "M(\y\x. {a\Pow(f`y). M(a)})" +proof - + have "M({a\Pow(f`y). M(a)})" if "y\x" for y + using that assms transM[of _ x] powapply_closed by simp + then + have "M({{a\Pow(f`y). M(a)}. y\x})" + using assms transM[of _ x] RepFun_powapply_closed RepFun_is_powapply by simp + then show ?thesis using assms by simp +qed + +lemma relation2_HVfrom: "M(A) \ relation2(M,is_HVfrom(M,A),HVfrom(M,A))" + unfolding is_HVfrom_def HVfrom_def relation2_def + using Replace_is_powapply RepFun_is_powapply + Union_powapply_closed RepFun_powapply_closed by auto + +lemma HVfrom_closed : + "M(A) \ \x[M]. \g[M]. function(g) \ M(HVfrom(M,A,x,g))" + unfolding HVfrom_def using Union_powapply_closed by simp + +lemma transrec_HVfrom: + assumes "M(A)" + shows "Ord(i) \ {x\Vfrom(A,i). M(x)} = transrec(i,HVfrom(M,A))" +proof (induct rule:trans_induct) + case (step i) + have "Vfrom(A,i) = A \ (\y\i. Pow((\x\i. Vfrom(A, x)) ` y))" + using def_transrec[OF Vfrom_def, of A i] by simp + then + have "Vfrom(A,i) = A \ (\y\i. Pow(Vfrom(A, y)))" + by simp + then + have "{x\Vfrom(A,i). M(x)} = {x\A. M(x)} \ (\y\i. {x\Pow(Vfrom(A, y)). M(x)})" + by auto + with \M(A)\ + have "{x\Vfrom(A,i). M(x)} = A \ (\y\i. {x\Pow(Vfrom(A, y)). M(x)})" + by (auto intro:transM) + also + have "... = A \ (\y\i. {x\Pow({z\Vfrom(A,y). M(z)}). M(x)})" + proof - + have "{x\Pow(Vfrom(A, y)). M(x)} = {x\Pow({z\Vfrom(A,y). M(z)}). M(x)}" + if "y\i" for y by (auto intro:transM) + then + show ?thesis by simp + qed + also from step + have " ... = A \ (\y\i. {x\Pow(transrec(y, HVfrom(M, A))). M(x)})" by auto + also + have " ... = transrec(i, HVfrom(M, A))" + using def_transrec[of "\y. transrec(y, HVfrom(M, A))" "HVfrom(M, A)" i,symmetric] + unfolding HVfrom_def by simp + finally + show ?case . +qed + +lemma Vfrom_abs: "\ M(A); M(i); M(V); Ord(i) \ \ is_Vfrom(M,A,i,V) \ V = {x\Vfrom(A,i). M(x)}" + unfolding is_Vfrom_def + using relation2_HVfrom HVfrom_closed HVfrom_replacement + transrec_abs[of "is_HVfrom(M,A)" i "HVfrom(M,A)"] transrec_HVfrom by simp + +lemma Vfrom_closed: "\ M(A); M(i); Ord(i) \ \ M({x\Vfrom(A,i). M(x)})" + unfolding is_Vfrom_def + using relation2_HVfrom HVfrom_closed HVfrom_replacement + transrec_closed[of "is_HVfrom(M,A)" i "HVfrom(M,A)"] transrec_HVfrom by simp + +lemma Vset_abs: "\ M(i); M(V); Ord(i) \ \ is_Vset(M,i,V) \ V = {x\Vset(i). 
M(x)}" + using Vfrom_abs unfolding is_Vset_def by simp + +lemma Vset_closed: "\ M(i); Ord(i) \ \ M({x\Vset(i). M(x)})" + using Vfrom_closed unfolding is_Vset_def by simp + +lemma Hrank_trancl:"Hrank(y, restrict(f,Memrel(eclose({x}))-``{y})) + = Hrank(y, restrict(f,(Memrel(eclose({x}))^+)-``{y}))" + unfolding Hrank_def + using restrict_trans_eq by simp + +lemma rank_trancl: "rank(x) = wfrec(rrank(x), x, Hrank)" +proof - + have "rank(x) = wfrec(Memrel(eclose({x})), x, Hrank)" + (is "_ = wfrec(?r,_,_)") + unfolding rank_def transrec_def Hrank_def by simp + also + have " ... = wftrec(?r^+, x, \y f. Hrank(y, restrict(f,?r-``{y})))" + unfolding wfrec_def .. + also + have " ... = wftrec(?r^+, x, \y f. Hrank(y, restrict(f,(?r^+)-``{y})))" + using Hrank_trancl by simp + also + have " ... = wfrec(?r^+, x, Hrank)" + unfolding wfrec_def using trancl_eq_r[OF relation_trancl trans_trancl] by simp + finally + show ?thesis unfolding rrank_def . +qed + +lemma univ_PHrank : "\ M(z) ; M(f) \ \ univalent(M,z,PHrank(M,f))" + unfolding univalent_def PHrank_def by simp + + +lemma PHrank_abs : + "\ M(f) ; M(y) \ \ PHrank(M,f,y,z) \ M(z) \ z = succ(f`y)" + unfolding PHrank_def by simp + +lemma PHrank_closed : "PHrank(M,f,y,z) \ M(z)" + unfolding PHrank_def by simp + +lemma Replace_PHrank_abs: + assumes + "M(z)" "M(f)" "M(hr)" + shows + "is_Replace(M,z,PHrank(M,f),hr) \ hr = Replace(z,PHrank(M,f))" +proof - + have "\x y. \x\z; PHrank(M,f,x,y) \ \ M(y)" + using \M(z)\ \M(f)\ unfolding PHrank_def by simp + then + show ?thesis using \M(z)\ \M(hr)\ \M(f)\ univ_PHrank Replace_abs by simp +qed + +lemma RepFun_PHrank: + assumes + "M(R)" "M(A)" "M(f)" + shows + "Replace(A,PHrank(M,f)) = RepFun(A,\y. succ(f`y))" +proof - + have "{z . y \ A, M(z) \ z = succ(f`y)} = {z . y \ A, z = succ(f`y)}" + using assms PHrank_closed transM[of _ A] by blast + also + have " ... = {succ(f`y) . y \ A}" by auto + finally + show ?thesis using assms PHrank_abs transM[of _ A] by simp +qed + +lemma RepFun_PHrank_closed : + assumes + "M(f)" "M(A)" + shows + "M(Replace(A,PHrank(M,f)))" +proof - + have "\ x\A ; PHrank(M,f,x,y) \ \ M(y)" for x y + using assms unfolding PHrank_def by simp + with univ_PHrank + show ?thesis using assms PHrank_replacement by simp +qed + +lemma relation2_Hrank : + "relation2(M,is_Hrank(M),Hrank)" + unfolding is_Hrank_def Hrank_def relation2_def + using Replace_PHrank_abs RepFun_PHrank RepFun_PHrank_closed by auto + + +lemma Union_PHrank_closed: + assumes + "M(x)" "M(f)" + shows + "M(\y\x. succ(f`y))" +proof - + have "M(succ(f`y))" if "y\x" for y + using that assms transM[of _ x] by simp + then + have "M({succ(f`y). y\x})" + using assms transM[of _ x] RepFun_PHrank_closed RepFun_PHrank by simp + then show ?thesis using assms by simp +qed + +lemma is_Hrank_closed : + "M(A) \ \x[M]. \g[M]. function(g) \ M(Hrank(x,g))" + unfolding Hrank_def using RepFun_PHrank_closed Union_PHrank_closed by simp + +lemma rank_closed: "M(a) \ M(rank(a))" + unfolding rank_trancl + using relation2_Hrank is_Hrank_closed is_Hrank_replacement + wf_rrank relation_rrank trans_rrank rrank_in_M + trans_wfrec_closed[of "rrank(a)" a "is_Hrank(M)"] by simp + + +lemma M_into_Vset: + assumes "M(a)" + shows "\i[M]. \V[M]. ordinal(M,i) \ is_Vfrom(M,0,i,V) \ a\V" +proof - + let ?i="succ(rank(a))" + from assms + have "a\{x\Vfrom(0,?i). 
M(x)}" (is "a\?V") + using Vset_Ord_rank_iff by simp + moreover from assms + have "M(?i)" + using rank_closed by simp + moreover + note \M(a)\ + moreover from calculation + have "M(?V)" + using Vfrom_closed by simp + moreover from calculation + have "ordinal(M,?i) \ is_Vfrom(M, 0, ?i, ?V) \ a \ ?V" + using Ord_rank Vfrom_abs by simp + ultimately + show ?thesis by blast +qed + +end +end \ No newline at end of file diff --git a/thys/Forcing/Renaming.thy b/thys/Forcing/Renaming.thy new file mode 100644 --- /dev/null +++ b/thys/Forcing/Renaming.thy @@ -0,0 +1,584 @@ +section\Renaming of variables in internalized formulas\ + +theory Renaming + imports + Nat_Miscellanea + "ZF-Constructible.Formula" +begin + +lemma app_nm : + assumes "n\nat" "m\nat" "f\n\m" "x \ nat" + shows "f`x \ nat" +proof(cases "x\n") + case True + then show ?thesis using assms in_n_in_nat apply_type by simp +next + case False + then show ?thesis using assms apply_0 domain_of_fun by simp +qed + +subsection\Renaming of free variables\ + +definition + union_fun :: "[i,i,i,i] \ i" where + "union_fun(f,g,m,p) \ \j \ m \ p . if j\m then f`j else g`j" + +lemma union_fun_type: + assumes "f \ m \ n" + "g \ p \ q" + shows "union_fun(f,g,m,p) \ m \ p \ n \ q" +proof - + let ?h="union_fun(f,g,m,p)" + have + D: "?h`x \ n \ q" if "x \ m \ p" for x + proof (cases "x \ m") + case True + then have + "x \ m \ p" by simp + with \x\m\ + have "?h`x = f`x" + unfolding union_fun_def beta by simp + with \f \ m \ n\ \x\m\ + have "?h`x \ n" by simp + then show ?thesis .. + next + case False + with \x \ m \ p\ + have "x \ p" + by auto + with \x\m\ + have "?h`x = g`x" + unfolding union_fun_def using beta by simp + with \g \ p \ q\ \x\p\ + have "?h`x \ q" by simp + then show ?thesis .. + qed + have A:"function(?h)" unfolding union_fun_def using function_lam by simp + have " x\ (m \ p) \ (n \ q)" if "x\ ?h" for x + using that lamE[of x "m \ p" _ "x \ (m \ p) \ (n \ q)"] D unfolding union_fun_def + by auto + then have B:"?h \ (m \ p) \ (n \ q)" .. + have "m \ p \ domain(?h)" + unfolding union_fun_def using domain_lam by simp + with A B + show ?thesis using Pi_iff [THEN iffD2] by simp +qed + +lemma union_fun_action : + assumes + "env \ list(M)" + "env' \ list(M)" + "length(env) = m \ p" + "\ i . i \ m \ nth(f`i,env') = nth(i,env)" + "\ j . j \ p \ nth(g`j,env') = nth(j,env)" + shows "\ i . i \ m \ p \ + nth(i,env) = nth(union_fun(f,g,m,p)`i,env')" +proof - + let ?h = "union_fun(f,g,m,p)" + have "nth(x, env) = nth(?h`x,env')" if "x \ m \ p" for x + using that + proof (cases "x\m") + case True + with \x\m\ + have "?h`x = f`x" + unfolding union_fun_def beta by simp + with assms \x\m\ + have "nth(x,env) = nth(?h`x,env')" by simp + then show ?thesis . + next + case False + with \x \ m \ p\ + have + "x \ p" "x\m" by auto + then + have "?h`x = g`x" + unfolding union_fun_def beta by simp + with assms \x\p\ + have "nth(x,env) = nth(?h`x,env')" by simp + then show ?thesis . + qed + then show ?thesis by simp +qed + + +lemma id_fn_type : + assumes "n \ nat" + shows "id(n) \ n \ n" + unfolding id_def using \n\nat\ by simp + +lemma id_fn_action: + assumes "n \ nat" "env\list(M)" + shows "\ j . j < n \ nth(j,env) = nth(id(n)`j,env)" +proof - + show "nth(j,env) = nth(id(n)`j,env)" if "j < n" for j using that \n\nat\ ltD by simp +qed + + +definition + sum :: "[i,i,i,i,i] \ i" where + "sum(f,g,m,n,p) \ \j \ m#+p . 
if j nat" "n\nat" + "f \ m\n" "x \ m" + shows "sum(f,g,m,n,p)`x = f`x" +proof - + from \m\nat\ + have "m\m#+p" + using add_le_self[of m] by simp + with assms + have "x\m#+p" + using ltI[of x m] lt_trans2[of x m "m#+p"] ltD by simp + from assms + have "xx\m#+p\ + show ?thesis unfolding sum_def by simp +qed + +lemma sum_inr: + assumes "m \ nat" "n\nat" "p\nat" + "g\p\q" "m \ x" "x < m#+p" + shows "sum(f,g,m,n,p)`x = g`(x#-m)#+n" +proof - + from assms + have "x\nat" + using in_n_in_nat[of "m#+p"] ltD + by simp + with assms + have "\ xm#+p" + using ltD by simp + with \\ x + show ?thesis unfolding sum_def by simp +qed + + +lemma sum_action : + assumes "m \ nat" "n\nat" "p\nat" "q\nat" + "f \ m\n" "g\p\q" + "env \ list(M)" + "env' \ list(M)" + "env1 \ list(M)" + "env2 \ list(M)" + "length(env) = m" + "length(env1) = p" + "length(env') = n" + "\ i . i < m \ nth(i,env) = nth(f`i,env')" + "\ j. j < p \ nth(j,env1) = nth(g`j,env2)" + shows "\ i . i < m#+p \ + nth(i,env@env1) = nth(sum(f,g,m,n,p)`i,env'@env2)" +proof - + let ?h = "sum(f,g,m,n,p)" + from \m\nat\ \n\nat\ \q\nat\ + have "m\m#+p" "n\n#+q" "q\n#+q" + using add_le_self[of m] add_le_self2[of n q] by simp_all + from \p\nat\ + have "p = (m#+p)#-m" using diff_add_inverse2 by simp + have "nth(x, env @ env1) = nth(?h`x,env'@env2)" if "xm" "f`x \ n" "x\nat" + using assms sum_inl ltD apply_type[of f m _ x] in_n_in_nat by simp_all + with \x assms + have "f`x < n" "f`xnat" + using ltI in_n_in_nat by simp_all + with 2 \x assms + have "nth(x,env@env1) = nth(x,env)" + using nth_append[OF \env\list(M)\] \x\nat\ by simp + also + have + "... = nth(f`x,env')" + using 2 \x assms by simp + also + have "... = nth(f`x,env'@env2)" + using nth_append[OF \env'\list(M)\] \f`x \f`x \nat\ by simp + also + have "... = nth(?h`x,env'@env2)" + using 2 by simp + finally + have "nth(x, env @ env1) = nth(?h`x,env'@env2)" . + then show ?thesis . + next + case False + have "x\nat" + using that in_n_in_nat[of "m#+p" x] ltD \p\nat\ \m\nat\ by simp + with \length(env) = m\ + have "m\x" "length(env) \ x" + using not_lt_iff_le \m\nat\ \\x by simp_all + with \\x \length(env) = m\ + have 2 : "?h`x= g`(x#-m)#+n" "\ x x\nat\ \p=m#+p#-m\ + have "x#-m < p" + using diff_mono[OF _ _ _ \x \m\x\] by simp + then have "x#-m\p" using ltD by simp + with \g\p\q\ + have "g`(x#-m) \ q" by simp + with \q\nat\ \length(env') = n\ + have "g`(x#-m) < q" "g`(x#-m)\nat" using ltI in_n_in_nat by simp_all + with \q\nat\ \n\nat\ + have "(g`(x#-m))#+n g`(x#-m)#+n" "\ g`(x#-m)#+n < length(env')" + using add_lt_mono1[of "g`(x#-m)" _ n,OF _ \q\nat\] + add_le_self2[of n] \length(env') = n\ + by simp_all + from assms \\ x < length(env)\ \length(env) = m\ + have "nth(x,env @ env1) = nth(x#-m,env1)" + using nth_append[OF \env\list(M)\ \x\nat\] by simp + also + have "... = nth(g`(x#-m),env2)" + using assms \x#-m < p\ by simp + also + have "... = nth((g`(x#-m)#+n)#-length(env'),env2)" + using \length(env') = n\ + diff_add_inverse2 \g`(x#-m)\nat\ + by simp + also + have "... = nth((g`(x#-m)#+n),env'@env2)" + using nth_append[OF \env'\list(M)\] \n\nat\ \\ g`(x#-m)#+n < length(env')\ + by simp + also + have "... = nth(?h`x,env'@env2)" + using 2 by simp + finally + have "nth(x, env @ env1) = nth(?h`x,env'@env2)" . + then show ?thesis . 
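+      (* indices at or beyond length(env) are looked up in env1 via g and then shifted by length(env') so that they index into the env2 part *)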
+ qed + then show ?thesis by simp +qed + +lemma sum_type : + assumes "m \ nat" "n\nat" "p\nat" "q\nat" + "f \ m\n" "g\p\q" + shows "sum(f,g,m,n,p) \ (m#+p) \ (n#+q)" +proof - + let ?h = "sum(f,g,m,n,p)" + from \m\nat\ \n\nat\ \q\nat\ + have "m\m#+p" "n\n#+q" "q\n#+q" + using add_le_self[of m] add_le_self2[of n q] by simp_all + from \p\nat\ + have "p = (m#+p)#-m" using diff_add_inverse2 by simp + {fix x + assume 1: "x\m#+p" "xm" + using assms sum_inl ltD by simp_all + with \f\m\n\ + have "?h`x \ n" by simp + with \n\nat\ have "?h`x < n" using ltI by simp + with \n\n#+q\ + have "?h`x < n#+q" using lt_trans2 by simp + then + have "?h`x \ n#+q" using ltD by simp + } + then have 1:"?h`x \ n#+q" if "x\m#+p" "xm#+p" "m\x" + then have "xnat" using ltI in_n_in_nat[of "m#+p"] ltD by simp_all + with 1 + have 2 : "?h`x= g`(x#-m)#+n" + using assms sum_inr ltD by simp_all + from assms \x\nat\ \p=m#+p#-m\ + have "x#-m < p" using diff_mono[OF _ _ _ \x \m\x\] by simp + then have "x#-m\p" using ltD by simp + with \g\p\q\ + have "g`(x#-m) \ q" by simp + with \q\nat\ have "g`(x#-m) < q" using ltI by simp + with \q\nat\ + have "(g`(x#-m))#+n q\nat\] by simp + with 2 + have "?h`x \ n#+q" using ltD by simp + } + then have 2:"?h`x \ n#+q" if "x\m#+p" "m\x" for x using that . + have + D: "?h`x \ n#+q" if "x\m#+p" for x + using that + proof (cases "xm\nat\ have "m\x" using not_lt_iff_le that in_n_in_nat[of "m#+p"] by simp + then show ?thesis using 2 that by simp + qed + have A:"function(?h)" unfolding sum_def using function_lam by simp + have " x\ (m #+ p) \ (n #+ q)" if "x\ ?h" for x + using that lamE[of x "m#+p" _ "x \ (m #+ p) \ (n #+ q)"] D unfolding sum_def + by auto + then have B:"?h \ (m #+ p) \ (n #+ q)" .. + have "m #+ p \ domain(?h)" + unfolding sum_def using domain_lam by simp + with A B + show ?thesis using Pi_iff [THEN iffD2] by simp +qed + +lemma sum_type_id : + assumes + "f \ length(env)\length(env')" + "env \ list(M)" + "env' \ list(M)" + "env1 \ list(M)" + shows + "sum(f,id(length(env1)),length(env),length(env'),length(env1)) \ + (length(env)#+length(env1)) \ (length(env')#+length(env1))" + using assms length_type id_fn_type sum_type + by simp + +lemma sum_type_id_aux2 : + assumes + "f \ m\n" + "m \ nat" "n \ nat" + "env1 \ list(M)" + shows + "sum(f,id(length(env1)),m,n,length(env1)) \ + (m#+length(env1)) \ (n#+length(env1))" + using assms id_fn_type sum_type + by auto + +lemma sum_action_id : + assumes + "env \ list(M)" + "env' \ list(M)" + "f \ length(env)\length(env')" + "env1 \ list(M)" + "\ i . i < length(env) \ nth(i,env) = nth(f`i,env')" + shows "\ i . i < length(env)#+length(env1) \ + nth(i,env@env1) = nth(sum(f,id(length(env1)),length(env),length(env'),length(env1))`i,env'@env1)" +proof - + from assms + have "length(env)\nat" (is "?m \ _") by simp + from assms have "length(env')\nat" (is "?n \ _") by simp + from assms have "length(env1)\nat" (is "?p \ _") by simp + note lenv = id_fn_action[OF \?p\nat\ \env1\list(M)\] + note lenv_ty = id_fn_type[OF \?p\nat\] + { + fix i + assume "i < length(env)#+length(env1)" + have "nth(i,env@env1) = nth(sum(f,id(length(env1)),?m,?n,?p)`i,env'@env1)" + using sum_action[OF \?m\nat\ \?n\nat\ \?p\nat\ \?p\nat\ \f\?m\?n\ + lenv_ty \env\list(M)\ \env'\list(M)\ + \env1\list(M)\ \env1\list(M)\ _ + _ _ assms(5) lenv + ] \i by simp + } + then show "\ i . 
i < ?m#+length(env1) \ + nth(i,env@env1) = nth(sum(f,id(?p),?m,?n,?p)`i,env'@env1)" by simp +qed + +lemma sum_action_id_aux : + assumes + "f \ m\n" + "env \ list(M)" + "env' \ list(M)" + "env1 \ list(M)" + "length(env) = m" + "length(env') = n" + "length(env1) = p" + "\ i . i < m \ nth(i,env) = nth(f`i,env')" + shows "\ i . i < m#+length(env1) \ + nth(i,env@env1) = nth(sum(f,id(length(env1)),m,n,length(env1))`i,env'@env1)" + using assms length_type id_fn_type sum_action_id + by auto + + +definition + sum_id :: "[i,i] \ i" where + "sum_id(m,f) \ sum(\x\1.x,f,1,1,m)" + +lemma sum_id0 : "m\nat\sum_id(m,f)`0 = 0" + by(unfold sum_id_def,subst sum_inl,auto) + +lemma sum_idS : "p\nat \ q\nat \ f\p\q \ x \ p \ sum_id(p,f)`(succ(x)) = succ(f`x)" + by(subgoal_tac "x\nat",unfold sum_id_def,subst sum_inr, + simp_all add:ltI,simp_all add: app_nm in_n_in_nat) + +lemma sum_id_tc_aux : + "p \ nat \ q \ nat \ f \ p \ q \ sum_id(p,f) \ 1#+p \ 1#+q" + by (unfold sum_id_def,rule sum_type,simp_all) + +lemma sum_id_tc : + "n \ nat \ m \ nat \ f \ n \ m \ sum_id(n,f) \ succ(n) \ succ(m)" + by(rule ssubst[of "succ(n) \ succ(m)" "1#+n \ 1#+m"], + simp,rule sum_id_tc_aux,simp_all) + +subsection\Renaming of formulas\ + +consts ren :: "i\i" +primrec + "ren(Member(x,y)) = + (\ n \ nat . \ m \ nat. \f \ n \ m. Member (f`x, f`y))" + +"ren(Equal(x,y)) = + (\ n \ nat . \ m \ nat. \f \ n \ m. Equal (f`x, f`y))" + +"ren(Nand(p,q)) = + (\ n \ nat . \ m \ nat. \f \ n \ m. Nand (ren(p)`n`m`f, ren(q)`n`m`f))" + +"ren(Forall(p)) = + (\ n \ nat . \ m \ nat. \f \ n \ m. Forall (ren(p)`succ(n)`succ(m)`sum_id(n,f)))" + +lemma arity_meml : "l \ nat \ Member(x,y) \ formula \ arity(Member(x,y)) \ l \ x \ l" + by(simp,rule subsetD,rule le_imp_subset,assumption,simp) +lemma arity_memr : "l \ nat \ Member(x,y) \ formula \ arity(Member(x,y)) \ l \ y \ l" + by(simp,rule subsetD,rule le_imp_subset,assumption,simp) +lemma arity_eql : "l \ nat \ Equal(x,y) \ formula \ arity(Equal(x,y)) \ l \ x \ l" + by(simp,rule subsetD,rule le_imp_subset,assumption,simp) +lemma arity_eqr : "l \ nat \ Equal(x,y) \ formula \ arity(Equal(x,y)) \ l \ y \ l" + by(simp,rule subsetD,rule le_imp_subset,assumption,simp) +lemma nand_ar1 : "p \ formula \ q\formula \arity(p) \ arity(Nand(p,q))" + by (simp,rule Un_upper1_le,simp+) +lemma nand_ar2 : "p \ formula \ q\formula \arity(q) \ arity(Nand(p,q))" + by (simp,rule Un_upper2_le,simp+) + +lemma nand_ar1D : "p \ formula \ q\formula \ arity(Nand(p,q)) \ n \ arity(p) \ n" + by(auto simp add: le_trans[OF Un_upper1_le[of "arity(p)" "arity(q)"]]) +lemma nand_ar2D : "p \ formula \ q\formula \ arity(Nand(p,q)) \ n \ arity(q) \ n" + by(auto simp add: le_trans[OF Un_upper2_le[of "arity(p)" "arity(q)"]]) + + +lemma ren_tc : "p \ formula \ + (\ n m f . n \ nat \ m \ nat \ f \ n\m \ ren(p)`n`m`f \ formula)" + by (induct set:formula,auto simp add: app_nm sum_id_tc) + + +lemma arity_ren : + fixes "p" + assumes "p \ formula" + shows "\ n m f . 
n \ nat \ m \ nat \ f \ n\m \ arity(p) \ n \ arity(ren(p)`n`m`f)\m" + using assms +proof (induct set:formula) + case (Member x y) + then have "f`x \ m" "f`y \ m" + using Member assms by (simp add: arity_meml apply_funtype,simp add:arity_memr apply_funtype) + then show ?case using Member by (simp add: Un_least_lt ltI) +next + case (Equal x y) + then have "f`x \ m" "f`y \ m" + using Equal assms by (simp add: arity_eql apply_funtype,simp add:arity_eqr apply_funtype) + then show ?case using Equal by (simp add: Un_least_lt ltI) +next + case (Nand p q) + then have "arity(p)\arity(Nand(p,q))" + "arity(q)\arity(Nand(p,q))" + by (subst nand_ar1,simp,simp,simp,subst nand_ar2,simp+) + then have "arity(p)\n" + and "arity(q)\n" using Nand + by (rule_tac j="arity(Nand(p,q))" in le_trans,simp,simp)+ + then have "arity(ren(p)`n`m`f) \ m" and "arity(ren(q)`n`m`f) \ m" + using Nand by auto + then show ?case using Nand by (simp add:Un_least_lt) +next + case (Forall p) + from Forall have "succ(n)\nat" "succ(m)\nat" by auto + from Forall have 2: "sum_id(n,f) \ succ(n)\succ(m)" by (simp add:sum_id_tc) + from Forall have 3:"arity(p) \ succ(n)" by (rule_tac n="arity(p)" in natE,simp+) + then have "arity(ren(p)`succ(n)`succ(m)`sum_id(n,f))\succ(m)" using + Forall \succ(n)\nat\ \succ(m)\nat\ 2 by force + then show ?case using Forall 2 3 ren_tc arity_type pred_le by auto +qed + +lemma arity_forallE : "p \ formula \ m \ nat \ arity(Forall(p)) \ m \ arity(p) \ succ(m)" + by(rule_tac n="arity(p)" in natE,erule arity_type,simp+) + +lemma env_coincidence_sum_id : + assumes "m \ nat" "n \ nat" + "\ \ list(A)" "\' \ list(A)" + "f \ n \ m" + "\ i . i < n \ nth(i,\) = nth(f`i,\')" + "a \ A" "j \ succ(n)" + shows "nth(j,Cons(a,\)) = nth(sum_id(n,f)`j,Cons(a,\'))" +proof - + let ?g="sum_id(n,f)" + have "succ(n) \ nat" using \n\nat\ by simp + then have "j \ nat" using \j\succ(n)\ in_n_in_nat by blast + then have "nth(j,Cons(a,\)) = nth(?g`j,Cons(a,\'))" + proof (cases rule:natE[OF \j\nat\]) + case 1 + then show ?thesis using assms sum_id0 by simp + next + case (2 i) + with \j\succ(n)\ have "succ(i)\succ(n)" by simp + with \n\nat\ have "i \ n" using nat_succD assms by simp + have "f`i\m" using \f\n\m\ apply_type \i\n\ by simp + then have "f`i \ nat" using in_n_in_nat \m\nat\ by simp + have "nth(succ(i),Cons(a,\)) = nth(i,\)" using \i\nat\ by simp + also have "... = nth(f`i,\')" using assms \i\n\ ltI by simp + also have "... = nth(succ(f`i),Cons(a,\'))" using \f`i\nat\ by simp + also have "... = nth(?g`succ(i),Cons(a,\'))" + using assms sum_idS[OF \n\nat\ \m\nat\ \f\n\m\ \i \ n\] cases by simp + finally have "nth(succ(i),Cons(a,\)) = nth(?g`succ(i),Cons(a,\'))" . + then show ?thesis using \j=succ(i)\ by simp + qed + then show ?thesis . +qed + +lemma sats_iff_sats_ren : + fixes "\" + assumes "\ \ formula" + shows "\ n \ nat ; m \ nat ; \ \ list(M) ; \' \ list(M) ; f \ n \ m ; + arity(\) \ n ; + \ i . 
i < n \ nth(i,\) = nth(f`i,\') \ \ + sats(M,\,\) \ sats(M,ren(\)`n`m`f,\')" + using \\ \ formula\ +proof(induct \ arbitrary:n m \ \' f) + case (Member x y) + have "ren(Member(x,y))`n`m`f = Member(f`x,f`y)" using Member assms arity_type by force + moreover + have "x \ n" using Member arity_meml by simp + moreover + have "y \ n" using Member arity_memr by simp + ultimately + show ?case using Member ltI by simp +next + case (Equal x y) + have "ren(Equal(x,y))`n`m`f = Equal(f`x,f`y)" using Equal assms arity_type by force + moreover + have "x \ n" using Equal arity_eql by simp + moreover + have "y \ n" using Equal arity_eqr by simp + ultimately show ?case using Equal ltI by simp +next + case (Nand p q) + have "ren(Nand(p,q))`n`m`f = Nand(ren(p)`n`m`f,ren(q)`n`m`f)" using Nand by simp + moreover + have "arity(p) \ n" using Nand nand_ar1D by simp + moreover from this + have "i \ arity(p) \ i \ n" for i using subsetD[OF le_imp_subset[OF \arity(p) \ n\]] by simp + moreover from this + have "i \ arity(p) \ nth(i,\) = nth(f`i,\')" for i using Nand ltI by simp + moreover from this + have "sats(M,p,\) \ sats(M,ren(p)`n`m`f,\')" using \arity(p)\n\ Nand by simp + have "arity(q) \ n" using Nand nand_ar2D by simp + moreover from this + have "i \ arity(q) \ i \ n" for i using subsetD[OF le_imp_subset[OF \arity(q) \ n\]] by simp + moreover from this + have "i \ arity(q) \ nth(i,\) = nth(f`i,\')" for i using Nand ltI by simp + moreover from this + have "sats(M,q,\) \ sats(M,ren(q)`n`m`f,\')" using assms \arity(q)\n\ Nand by simp + ultimately + show ?case using Nand by simp +next + case (Forall p) + have 0:"ren(Forall(p))`n`m`f = Forall(ren(p)`succ(n)`succ(m)`sum_id(n,f))" + using Forall by simp + have 1:"sum_id(n,f) \ succ(n) \ succ(m)" (is "?g \ _") using sum_id_tc Forall by simp + then have 2: "arity(p) \ succ(n)" + using Forall le_trans[of _ "succ(pred(arity(p)))"] succpred_leI by simp + have "succ(n)\nat" "succ(m)\nat" using Forall by auto + then have A:"\ j .j < succ(n) \ nth(j, Cons(a, \)) = nth(?g`j, Cons(a, \'))" if "a\M" for a + using that env_coincidence_sum_id Forall ltD by force + have + "sats(M,p,Cons(a,\)) \ sats(M,ren(p)`succ(n)`succ(m)`?g,Cons(a,\'))" if "a\M" for a + proof - + have C:"Cons(a,\) \ list(M)" "Cons(a,\')\list(M)" using Forall that by auto + have "sats(M,p,Cons(a,\)) \ sats(M,ren(p)`succ(n)`succ(m)`?g,Cons(a,\'))" + using Forall(2)[OF \succ(n)\nat\ \succ(m)\nat\ C(1) C(2) 1 2 A[OF \a\M\]] by simp + then show ?thesis . 
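+  (* Sketch of the step just completed: under the extra bound variable a, the
+     induction hypothesis is applied with the lifted renaming ?g = sum_id(n,f),
+     which fixes the new variable and shifts f by one position, i.e.
+     sum_id(n,f)`0 = 0 and sum_id(n,f)`succ(x) = succ(f`x) (lemmas sum_id0 and
+     sum_idS above); the required agreement of the extended environments is
+     provided by env_coincidence_sum_id. *)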
+ qed + then show ?case using Forall 0 1 2 by simp +qed + +end diff --git a/thys/Forcing/Renaming_Auto.thy b/thys/Forcing/Renaming_Auto.thy new file mode 100644 --- /dev/null +++ b/thys/Forcing/Renaming_Auto.thy @@ -0,0 +1,57 @@ +theory Renaming_Auto + imports + Renaming + ZF.Finite + ZF.List +keywords + "rename" :: thy_decl % "ML" +and + "simple_rename" :: thy_decl % "ML" +and + "src" +and + "tgt" +abbrevs + "simple_rename" = "" + +begin + +lemmas app_fun = apply_iff[THEN iffD1] +lemmas nat_succI = nat_succ_iff[THEN iffD2] +ML_file\Utils.ml\ +ML_file\Renaming_ML.ml\ +ML\ + open Renaming_ML + + fun renaming_def mk_ren name from to ctxt = + let val to = to |> Syntax.read_term ctxt + val from = from |> Syntax.read_term ctxt + val (tc_lemma,action_lemma,fvs,r) = mk_ren from to ctxt + val (tc_lemma,action_lemma) = (fix_vars tc_lemma fvs ctxt , fix_vars action_lemma fvs ctxt) + val ren_fun_name = Binding.name (name ^ "_fn") + val ren_fun_def = Binding.name (name ^ "_fn_def") + val ren_thm = Binding.name (name ^ "_thm") + in + Local_Theory.note ((ren_thm, []), [tc_lemma,action_lemma]) ctxt |> snd |> + Local_Theory.define ((ren_fun_name, NoSyn), ((ren_fun_def, []), r)) |> snd + end; +\ + +ML\ +local + + val ren_parser = Parse.position (Parse.string -- + (Parse.$$$ "src" |-- Parse.string --| Parse.$$$ "tgt" -- Parse.string)); + + val _ = + Outer_Syntax.local_theory \<^command_keyword>\rename\ "ML setup for synthetic definitions" + (ren_parser >> (fn ((name,(from,to)),_) => renaming_def sum_rename name from to )) + + val _ = + Outer_Syntax.local_theory \<^command_keyword>\simple_rename\ "ML setup for synthetic definitions" + (ren_parser >> (fn ((name,(from,to)),_) => renaming_def ren_thm name from to )) + +in +end +\ +end \ No newline at end of file diff --git a/thys/Forcing/Renaming_ML.ml b/thys/Forcing/Renaming_ML.ml new file mode 100644 --- /dev/null +++ b/thys/Forcing/Renaming_ML.ml @@ -0,0 +1,178 @@ +(* Builds the finite mapping. 
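+   Given a source environment rho and a target environment rho', mk_ren returns the
+   finite set of pairs <i,j> such that the i-th element of rho occurs at position j
+   in rho'; an error is raised if some element of rho is missing from rho'.
+   For example, with illustrative terms only: for rho = [x,y] and rho' = [y,x]
+   the result is {<0,1>, <1,0>}.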
*) +structure Renaming_ML = struct +open Utils + +fun sum_ f g m n p = @{const Renaming.sum} $ f $ g $ m $ n $ p + +fun mk_ren rho rho' ctxt = + let val rs = to_ML_list rho + val rs' = to_ML_list rho' + val ixs = 0 upto (length rs-1) + fun err t = "The element " ^ Syntax.string_of_term ctxt t ^ " is missing in the target environment" + fun mkp i = + case find_index (fn x => x = nth rs i) rs' of + ~1 => nth rs i |> err |> error + | j => mk_Pair (mk_ZFnat i) (mk_ZFnat j) + in map mkp ixs |> mk_FinSet + end + +fun mk_dom_lemma ren rho = + let val n = rho |> to_ML_list |> length |> mk_ZFnat + in eq_ n (@{const domain} $ ren) |> tp +end + +fun ren_tc_goal fin ren rho rho' = + let val n = rho |> to_ML_list |> length + val m = rho' |> to_ML_list |> length + val fun_ty = if fin then @{const_name "FiniteFun"} else @{const_abbrev "function_space"} + val ty = Const (fun_ty,@{typ "i \ i \ i"}) $ mk_ZFnat n $ mk_ZFnat m + in mem_ ren ty |> tp +end + +fun ren_action_goal ren rho rho' ctxt = + let val setV = Variable.variant_frees ctxt [] [("A",@{typ i})] |> hd |> Free + val j = Variable.variant_frees ctxt [] [("j",@{typ i})] |> hd |> Free + val vs = rho |> to_ML_list + val ws = rho' |> to_ML_list |> filter isFree + val h1 = subset_ (vs|> mk_FinSet) setV + val h2 = lt_ j (mk_ZFnat (length vs)) + val fvs = ([j,setV ] @ ws |> filter isFree) |> map freeName + val lhs = nth_ j rho + val rhs = nth_ (app_ ren j) rho' + val concl = eq_ lhs rhs + in (Logic.list_implies([tp h1,tp h2],tp concl),fvs) + end + + fun sum_tc_goal f m n p = + let val m_length = m |> to_ML_list |> length |> mk_ZFnat + val n_length = n |> to_ML_list |> length |> mk_ZFnat + val p_length = p |> length_ + val id_fun = @{const id} $ p_length + val sum_fun = sum_ f id_fun m_length n_length p_length + val dom = add_ m_length p_length + val codom = add_ n_length p_length + val fun_ty = @{const_abbrev "function_space"} + val ty = Const (fun_ty,@{typ "i \ i \ i"}) $ dom $ codom + in (sum_fun, mem_ sum_fun ty |> tp) + end + +fun sum_action_goal ren rho rho' ctxt = + let val setV = Variable.variant_frees ctxt [] [("A",@{typ i})] |> hd |> Free + val envV = Variable.variant_frees ctxt [] [("env",@{typ i})] |> hd |> Free + val j = Variable.variant_frees ctxt [] [("j",@{typ i})] |> hd |> Free + val vs = rho |> to_ML_list + val ws = rho' |> to_ML_list |> filter isFree + val envL = envV |> length_ + val rhoL = vs |> length |> mk_ZFnat + val h1 = subset_ (append vs ws |> mk_FinSet) setV + val h2 = lt_ j (add_ rhoL envL) + val h3 = mem_ envV (list_ setV) + val fvs = ([j,setV,envV] @ ws |> filter isFree) |> map freeName + val lhs = nth_ j (concat_ rho envV) + val rhs = nth_ (app_ ren j) (concat_ rho' envV) + val concl = eq_ lhs rhs + in (Logic.list_implies([tp h1,tp h2,tp h3],tp concl),fvs) + end + + (* Tactics *) + fun fin ctxt = + REPEAT (resolve_tac ctxt [@{thm nat_succI}] 1) + THEN resolve_tac ctxt [@{thm nat_0I}] 1 + + fun step ctxt thm = + asm_full_simp_tac ctxt 1 + THEN asm_full_simp_tac ctxt 1 + THEN EqSubst.eqsubst_tac ctxt [1] [@{thm app_fun} OF [thm]] 1 + THEN simp_tac ctxt 1 + THEN simp_tac ctxt 1 + + fun fin_fun_tac ctxt = + REPEAT ( + resolve_tac ctxt [@{thm consI}] 1 + THEN resolve_tac ctxt [@{thm ltD}] 1 + THEN simp_tac ctxt 1 + THEN resolve_tac ctxt [@{thm ltD}] 1 + THEN simp_tac ctxt 1) + THEN resolve_tac ctxt [@{thm emptyI}] 1 + THEN REPEAT (simp_tac ctxt 1) + + fun ren_thm e e' ctxt = + let + val r = mk_ren e e' ctxt + val fin_tc_goal = ren_tc_goal true r e e' + val dom_goal = mk_dom_lemma r e + val tc_goal = ren_tc_goal false r e e' + 
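+      (* Reading of the goals built above, inferred from their constructors:
+         fin_tc_goal states that the finite map r lies in n -||> m, dom_goal that
+         domain(r) = n, tc_goal that r is a function in n -> m, and the action
+         goal constructed next states that r sends each index of the source
+         environment to the position of the same term in the target environment. *)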
val (action_goal,fvs) = ren_action_goal r e e' ctxt + val fin_tc_lemma = Goal.prove ctxt [] [] fin_tc_goal (fn _ => fin_fun_tac ctxt) + val dom_lemma = Goal.prove ctxt [] [] dom_goal (fn _ => blast_tac ctxt 1) + val tc_lemma = Goal.prove ctxt [] [] tc_goal + (fn _ => EqSubst.eqsubst_tac ctxt [1] [dom_lemma] 1 + THEN resolve_tac ctxt [@{thm FiniteFun_is_fun}] 1 + THEN resolve_tac ctxt [fin_tc_lemma] 1) + val action_lemma = Goal.prove ctxt [] [] action_goal + (fn _ => + forward_tac ctxt [@{thm le_natI}] 1 + THEN fin ctxt + THEN REPEAT (resolve_tac ctxt [@{thm natE}] 1 + THEN step ctxt tc_lemma) + THEN (step ctxt tc_lemma) + ) + in (action_lemma, tc_lemma, fvs, r) + end + +(* +Returns the sum renaming, the goal for type_checking, and the actual lemmas +for the left part of the sum. +*) + fun sum_ren_aux e e' ctxt = + let val env = Variable.variant_frees ctxt [] [("env",@{typ i})] |> hd |> Free + val (left_action_lemma,left_tc_lemma,_,r) = ren_thm e e' ctxt + val (sum_ren,sum_goal_tc) = sum_tc_goal r e e' env + val setV = Variable.variant_frees ctxt [] [("A",@{typ i})] |> hd |> Free + fun hyp en = mem_ en (list_ setV) + in (sum_ren, + freeName env, + Logic.list_implies (map (fn e => e |> hyp |> tp) [env], sum_goal_tc), + left_tc_lemma, + left_action_lemma) +end + +fun sum_tc_lemma rho rho' ctxt = + let val (sum_ren, envVar, tc_goal, left_tc_lemma, left_action_lemma) = sum_ren_aux rho rho' ctxt + val (goal,fvs) = sum_action_goal sum_ren rho rho' ctxt + val r = mk_ren rho rho' ctxt + in (sum_ren, goal,envVar, r,left_tc_lemma, left_action_lemma ,fvs, Goal.prove ctxt [] [] tc_goal + (fn _ => + resolve_tac ctxt [@{thm sum_type_id_aux2}] 1 + THEN asm_simp_tac ctxt 4 + THEN simp_tac ctxt 1 + THEN resolve_tac ctxt [left_tc_lemma] 1 + THEN (fin ctxt) + THEN (fin ctxt) + )) + end + +fun sum_rename rho rho' ctxt = + let + val (_, goal, _, left_rename, left_tc_lemma, left_action_lemma, fvs, sum_tc_lemma) = sum_tc_lemma rho rho' ctxt + val action_lemma = fix_vars left_action_lemma fvs ctxt + in (sum_tc_lemma, Goal.prove ctxt [] [] goal + (fn _ => resolve_tac ctxt [@{thm sum_action_id_aux}] 1 + THEN (simp_tac ctxt 4) + THEN (simp_tac ctxt 1) + THEN (resolve_tac ctxt [left_tc_lemma] 1) + THEN (asm_full_simp_tac ctxt 1) + THEN (asm_full_simp_tac ctxt 1) + THEN (simp_tac ctxt 1) + THEN (simp_tac ctxt 1) + THEN (simp_tac ctxt 1) + THEN (full_simp_tac ctxt 1) + THEN (resolve_tac ctxt [action_lemma] 1) + THEN (blast_tac ctxt 1) + THEN (full_simp_tac ctxt 1) + THEN (full_simp_tac ctxt 1) + + ), fvs, left_rename + ) +end ; +end \ No newline at end of file diff --git a/thys/Forcing/Replacement_Axiom.thy b/thys/Forcing/Replacement_Axiom.thy new file mode 100644 --- /dev/null +++ b/thys/Forcing/Replacement_Axiom.thy @@ -0,0 +1,564 @@ +section\The Axiom of Replacement in $M[G]$\ +theory Replacement_Axiom + imports + Least Relative_Univ Separation_Axiom Renaming_Auto +begin + +rename "renrep1" src "[p,P,leq,o,\,\]" tgt "[V,\,\,p,\,P,leq,o]" + +definition renrep_fn :: "i \ i" where + "renrep_fn(env) \ sum(renrep1_fn,id(length(env)),6,8,length(env))" + +definition + renrep :: "[i,i] \ i" where + "renrep(\,env) = ren(\)`(6#+length(env))`(8#+length(env))`renrep_fn(env)" + +lemma renrep_type [TC]: + assumes "\\formula" "env \ list(M)" + shows "renrep(\,env) \ formula" + unfolding renrep_def renrep_fn_def renrep1_fn_def + using assms renrep1_thm(1) ren_tc + by simp + +lemma arity_renrep: + assumes "\\formula" "arity(\)\ 6#+length(env)" "env \ list(M)" + shows "arity(renrep(\,env)) \ 8#+length(env)" + unfolding renrep_def 
renrep_fn_def renrep1_fn_def + using assms renrep1_thm(1) arity_ren + by simp + +lemma renrep_sats : + assumes "arity(\) \ 6 #+ length(env)" + "[P,leq,o,p,\,\] @ env \ list(M)" + "V \ M" "\ \ M" + "\\formula" + shows "sats(M, \, [p,P,leq,o,\,\] @ env) \ sats(M, renrep(\,env), [V,\,\,p,\,P,leq,o] @ env)" + unfolding renrep_def renrep_fn_def renrep1_fn_def + by (rule sats_iff_sats_ren,insert assms, auto simp add:renrep1_thm(1)[of _ M,simplified] + renrep1_thm(2)[simplified,where p=p and \=\]) + +rename "renpbdy1" src "[\,p,\,P,leq,o]" tgt "[\,p,x,\,P,leq,o]" + +definition renpbdy_fn :: "i \ i" where + "renpbdy_fn(env) \ sum(renpbdy1_fn,id(length(env)),6,7,length(env))" + +definition + renpbdy :: "[i,i] \ i" where + "renpbdy(\,env) = ren(\)`(6#+length(env))`(7#+length(env))`renpbdy_fn(env)" + + +lemma + renpbdy_type [TC]: "\\formula \ env\list(M) \ renpbdy(\,env) \ formula" + unfolding renpbdy_def renpbdy_fn_def renpbdy1_fn_def + using renpbdy1_thm(1) ren_tc + by simp + +lemma arity_renpbdy: "\\formula \ arity(\) \ 6 #+ length(env) \ env\list(M) \ arity(renpbdy(\,env)) \ 7 #+ length(env)" + unfolding renpbdy_def renpbdy_fn_def renpbdy1_fn_def + using renpbdy1_thm(1) arity_ren + by simp + +lemma + sats_renpbdy: "arity(\) \ 6 #+ length(nenv) \ [\,p,x,\,P,leq,o,\] @ nenv \ list(M) \ \\formula \ + sats(M, \, [\,p,\,P,leq,o] @ nenv) \ sats(M, renpbdy(\,nenv), [\,p,x,\,P,leq,o] @ nenv)" + unfolding renpbdy_def renpbdy_fn_def renpbdy1_fn_def + by (rule sats_iff_sats_ren,auto simp add: renpbdy1_thm(1)[of _ M,simplified] + renpbdy1_thm(2)[simplified,where \=\ and x=x]) + + +rename "renbody1" src "[x,\,P,leq,o]" tgt "[\,x,m,P,leq,o]" + +definition renbody_fn :: "i \ i" where + "renbody_fn(env) \ sum(renbody1_fn,id(length(env)),5,6,length(env))" + +definition + renbody :: "[i,i] \ i" where + "renbody(\,env) = ren(\)`(5#+length(env))`(6#+length(env))`renbody_fn(env)" + +lemma + renbody_type [TC]: "\\formula \ env\list(M) \ renbody(\,env) \ formula" + unfolding renbody_def renbody_fn_def renbody1_fn_def + using renbody1_thm(1) ren_tc + by simp + +lemma arity_renbody: "\\formula \ arity(\) \ 5 #+ length(env) \ env\list(M) \ + arity(renbody(\,env)) \ 6 #+ length(env)" + unfolding renbody_def renbody_fn_def renbody1_fn_def + using renbody1_thm(1) arity_ren + by simp + +lemma + sats_renbody: "arity(\) \ 5 #+ length(nenv) \ [\,x,m,P,leq,o] @ nenv \ list(M) \ \\formula \ + sats(M, \, [x,\,P,leq,o] @ nenv) \ sats(M, renbody(\,nenv), [\,x,m,P,leq,o] @ nenv)" + unfolding renbody_def renbody_fn_def renbody1_fn_def + by (rule sats_iff_sats_ren, auto simp add:renbody1_thm(1)[of _ M,simplified] + renbody1_thm(2)[where \=\ and m=m,simplified]) + +context G_generic +begin + +lemma pow_inter_M: + assumes + "x\M" "y\M" + shows + "powerset(##M,x,y) \ y = Pow(x) \ M" + using assms by auto + + +schematic_goal sats_prebody_fm_auto: + assumes + "\\formula" "[P,leq,one,p,\,\] @ nenv \list(M)" "\\M" "arity(\) \ 2 #+ length(nenv)" + shows + "(\\\M. \V\M. 
is_Vset(##M,\,V) \ \\V \ sats(M,forces(\),[p,P,leq,one,\,\] @ nenv)) + \ sats(M,?prebody_fm,[\,p,\,P,leq,one] @ nenv)" + apply (insert assms; (rule sep_rules is_Vset_iff_sats[OF _ _ _ _ _ nonempty[simplified]] | simp)) + apply (rule sep_rules is_Vset_iff_sats is_Vset_iff_sats[OF _ _ _ _ _ nonempty[simplified]] | simp)+ + apply (rule nonempty[simplified]) + apply (simp_all) + apply (rule length_type[THEN nat_into_Ord], blast)+ + apply ((rule sep_rules | simp)) + apply ((rule sep_rules | simp)) + apply ((rule sep_rules | simp)) + apply ((rule sep_rules | simp)) + apply ((rule sep_rules | simp)) + apply ((rule sep_rules | simp)) + apply ((rule sep_rules | simp)) + apply (rule renrep_sats[simplified]) + apply (insert assms) + apply(auto simp add: renrep_type definability) +proof - + from assms + have "nenv\list(M)" by simp + with \arity(\)\_\ \\\_\ + show "arity(forces(\)) \ succ(succ(succ(succ(succ(succ(length(nenv)))))))" + using arity_forces_le by simp +qed + +(* The formula synthesized above *) +synthesize_notc "prebody_fm" from_schematic sats_prebody_fm_auto + +lemma prebody_fm_type [TC]: + assumes "\\formula" + "env \ list(M)" + shows "prebody_fm(\,env)\formula" +proof - + from \\\formula\ + have "forces(\)\formula" by simp + then + have "renrep(forces(\),env)\formula" + using \env\list(M)\ by simp + then show ?thesis unfolding prebody_fm_def by simp +qed + +lemmas new_fm_defs = fm_defs is_transrec_fm_def is_eclose_fm_def mem_eclose_fm_def + finite_ordinal_fm_def is_wfrec_fm_def Memrel_fm_def eclose_n_fm_def is_recfun_fm_def is_iterates_fm_def + iterates_MH_fm_def is_nat_case_fm_def quasinat_fm_def pre_image_fm_def restriction_fm_def + +lemma sats_prebody_fm: + assumes + "[P,leq,one,p,\] @ nenv \list(M)" "\\formula" "\\M" "arity(\) \ 2 #+ length(nenv)" + shows + "sats(M,prebody_fm(\,nenv),[\,p,\,P,leq,one] @ nenv) \ + (\\\M. \V\M. is_Vset(##M,\,V) \ \\V \ sats(M,forces(\),[p,P,leq,one,\,\] @ nenv))" + unfolding prebody_fm_def using assms sats_prebody_fm_auto by force + + +lemma arity_prebody_fm: + assumes + "\\formula" "\\M" "env \ list(M)" "arity(\) \ 2 #+ length(env)" + shows + "arity(prebody_fm(\,env))\6 #+ length(env)" + unfolding prebody_fm_def is_HVfrom_fm_def is_powapply_fm_def + using assms new_fm_defs nat_simp_union + arity_renrep[of "forces(\)"] arity_forces_le[simplified] pred_le by auto + + +definition + body_fm' :: "[i,i]\i" where + "body_fm'(\,env) \ Exists(Exists(And(pair_fm(0,1,2),renpbdy(prebody_fm(\,env),env))))" + +lemma body_fm'_type[TC]: "\\formula \ env\list(M) \ body_fm'(\,env)\formula" + unfolding body_fm'_def using prebody_fm_type + by simp + +lemma arity_body_fm': + assumes + "\\formula" "\\M" "env\list(M)" "arity(\) \ 2 #+ length(env)" + shows + "arity(body_fm'(\,env))\5 #+ length(env)" + unfolding body_fm'_def + using assms new_fm_defs nat_simp_union arity_prebody_fm pred_le arity_renpbdy[of "prebody_fm(\,env)"] + by auto + +lemma sats_body_fm': + assumes + "\t p. x=\t,p\" "x\M" "[\,P,leq,one,p,\] @ nenv \list(M)" "\\formula" "arity(\) \ 2 #+ length(nenv)" + shows + "sats(M,body_fm'(\,nenv),[x,\,P,leq,one] @ nenv) \ + sats(M,renpbdy(prebody_fm(\,nenv),nenv),[fst(x),snd(x),x,\,P,leq,one] @ nenv)" + using assms fst_snd_closed[OF \x\M\] unfolding body_fm'_def + by (auto) + +definition + body_fm :: "[i,i]\i" where + "body_fm(\,env) \ renbody(body_fm'(\,env),env)" + +lemma body_fm_type [TC]: "env\list(M) \ \\formula \ body_fm(\,env)\formula" + unfolding body_fm_def by simp + +lemma sats_body_fm: + assumes + "\t p. 
x=\t,p\" "[\,x,m,P,leq,one] @ nenv \list(M)" + "\\formula" "arity(\) \ 2 #+ length(nenv)" + shows + "sats(M,body_fm(\,nenv),[\,x,m,P,leq,one] @ nenv) \ + sats(M,renpbdy(prebody_fm(\,nenv),nenv),[fst(x),snd(x),x,\,P,leq,one] @ nenv)" + using assms sats_body_fm' sats_renbody[OF _ assms(2), symmetric] arity_body_fm' + unfolding body_fm_def + by auto + +lemma sats_renpbdy_prebody_fm: + assumes + "\t p. x=\t,p\" "x\M" "[\,m,P,leq,one] @ nenv \list(M)" + "\\formula" "arity(\) \ 2 #+ length(nenv)" + shows + "sats(M,renpbdy(prebody_fm(\,nenv),nenv),[fst(x),snd(x),x,\,P,leq,one] @ nenv) \ + sats(M,prebody_fm(\,nenv),[fst(x),snd(x),\,P,leq,one] @ nenv)" + using assms fst_snd_closed[OF \x\M\] + sats_renpbdy[OF arity_prebody_fm _ prebody_fm_type, of concl:M, symmetric] + by force + +lemma body_lemma: + assumes + "\t p. x=\t,p\" "x\M" "[x,\,m,P,leq,one] @ nenv \list(M)" + "\\formula" "arity(\) \ 2 #+ length(nenv)" + shows + "sats(M,body_fm(\,nenv),[\,x,m,P,leq,one] @ nenv) \ + (\\\M. \V\M. is_Vset(\a. (##M)(a),\,V) \ \ \ V \ (snd(x) \ \ ([fst(x),\]@nenv)))" + using assms sats_body_fm[of x \ m nenv] sats_renpbdy_prebody_fm[of x \] + sats_prebody_fm[of "snd(x)" "fst(x)"] fst_snd_closed[OF \x\M\] + by (simp, simp flip: setclass_iff,simp) + +lemma Replace_sats_in_MG: + assumes + "c\M[G]" "env \ list(M[G])" + "\ \ formula" "arity(\) \ 2 #+ length(env)" + "univalent(##M[G], c, \x v. (M[G] , [x,v]@env \ \) )" + shows + "{v. x\c, v\M[G] \ (M[G] , [x,v]@env \ \)} \ M[G]" +proof - + let ?R = "\ x v . v\M[G] \ (M[G] , [x,v]@env \ \)" + from \c\M[G]\ + obtain \' where "val(G, \') = c" "\' \ M" + using GenExt_def by auto + then + have "domain(\')\P\M" (is "?\\M") + using cartprod_closed P_in_M domain_closed by simp + from \val(G, \') = c\ + have "c \ val(G,?\)" + using def_val[of G ?\] one_in_P one_in_G[OF generic] elem_of_val + domain_of_prod[OF one_in_P, of "domain(\')"] by force + from \env \ _\ + obtain nenv where "nenv\list(M)" "env = map(val(G),nenv)" + using map_val by auto + then + have "length(nenv) = length(env)" by simp + define f where "f(\p) \ \ \. \\M \ (\\\M. \ \ Vset(\) \ + (snd(\p) \ \ ([fst(\p),\] @ nenv)))" (is "_ \ \ \. ?P(\p,\)") for \p + have "f(\p) = (\ \. \\M \ (\\\M. \V\M. is_Vset(##M,\,V) \ \\V \ + (snd(\p) \ \ ([fst(\p),\] @ nenv))))" (is "_ = (\ \. \\M \ ?Q(\p,\))") for \p + unfolding f_def using Vset_abs Vset_closed Ord_Least_cong[of "?P(\p)" "\ \. \\M \ ?Q(\p,\)"] + by (simp, simp del:setclass_iff) + moreover + have "f(\p) \ M" for \p + unfolding f_def using Least_closed[of "?P(\p)"] by simp + ultimately + have 1:"least(##M,\\. ?Q(\p,\),f(\p))" for \p + using least_abs[of "\\. \\M \ ?Q(\p,\)" "f(\p)"] least_conj + by (simp flip: setclass_iff) + have "Ord(f(\p))" for \p unfolding f_def by simp + define QQ where "QQ\?Q" + from 1 + have "least(##M,\\. QQ(\p,\),f(\p))" for \p + unfolding QQ_def . + from \arity(\) \ _\ \length(nenv) = _\ + have "arity(\) \ 2 #+ length(nenv)" + by simp + moreover + note assms \nenv\list(M)\ \?\\M\ + moreover + have "\p\?\ \ \t p. \p=\t,p\" for \p + by auto + ultimately + have body:"M , [\,\p,m,P,leq,one] @ nenv \ body_fm(\,nenv) \ ?Q(\p,\)" + if "\p\?\" "\p\M" "m\M" "\\M" for \ \p m + using that P_in_M leq_in_M one_in_M body_lemma[of \p \ m nenv \] by simp + let ?f_fm="least_fm(body_fm(\,nenv),1)" + { + fix \p m + assume asm: "\p\M" "\p\?\" "m\M" + note inM = this P_in_M leq_in_M one_in_M \nenv\list(M)\ + with body + have body':"\\. \ \ M \ (\\\M. \V\M. is_Vset(\a. 
(##M)(a), \, V) \ \ \ V \ + (snd(\p) \ \ ([fst(\p),\] @ nenv))) \ + M, Cons(\, [\p, m, P, leq, one] @ nenv) \ body_fm(\,nenv)" by simp + from inM + have "M , [\p,m,P,leq,one] @ nenv \ ?f_fm \ least(##M, QQ(\p), m)" + using sats_least_fm[OF body', of 1] unfolding QQ_def + by (simp, simp flip: setclass_iff) + } + then + have "M, [\p,m,P,leq,one] @ nenv \ ?f_fm \ least(##M, QQ(\p), m)" + if "\p\M" "\p\?\" "m\M" for \p m using that by simp + then + have "univalent(##M, ?\, \\p m. M , [\p,m] @ ([P,leq,one] @ nenv) \ ?f_fm)" + unfolding univalent_def by (auto intro:unique_least) + moreover from \length(_) = _\ \env \ _\ + have "length([P,leq,one] @ nenv) = 3 #+ length(env)" by simp + moreover from \arity(_) \ 2 #+ length(nenv)\ + \length(_) = length(_)\[symmetric] \nenv\_\ \\\_\ + have "arity(?f_fm) \ 5 #+ length(env)" + unfolding body_fm_def new_fm_defs least_fm_def + using arity_forces arity_renrep arity_renbody arity_body_fm' nonempty + by (simp add: pred_Un Un_assoc, simp add: Un_assoc[symmetric] nat_union_abs1 pred_Un) + (auto simp add: nat_simp_union, rule pred_le, auto intro:leI) + moreover from \\\formula\ \nenv\list(M)\ + have "?f_fm\formula" by simp + moreover + note inM = P_in_M leq_in_M one_in_M \nenv\list(M)\ \?\\M\ + ultimately + obtain Y where "Y\M" + "\m\M. m \ Y \ (\\p\M. \p \ ?\ \ M, [\p,m] @ ([P,leq,one] @ nenv) \ ?f_fm)" + using replacement_ax[of ?f_fm "[P,leq,one] @ nenv"] + unfolding strong_replacement_def by auto + with \least(_,QQ(_),f(_))\ \f(_) \ M\ \?\\M\ + \_ \ _ \ _ \ M,_ \ ?f_fm \ least(_,_,_)\ + have "f(\p)\Y" if "\p\?\" for \p + using that transitivity[OF _ \?\\M\] + by (clarsimp, rule_tac x="\x,y\" in bexI, auto) + moreover + have "{y\Y. Ord(y)} \ M" + using \Y\M\ separation_ax sats_ordinal_fm trans_M + separation_cong[of "##M" "\y. sats(M,ordinal_fm(0),[y])" "Ord"] + separation_closed by simp + then + have "\ {y\Y. Ord(y)} \ M" (is "?sup \ M") + using Union_closed by simp + then + have "{x\Vset(?sup). x \ M} \ M" + using Vset_closed by simp + moreover + have "{one} \ M" + using one_in_M singletonM by simp + ultimately + have "{x\Vset(?sup). x \ M} \ {one} \ M" (is "?big_name \ M") + using cartprod_closed by simp + then + have "val(G,?big_name) \ M[G]" + by (blast intro:GenExtI) + { + fix v x + assume "x\c" + moreover + note \val(G,\')=c\ \\'\M\ + moreover + from calculation + obtain \ p where "\\,p\\\'" "val(G,\) = x" "p\G" "\\M" + using elem_of_val_pair'[of \' x G] by blast + moreover + assume "v\M[G]" + then + obtain \ where "val(G,\) = v" "\\M" + using GenExtD by auto + moreover + assume "sats(M[G], \, [x,v] @ env)" + moreover + note \\\_\ \nenv\_\ \env = _\ \arity(\)\ 2 #+ length(env)\ + ultimately + obtain q where "q\G" "q \ \ ([\,\]@nenv)" + using truth_lemma[OF \\\_\ generic, symmetric, of "[\,\] @ nenv"] + by auto + with \\\,p\\\'\ \\\,q\\?\ \ f(\\,q\)\Y\ + have "f(\\,q\)\Y" + using generic unfolding M_generic_def filter_def by blast + let ?\="succ(rank(\))" + note \\\M\ + moreover from this + have "?\ \ M" + using rank_closed cons_closed by (simp flip: setclass_iff) + moreover + have "\ \ Vset(?\)" + using Vset_Ord_rank_iff by auto + moreover + note \q \ \ ([\,\] @ nenv)\ + ultimately + have "?P(\\,q\,?\)" by (auto simp del: Vset_rank_iff) + moreover + have "(\ \. ?P(\\,q\,\)) = f(\\,q\)" + unfolding f_def by simp + ultimately + obtain \ where "\\M" "\ \ Vset(f(\\,q\))" "q \ \ ([\,\] @ nenv)" + using LeastI[of "\ \. 
?P(\\,q\,\)" ?\] by auto + with \q\G\ \\\M\ \nenv\_\ \arity(\)\ 2 #+ length(nenv)\ + have "M[G], map(val(G),[\,\] @ nenv) \ \" + using truth_lemma[OF \\\_\ generic, of "[\,\] @ nenv"] by auto + moreover from \x\c\ \c\M[G]\ + have "x\M[G]" using transitivity_MG by simp + moreover + note \M[G],[x,v] @ env\ \\ \env = map(val(G),nenv)\ \\\M\ \val(G,\)=x\ + \univalent(##M[G],_,_)\ \x\c\ \v\M[G]\ + ultimately + have "v=val(G,\)" + using GenExtI[of \ G] unfolding univalent_def by (auto) + from \\ \ Vset(f(\\,q\))\ \Ord(f(_))\ \f(\\,q\)\Y\ + have "\ \ Vset(?sup)" + using Vset_Ord_rank_iff lt_Union_iff[of _ "rank(\)"] by auto + with \\\M\ + have "val(G,\) \ val(G,?big_name)" + using domain_of_prod[of one "{one}" "{x\Vset(?sup). x \ M}" ] def_val[of G ?big_name] + one_in_G[OF generic] one_in_P by (auto simp del: Vset_rank_iff) + with \v=val(G,\)\ + have "v \ val(G,{x\Vset(?sup). x \ M} \ {one})" + by simp + } + then + have "{v. x\c, ?R(x,v)} \ val(G,?big_name)" (is "?repl\?big") + by blast + with \?big_name\M\ + have "?repl = {v\?big. \x\c. sats(M[G], \, [x,v] @ env )}" (is "_ = ?rhs") + proof(intro equalityI subsetI) + fix v + assume "v\?repl" + with \?repl\?big\ + obtain x where "x\c" "M[G], [x, v] @ env \ \" "v\?big" + using subsetD by auto + with \univalent(##M[G],_,_)\ \c\M[G]\ + show "v \ ?rhs" + unfolding univalent_def + using transitivity_MG ReplaceI[of "\ x v. \x\c. M[G], [x, v] @ env \ \"] by blast + next + fix v + assume "v\?rhs" + then + obtain x where + "v\val(G, ?big_name)" "M[G], [x, v] @ env \ \" "x\c" + by blast + moreover from this \c\M[G]\ + have "v\M[G]" "x\M[G]" + using transitivity_MG GenExtI[OF \?big_name\_\,of G] by auto + moreover from calculation \univalent(##M[G],_,_)\ + have "?R(x,y) \ y = v" for y + unfolding univalent_def by auto + ultimately + show "v\?repl" + using ReplaceI[of ?R x v c] + by blast + qed + moreover + let ?\ = "Exists(And(Member(0,2#+length(env)),\))" + have "v\M[G] \ (\x\c. M[G], [x,v] @ env \ \) \ M[G], [v] @ env @ [c] \ ?\" + "arity(?\) \ 2 #+ length(env)" "?\\formula" + for v + proof - + fix v + assume "v\M[G]" + with \c\M[G]\ + have "nth(length(env)#+1,[v]@env@[c]) = c" + using \env\_\nth_concat[of v c "M[G]" env] + by auto + note inMG= \nth(length(env)#+1,[v]@env@[c]) = c\ \c\M[G]\ \v\M[G]\ \env\_\ + show "(\x\c. M[G], [x,v] @ env \ \) \ M[G], [v] @ env @ [c] \ ?\" + proof + assume "\x\c. M[G], [x, v] @ env \ \" + then obtain x where + "x\c" "M[G], [x, v] @ env \ \" "x\M[G]" + using transitivity_MG[OF _ \c\M[G]\] + by auto + with \\\_\ \arity(\)\2#+length(env)\ inMG + show "M[G], [v] @ env @ [c] \ Exists(And(Member(0, 2 #+ length(env)), \))" + using arity_sats_iff[of \ "[c]" _ "[x,v]@env"] + by auto + next + assume "M[G], [v] @ env @ [c] \ Exists(And(Member(0, 2 #+ length(env)), \))" + with inMG + obtain x where + "x\M[G]" "x\c" "M[G], [x,v]@env@[c] \ \" + by auto + with \\\_\ \arity(\)\2#+length(env)\ inMG + show "\x\c. M[G], [x, v] @ env\ \" + using arity_sats_iff[of \ "[c]" _ "[x,v]@env"] + by auto + qed + next + from \env\_\ \\\_\ + show "arity(?\)\2#+length(env)" + using pred_mono[OF _ \arity(\)\2#+length(env)\] lt_trans[OF _ le_refl] + by (auto simp add:nat_simp_union) + next + from \\\_\ + show "?\\formula" by simp + qed + moreover from this + have "{v\?big. \x\c. M[G], [x,v] @ env \ \} = {v\?big. M[G], [v] @ env @ [c] \ ?\}" + using transitivity_MG[OF _ GenExtI, OF _ \?big_name\M\] + by simp + moreover from calculation and \env\_\ \c\_\ \?big\M[G]\ + have "{v\?big. 
M[G] , [v] @ env @ [c] \ ?\} \ M[G]" + using Collect_sats_in_MG by auto + ultimately + show ?thesis by simp +qed + +theorem strong_replacement_in_MG: + assumes + "\\formula" and "arity(\) \ 2 #+ length(env)" "env \ list(M[G])" + shows + "strong_replacement(##M[G],\x v. sats(M[G],\,[x,v] @ env))" +proof - + let ?R="\x y . M[G], [x, y] @ env \ \" + { + fix A + let ?Y="{v . x \ A, v\M[G] \ ?R(x,v)}" + assume 1: "(##M[G])(A)" + "\x[##M[G]]. x \ A \ (\y[##M[G]]. \z[##M[G]]. ?R(x,y) \ ?R(x,z) \ y = z)" + then + have "univalent(##M[G], A, ?R)" "A\M[G]" + unfolding univalent_def by simp_all + with assms \A\_\ + have "(##M[G])(?Y)" + using Replace_sats_in_MG by auto + have "b \ ?Y \ (\x[##M[G]]. x \ A \ ?R(x,b))" if "(##M[G])(b)" for b + proof(rule) + from \A\_\ + show "\x[##M[G]]. x \ A \ ?R(x,b)" if "b \ ?Y" + using that transitivity_MG by auto + next + show "b \ ?Y" if "\x[##M[G]]. x \ A \ ?R(x,b)" + proof - + from \(##M[G])(b)\ + have "b\M[G]" by simp + with that + obtain x where "(##M[G])(x)" "x\A" "b\M[G] \ ?R(x,b)" + by blast + moreover from this 1 \(##M[G])(b)\ + have "x\M[G]" "z\M[G] \ ?R(x,z) \ b = z" for z + by auto + ultimately + show ?thesis + using ReplaceI[of "\ x y. y\M[G] \ ?R(x,y)"] by auto + qed + qed + then + have "\b[##M[G]]. b \ ?Y \ (\x[##M[G]]. x \ A \ ?R(x,b))" + by simp + with \(##M[G])(?Y)\ + have " (\Y[##M[G]]. \b[##M[G]]. b \ Y \ (\x[##M[G]]. x \ A \ ?R(x,b)))" + by auto + } + then show ?thesis unfolding strong_replacement_def univalent_def + by auto +qed + +end (* context G_generic *) + +end \ No newline at end of file diff --git a/thys/Forcing/Separation_Axiom.thy b/thys/Forcing/Separation_Axiom.thy new file mode 100644 --- /dev/null +++ b/thys/Forcing/Separation_Axiom.thy @@ -0,0 +1,400 @@ +section\The Axiom of Separation in $M[G]$\ +theory Separation_Axiom + imports Forcing_Theorems Separation_Rename +begin + +context G_generic +begin + +lemma map_val : + assumes "env\list(M[G])" + shows "\nenv\list(M). env = map(val(G),nenv)" + using assms + proof(induct env) + case Nil + have "map(val(G),Nil) = Nil" by simp + then show ?case by force + next + case (Cons a l) + then obtain a' l' where + "l' \ list(M)" "l=map(val(G),l')" "a = val(G,a')" + "Cons(a,l) = map(val(G),Cons(a',l'))" "Cons(a',l') \ list(M)" + using \a\M[G]\ GenExtD + by force + then show ?case by force +qed + + +lemma Collect_sats_in_MG : + assumes + "c\M[G]" + "\ \ formula" "env\list(M[G])" "arity(\) \ 1 #+ length(env)" + shows + "{x\c. 
(M[G], [x] @ env \ \)}\ M[G]" +proof - + from \c\M[G]\ + obtain \ where "\ \ M" "val(G, \) = c" + using GenExt_def by auto + let ?\="And(Member(0,1 #+ length(env)),\)" and ?Pl1="[P,leq,one]" + let ?new_form="sep_ren(length(env),forces(?\))" + let ?\="Exists(Exists(And(pair_fm(0,1,2),?new_form)))" + note phi = \\\formula\ \arity(\) \ 1 #+ length(env)\ + then + have "?\\formula" by simp + with \env\_\ phi + have "arity(?\) \ 2#+length(env) " + using nat_simp_union leI by simp + with \env\list(_)\ phi + have "arity(forces(?\)) \ 6 #+ length(env)" + using arity_forces_le by simp + then + have "arity(forces(?\)) \ 7 #+ length(env)" + using nat_simp_union arity_forces leI by simp + with \arity(forces(?\)) \7 #+ _\ \env \ _\ \\ \ formula\ + have "arity(?new_form) \ 7 #+ length(env)" "?new_form \ formula" + using arity_rensep[OF definability[of "?\"]] definability[of "?\"] type_rensep + by auto + then + have "pred(pred(arity(?new_form))) \ 5 #+ length(env)" "?\\formula" + unfolding pair_fm_def upair_fm_def + using nat_simp_union length_type[OF \env\list(M[G])\] + pred_mono[OF _ pred_mono[OF _ \arity(?new_form) \ _\]] + by auto + with \arity(?new_form) \ _\ \?new_form \ formula\ + have "arity(?\) \ 5 #+ length(env)" + unfolding pair_fm_def upair_fm_def + using nat_simp_union arity_forces + by auto + from \\\formula\ + have "forces(?\) \ formula" + using definability by simp + from \\\M\ P_in_M + have "domain(\)\M" "domain(\) \ P \ M" + by (simp_all flip:setclass_iff) + from \env \ _\ + obtain nenv where "nenv\list(M)" "env = map(val(G),nenv)" "length(nenv) = length(env)" + using map_val by auto + from \arity(\) \ _\ \env\_\ \\\_\ + have "arity(\) \ 2#+ length(env)" + using le_trans[OF \arity(\)\_\] add_le_mono[of 1 2,OF _ le_refl] + by auto + with \nenv\_\ \env\_\ \\\M\ \\\_\ \length(nenv) = length(env)\ + have "arity(?\) \ length([\] @ nenv @ [\])" for \ + using nat_union_abs2[OF _ _ \arity(\) \ 2#+ _\] nat_simp_union + by simp + note in_M = \\\M\ \domain(\) \ P \ M\ P_in_M one_in_M leq_in_M + { + fix u + assume "u \ domain(\) \ P" "u \ M" + with in_M \?new_form \ formula\ \?\\formula\ \nenv \ _\ + have Eq1: "(M, [u] @ ?Pl1 @ [\] @ nenv \ ?\) \ + (\\\M. \p\P. u =\\,p\ \ + M, [\,p,u]@?Pl1@[\] @ nenv \ ?new_form)" + by (auto simp add: transitivity) + have Eq3: "\\M \ p\P \ + (M, [\,p,u]@?Pl1@[\]@nenv \ ?new_form) \ + (\F. M_generic(F) \ p \ F \ (M[F], map(val(F), [\] @ nenv@[\]) \ ?\))" + for \ p + proof - + fix p \ + assume "\ \ M" "p\P" + then + have "p\M" using P_in_M by (simp add: transitivity) + note in_M' = in_M \\ \ M\ \p\M\ \u \ domain(\) \ P\ \u \ M\ \nenv\_\ + then + have "[\,u] \ list(M)" by simp + let ?env="[p]@?Pl1@[\] @ nenv @ [\,u]" + let ?new_env=" [\,p,u,P,leq,one,\] @ nenv" + let ?\="Exists(Exists(And(pair_fm(0,1,2),?new_form)))" + have "[\, p, u, \, leq, one, \] \ list(M)" + using in_M' by simp + have "?\ \ formula" "forces(?\)\ formula" + using phi by simp_all + from in_M' + have "?Pl1 \ list(M)" by simp + from in_M' have "?env \ list(M)" by simp + have Eq1': "?new_env \ list(M)" using in_M' by simp + then + have "(M, [\,p,u]@?Pl1@[\] @ nenv \ ?new_form) \ (M, ?new_env \ ?new_form)" + by simp + from in_M' \env \ _\ Eq1' \length(nenv) = length(env)\ + \arity(forces(?\)) \ 7 #+ length(env)\ \forces(?\)\ formula\ + \[\, p, u, \, leq, one, \] \ list(M)\ + have "... \ M, ?env \ forces(?\)" + using sepren_action[of "forces(?\)" "nenv",OF _ _ \nenv\list(M)\] + by simp + also from in_M' + have "... 
\ M, ([p,P, leq, one,\]@nenv@ [\])@[u] \ forces(?\)" + using app_assoc by simp + also + from in_M' \env\_\ phi \length(nenv) = length(env)\ + \arity(forces(?\)) \ 6 #+ length(env)\ \forces(?\)\formula\ + have "... \ M, [p,P, leq, one,\]@ nenv @ [\] \ forces(?\)" + by (rule_tac arity_sats_iff,auto) + also + from \arity(forces(?\)) \ 6 #+ length(env)\ \forces(?\)\formula\ in_M' phi + have " ... \ (\F. M_generic(F) \ p \ F \ + M[F], map(val(F), [\] @ nenv @ [\]) \ ?\)" + using definition_of_forcing + proof (intro iffI) + assume a1: "M, [p,P, leq, one,\] @ nenv @ [\] \ forces(?\)" + note definition_of_forcing \arity(\)\ 1#+_\ + with \nenv\_\ \arity(?\) \ length([\] @ nenv @ [\])\ \env\_\ + have "p \ P \ ?\\formula \ [\,\] \ list(M) \ + M, [p,P, leq, one] @ [\]@ nenv@[\] \ forces(?\) \ + \G. M_generic(G) \ p \ G \ M[G], map(val(G), [\] @ nenv @[\]) \ ?\" + by auto + then + show "\F. M_generic(F) \ p \ F \ + M[F], map(val(F), [\] @ nenv @ [\]) \ ?\" + using \?\\formula\ \p\P\ a1 \\\M\ \\\M\ by simp + next + assume "\F. M_generic(F) \ p \ F \ + M[F], map(val(F), [\] @ nenv @[\]) \ ?\" + with definition_of_forcing [THEN iffD2] \arity(?\) \ length([\] @ nenv @ [\])\ + show "M, [p, P, leq, one,\] @ nenv @ [\] \ forces(?\)" + using \?\\formula\ \p\P\ in_M' + by auto + qed + finally + show "(M, [\,p,u]@?Pl1@[\]@nenv \ ?new_form) \ (\F. M_generic(F) \ p \ F \ + M[F], map(val(F), [\] @ nenv @ [\]) \ ?\)" + by simp + qed + with Eq1 + have "(M, [u] @ ?Pl1 @ [\] @ nenv \ ?\) \ + (\\\M. \p\P. u =\\,p\ \ + (\F. M_generic(F) \ p \ F \ M[F], map(val(F), [\] @ nenv @ [\]) \ ?\))" + by auto + } + then + have Equivalence: "u\ domain(\) \ P \ u \ M \ + (M, [u] @ ?Pl1 @ [\] @ nenv \ ?\) \ + (\\\M. \p\P. u =\\,p\ \ + (\F. M_generic(F) \ p \ F \ M[F], map(val(F), [\] @ nenv @[\]) \ ?\))" + for u + by simp + moreover from \env = _\ \\\M\ \nenv\list(M)\ + have map_nenv:"map(val(G), nenv@[\]) = env @ [val(G,\)]" + using map_app_distrib append1_eq_iff by auto + ultimately + have aux:"(\\\M. \p\P. u =\\,p\ \ (p\G \ M[G], [val(G,\)] @ env @ [val(G,\)] \ ?\))" + (is "(\\\M. \p\P. _ ( _ \ _, ?vals(\) \ _))") + if "u \ domain(\) \ P" "u \ M" "M, [u]@ ?Pl1 @[\] @ nenv \ ?\" for u + using Equivalence[THEN iffD1, OF that] generic by force + moreover + have "\\M \ val(G,\)\M[G]" for \ + using GenExt_def by auto + moreover + have "\\ M \ [val(G, \)] @ env @ [val(G, \)] \ list(M[G])" for \ + proof - + from \\\M\ + have "val(G,\)\ M[G]" using GenExtI by simp + moreover + assume "\ \ M" + moreover + note \env \ list(M[G])\ + ultimately + show ?thesis + using GenExtI by simp + qed + ultimately + have "(\\\M. \p\P. u=\\,p\ \ (p\G \ val(G,\)\nth(1 #+ length(env),[val(G, \)] @ env @ [val(G, \)]) + \ M[G], ?vals(\) \ \))" + if "u \ domain(\) \ P" "u \ M" "M, [u] @ ?Pl1 @[\] @ nenv \ ?\" for u + using aux[OF that] by simp + moreover from \env \ _\ \\\M\ + have nth:"nth(1 #+ length(env),[val(G, \)] @ env @ [val(G, \)]) = val(G,\)" + if "\\M" for \ + using nth_concat[of "val(G,\)" "val(G,\)" "M[G]"] using that GenExtI by simp + ultimately + have "(\\\M. \p\P. u=\\,p\ \ (p\G \ val(G,\)\val(G,\) \ M[G], ?vals(\) \ \))" + if "u \ domain(\) \ P" "u \ M" "M, [u] @ ?Pl1 @[\] @ nenv \ ?\" for u + using that \\\M\ \env \ _\ by simp + with \domain(\)\P\M\ + have "\u\domain(\)\P . (M, [u] @ ?Pl1 @[\] @ nenv \ ?\) \ (\\\M. \p\P. u =\\,p\ \ + (p \ G \ val(G, \)\val(G, \) \ M[G], ?vals(\) \ \))" + by (simp add:transitivity) + then + have "{u\domain(\)\P . (M,[u] @ ?Pl1 @[\] @ nenv \ ?\) } \ + {u\domain(\)\P . \\\M. \p\P. 
u =\\,p\ \ + (p \ G \ val(G, \)\val(G, \) \ (M[G], ?vals(\) \ \))}" + (is "?n\?m") + by auto + with val_mono + have first_incl: "val(G,?n) \ val(G,?m)" + by simp + note \val(G,\) = c\ (* from the assumptions *) + with \?\\formula\ \arity(?\) \ _\ in_M \nenv \ _\ \env \ _\ \length(nenv) = _\ + have "?n\M" + using separation_ax leI separation_iff by auto + from generic + have "filter(G)" "G\P" + unfolding M_generic_def filter_def by simp_all + from \val(G,\) = c\ + have "val(G,?m) = + {val(G,t) .. t\domain(\) , \q\P . + (\\\M. \p\P. \t,q\ = \\, p\ \ + (p \ G \ val(G, \) \ c \ (M[G], [val(G, \)] @ env @ [c] \ \)) \ q \ G)}" + using val_of_name by auto + also + have "... = {val(G,t) .. t\domain(\) , \q\P. + val(G, t) \ c \ (M[G], [val(G, t)] @ env @ [c] \ \) \ q \ G}" + proof - + + have "t\M \ + (\q\P. (\\\M. \p\P. \t,q\ = \\, p\ \ + (p \ G \ val(G, \) \ c \ (M[G], [val(G, \)] @ env @ [c] \ \)) \ q \ G)) + \ + (\q\P. val(G, t) \ c \ ( M[G], [val(G, t)]@env@[c]\ \ ) \ q \ G)" for t + by auto + then show ?thesis using \domain(\)\M\ by (auto simp add:transitivity) + qed + also + have "... = {x .. x\c , \q\P. x \ c \ (M[G], [x] @ env @ [c] \ \) \ q \ G}" + proof + + show "... \ {x .. x\c , \q\P. x \ c \ (M[G], [x] @ env @ [c] \ \) \ q \ G}" + by auto + next + (* Now we show the other inclusion: + {x .. x\c , \q\P. x \ c \ (M[G], [x, w, c] \ \) \ q \ G} + \ + {val(G,t)..t\domain(\),\q\P.val(G,t)\c\(M[G], [val(G,t),w] \ \)\q\G} + *) + { + fix x + assume "x\{x .. x\c , \q\P. x \ c \ (M[G], [x] @ env @ [c] \ \) \ q \ G}" + then + have "\q\P. x \ c \ (M[G], [x] @ env @ [c] \ \) \ q \ G" + by simp + with \val(G,\) = c\ + have "\q\P. \t\domain(\). val(G,t) =x \ (M[G], [val(G,t)] @ env @ [c] \ \) \ q \ G" + using Sep_and_Replace elem_of_val by auto + } + then + show " {x .. x\c , \q\P. x \ c \ (M[G], [x] @ env @ [c] \ \) \ q \ G} \ ..." + using SepReplace_iff by force + qed + also + have " ... = {x\c. (M[G], [x] @ env @ [c] \ \)}" + using \G\P\ G_nonempty by force + finally + have val_m: "val(G,?m) = {x\c. (M[G], [x] @ env @ [c] \ \)}" by simp + have "val(G,?m) \ val(G,?n)" + proof + fix x + assume "x \ val(G,?m)" + with val_m + have Eq4: "x \ {x\c. (M[G], [x] @ env @ [c] \ \)}" by simp + with \val(G,\) = c\ + have "x \ val(G,\)" by simp + then + have "\\. \q\G. \\,q\\\ \ val(G,\) =x" + using elem_of_val_pair by auto + then obtain \ q where + "\\,q\\\" "q\G" "val(G,\)=x" by auto + from \\\,q\\\\ + have "\\M" + using domain_trans[OF trans_M \\\_\] by auto + with \\\M\ \nenv \ _\ \env = _\ + have "[val(G,\), val(G,\)] @ env \list(M[G])" + using GenExt_def by auto + with Eq4 \val(G,\)=x\ \val(G,\) = c\ \x \ val(G,\)\ nth \\\M\ + have Eq5: "M[G], [val(G,\)] @ env @[val(G,\)] \ And(Member(0,1 #+ length(env)),\)" + by auto + (* Recall ?\ = And(Member(0,1 #+ length(env)),\) *) + with \\\M\ \\\M\ Eq5 \M_generic(G)\ \\\formula\ \nenv \ _ \ \env = _ \ map_nenv + \arity(?\) \ length([\] @ nenv @ [\])\ + have "(\r\G. 
M, [r,P,leq,one,\] @ nenv @[\] \ forces(?\))" + using truth_lemma + by auto + then obtain r where (* I can't "obtain" this directly *) + "r\G" "M, [r,P,leq,one,\] @ nenv @ [\] \ forces(?\)" by auto + with \filter(G)\ and \q\G\ obtain p where + "p\G" "p\q" "p\r" + unfolding filter_def compat_in_def by force + with \r\G\ \q\G\ \G\P\ + have "p\P" "r\P" "q\P" "p\M" + using P_in_M by (auto simp add:transitivity) + with \\\formula\ \\\M\ \\\M\ \p\r\ \nenv \ _\ \arity(?\) \ length([\] @ nenv @ [\])\ + \M, [r,P,leq,one,\] @ nenv @ [\] \ forces(?\)\ \env\_\ + have "M, [p,P,leq,one,\] @ nenv @ [\] \ forces(?\)" + using strengthening_lemma + by simp + with \p\P\ \\\formula\ \\\M\ \\\M\ \nenv \ _\ \arity(?\) \ length([\] @ nenv @ [\])\ + have "\F. M_generic(F) \ p \ F \ + M[F], map(val(F), [\] @ nenv @[\]) \ ?\" + using definition_of_forcing + by simp + with \p\P\ \\\M\ + have Eq6: "\\'\M. \p'\P. \\,p\ = <\',p'> \ (\F. M_generic(F) \ p' \ F \ + M[F], map(val(F), [\'] @ nenv @ [\]) \ ?\)" by auto + from \\\M\ \\\,q\\\\ + have "\\,q\ \ M" by (simp add:transitivity) + from \\\,q\\\\ \\\M\ \p\P\ \p\M\ + have "\\,p\\M" "\\,p\\domain(\)\P" + using tuples_in_M by auto + with \\\M\ Eq6 \p\P\ + have "M, [\\,p\] @ ?Pl1 @ [\] @ nenv \ ?\" + using Equivalence by auto + with \\\,p\\domain(\)\P\ + have "\\,p\\?n" by simp + with \p\G\ \p\P\ + have "val(G,\)\val(G,?n)" + using val_of_elem[of \ p] by simp + with \val(G,\)=x\ + show "x\val(G,?n)" by simp + qed (* proof of "val(G,?m) \ val(G,?n)" *) + with val_m first_incl + have "val(G,?n) = {x\c. (M[G], [x] @ env @ [c] \ \)}" by auto + also + have " ... = {x\c. (M[G], [x] @ env \ \)}" + proof - + { + fix x + assume "x\c" + moreover from assms + have "c\M[G]" + unfolding GenExt_def by auto + moreover from this and \x\c\ + have "x\M[G]" + using transitivity_MG + by simp + ultimately + have "(M[G], ([x] @ env) @[c] \ \) \ (M[G], [x] @ env \ \)" + using phi \env \ _\ by (rule_tac arity_sats_iff, simp_all) (* Enhance this *) + } + then show ?thesis by auto + qed + finally + show "{x\c. (M[G], [x] @ env \ \)}\ M[G]" + using \?n\M\ GenExt_def by force +qed + +theorem separation_in_MG: + assumes + "\\formula" and "arity(\) \ 1 #+ length(env)" and "env\list(M[G])" + shows + "separation(##M[G],\x. (M[G], [x] @ env \ \))" +proof - + { + fix c + assume "c\M[G]" + moreover from \env \ _\ + obtain nenv where "nenv\list(M)" + "env = map(val(G),nenv)" "length(env) = length(nenv)" + using GenExt_def map_val[of env] by auto + moreover note \\ \ _\ \arity(\) \ _\ \env \ _\ + ultimately + have Eq1: "{x\c. 
(M[G], [x] @ env \ \)} \ M[G]" + using Collect_sats_in_MG by auto + } + then + show ?thesis + using separation_iff rev_bexI unfolding is_Collect_def by force +qed + +end (* context: G_generic *) + +end \ No newline at end of file diff --git a/thys/Forcing/Separation_Rename.thy b/thys/Forcing/Separation_Rename.thy new file mode 100644 --- /dev/null +++ b/thys/Forcing/Separation_Rename.thy @@ -0,0 +1,519 @@ +section\Auxiliary renamings for Separation\ +theory Separation_Rename + imports Interface Renaming +begin + +lemmas apply_fun = apply_iff[THEN iffD1] + +lemma nth_concat : "[p,t] \ list(A) \ env\ list(A) \ nth(1 #+ length(env),[p]@ env @ [t]) = t" + by(auto simp add:nth_append) + +lemma nth_concat2 : "env\ list(A) \ nth(length(env),env @ [p,t]) = p" + by(auto simp add:nth_append) + +lemma nth_concat3 : "env\ list(A) \ u = nth(succ(length(env)), env @ [pi, u])" + by(auto simp add:nth_append) + +definition + sep_var :: "i \ i" where + "sep_var(n) \ {\0,1\,\1,3\,\2,4\,\3,5\,\4,0\,\5#+n,6\,\6#+n,2\}" + +definition + sep_env :: "i \ i" where + "sep_env(n) \ \ i \ (5#+n)-5 . i#+2" + +definition weak :: "[i, i] \ i" where + "weak(n,m) \ {i#+m . i \ n}" + +lemma weakD : + assumes "n \ nat" "k\nat" "x \ weak(n,k)" + shows "\ i \ n . x = i#+k" + using assms unfolding weak_def by blast + +lemma weak_equal : + assumes "n\nat" "m\nat" + shows "weak(n,m) = (m#+n) - m" +proof - + have "weak(n,m)\(m#+n)-m" + proof(intro subsetI) + fix x + assume "x\weak(n,m)" + with assms + obtain i where + "i\n" "x=i#+m" + using weakD by blast + then + have "m\i#+m" "im\nat\ \n\nat\ ltI[OF \i\n\] by simp_all + then + have "\i#+mn\nat\ \i\n\] \m\nat\ by simp + with \x=i#+m\ + have "x\m" + using ltI \m\nat\ by auto + moreover + from assms \x=i#+m\ \i + have "xi \n\nat\] by simp + ultimately + show "x\(m#+n)-m" + using ltD DiffI by simp + qed + moreover + have "(m#+n)-m\weak(n,m)" + proof (intro subsetI) + fix x + assume "x\(m#+n)-m" + then + have "x\m#+n" "x\m" + using DiffD1[of x "n#+m" m] DiffD2[of x "n#+m" m] by simp_all + then + have "xnat" + using ltI in_n_in_nat[OF add_type[of m n]] by simp_all + then + obtain i where + "m#+n = succ(x#+i)" + using less_iff_succ_add[OF \x\nat\,of "m#+n"] add_type by auto + then + have "x#+ix\m\ + have "\xm\nat\ \x\nat\ + have "m\x" using not_lt_iff_le by simp + with \x \n\nat\ + have "x#-mx\nat\ _ \m\nat\] by simp + have "m#+n#-m = n" using diff_cancel2 \m\nat\ \n\nat\ by simp + with \x#-m \x\nat\ + have "x#-m \ n" "x=x#-m#+m" + using ltD add_diff_inverse2[OF \m\x\] by simp_all + then + show "x\weak(n,m)" + unfolding weak_def by auto + qed + ultimately + show ?thesis by auto +qed + +lemma weak_zero: + shows "weak(0,n) = 0" + unfolding weak_def by simp + +lemma weakening_diff : + assumes "n \ nat" + shows "weak(n,7) - weak(n,5) \ {5#+n, 6#+n}" + unfolding weak_def using assms +proof(auto) + { + fix i + assume "i\n" "succ(succ(natify(i)))\n" "\w\n. succ(succ(natify(i))) \ natify(w)" + then + have "in\nat\ by simp + from \n\nat\ \i\n\ \succ(succ(natify(i)))\n\ + have "i\nat" "succ(succ(i))\n" using in_n_in_nat by simp_all + from \i + have "succ(i)\n" using succ_leI by simp + with \n\nat\ + consider (a) "succ(i) = n" | (b) "succ(i) < n" + using leD by auto + then have "succ(i) = n" + proof cases + case a + then show ?thesis . 
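+      (* Illustrative instance of the inclusion being proved (the concrete n is
+         for intuition only): for n = 2, weak(2,7) = {7,8} and weak(2,5) = {5,6},
+         so the difference is {7,8} = {5 #+ 2, 6 #+ 2}, matching the bound
+         {5#+n, 6#+n}. *)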
+ next + case b + then + have "succ(succ(i))\n" using succ_leI by simp + with \n\nat\ + consider (a) "succ(succ(i)) = n" | (b) "succ(succ(i)) < n" + using leD by auto + then have "succ(i) = n" + proof cases + case a + with \succ(succ(i))\n\ show ?thesis by blast + next + case b + then + have "succ(succ(i))\n" using ltD by simp + with \i\nat\ + have "succ(succ(natify(i))) \ natify(succ(succ(i)))" + using \\w\n. succ(succ(natify(i))) \ natify(w)\ by auto + then + have "False" using \i\nat\ by auto + then show ?thesis by blast + qed + then show ?thesis . + qed + with \i\nat\ have "succ(natify(i)) = n" by simp + } + then + show "n \ nat \ + succ(succ(natify(y))) \ n \ + \x\n. succ(succ(natify(y))) \ natify(x) \ + y \ n \ succ(natify(y)) = n" for y + by blast +qed + +lemma in_add_del : + assumes "x\j#+n" "n\nat" "j\nat" + shows "x < j \ x \ weak(n,j)" +proof (cases "xnat" "j#+n\nat" + using in_n_in_nat[OF _ \x\j#+n\] assms by simp_all + then + have "j \ x" "x < j#+n" + using not_lt_iff_le False \j\nat\ \n\nat\ ltI[OF \x\j#+n\] by auto + then + have "x#-j < (j #+ n) #- j" "x = j #+ (x #-j)" + using diff_mono \x\nat\ \j#+n\nat\ \j\nat\ \n\nat\ + add_diff_inverse[OF \j\x\] by simp_all + then + have "x#-j < n" "x = (x #-j ) #+ j" + using diff_add_inverse \n\nat\ add_commute by simp_all + then + have "x#-j \n" using ltD by simp + then + have "x \ weak(n,j)" + unfolding weak_def + using \x= (x#-j) #+j\ RepFunI[OF \x#-j\n\] add_commute by force + then show ?thesis .. +qed + + +lemma sep_env_action: + assumes + "[t,p,u,P,leq,o,pi] \ list(M)" + "env \ list(M)" + shows "\ i . i \ weak(length(env),5) \ + nth(sep_env(length(env))`i,[t,p,u,P,leq,o,pi]@env) = nth(i,[p,P,leq,o,t] @ env @ [pi,u])" +proof - + from assms + have A: "5#+length(env)\nat" "[p, P, leq, o, t] \list(M)" + by simp_all + let ?f="sep_env(length(env))" + have EQ: "weak(length(env),5) = 5#+length(env) - 5" + using weak_equal length_type[OF \env\list(M)\] by simp + let ?tgt="[t,p,u,P,leq,o,pi]@env" + let ?src="[p,P,leq,o,t] @ env @ [pi,u]" + have "nth(?f`i,[t,p,u,P,leq,o,pi]@env) = nth(i,[p,P,leq,o,t] @ env @ [pi,u])" + if "i \ (5#+length(env)-5)" for i + proof - + from that + have 2: "i \ 5#+length(env)" "i \ 5" "i \ nat" "i#-5\nat" "i#+2\nat" + using in_n_in_nat[OF \5#+length(env)\nat\] by simp_all + then + have 3: "\ i < 5" using ltD by force + then + have "5 \ i" "2 \ 5" + using not_lt_iff_le \i\nat\ by simp_all + then have "2 \ i" using le_trans[OF \2\5\] by simp + from A \i \ 5#+length(env)\ + have "i < 5#+length(env)" using ltI by simp + with \i\nat\ \2\i\ A + have C:"i#+2 < 7#+length(env)" by simp + with that + have B: "?f`i = i#+2" unfolding sep_env_def by simp + from 3 assms(1) \i\nat\ + have "\ i#+2 < 7" using not_lt_iff_le add_le_mono by simp + from \i < 5#+length(env)\ 3 \i\nat\ + have "i#-5 < 5#+length(env) #- 5" + using diff_mono[of i "5#+length(env)" 5,OF _ _ _ \i < 5#+length(env)\] + not_lt_iff_le[THEN iffD1] by force + with assms(2) + have "i#-5 < length(env)" using diff_add_inverse length_type by simp + have "nth(i,?src) =nth(i#-5,env@[pi,u])" + using nth_append[OF A(2) \i\nat\] 3 by simp + also + have "... = nth(i#-5, env)" + using nth_append[OF \env \list(M)\ \i#-5\nat\] \i#-5 < length(env)\ by simp + also + have "... 
= nth(i#+2, ?tgt)" + using nth_append[OF assms(1) \i#+2\nat\] \\ i#+2 <7\ by simp + ultimately + have "nth(i,?src) = nth(?f`i,?tgt)" + using B by simp + then show ?thesis using that by simp + qed + then show ?thesis using EQ by force +qed + +lemma sep_env_type : + assumes "n \ nat" + shows "sep_env(n) : (5#+n)-5 \ (7#+n)-7" +proof - + let ?h="sep_env(n)" + from \n\nat\ + have "(5#+n)#+2 = 7#+n" "7#+n\nat" "5#+n\nat" by simp_all + have + D: "sep_env(n)`x \ (7#+n)-7" if "x \ (5#+n)-5" for x + proof - + from \x\5#+n-5\ + have "?h`x = x#+2" "x<5#+n" "x\nat" + unfolding sep_env_def using ltI in_n_in_nat[OF \5#+n\nat\] by simp_all + then + have "x#+2 < 7#+n" by simp + then + have "x#+2 \ 7#+n" using ltD by simp + from \x\5#+n-5\ + have "x\5" by simp + then have "\x<5" using ltD by blast + then have "5\x" using not_lt_iff_le \x\nat\ by simp + then have "7\x#+2" using add_le_mono \x\nat\ by simp + then have "\x#+2<7" using not_lt_iff_le \x\nat\ by simp + then have "x#+2 \ 7" using ltI \x\nat\ by force + with \x#+2 \ 7#+n\ show ?thesis using \?h`x = x#+2\ DiffI by simp + qed + then show ?thesis unfolding sep_env_def using lam_type by simp +qed + +lemma sep_var_fin_type : + assumes "n \ nat" + shows "sep_var(n) : 7#+n -||> 7#+n" + unfolding sep_var_def + using consI ltD emptyI by force + +lemma sep_var_domain : + assumes "n \ nat" + shows "domain(sep_var(n)) = 7#+n - weak(n,5)" +proof - + let ?A="weak(n,5)" + have A:"domain(sep_var(n)) \ (7#+n)" + unfolding sep_var_def + by(auto simp add: le_natE) + have C: "x=5#+n \ x=6#+n \ x \ 4" if "x\domain(sep_var(n))" for x + using that unfolding sep_var_def by auto + have D : "x7#+n" for x + using that \n\nat\ ltI by simp + have "\ 5#+n < 5#+n" using \n\nat\ lt_irrefl[of _ False] by force + have "\ 6#+n < 5#+n" using \n\nat\ by force + have R: "x < 5#+n" if "x\?A" for x + proof - + from that + obtain i where + "in\nat\ RepFun_iff by force + with \n\nat\ + have "5#+i < 5#+n" using add_lt_mono2 by simp + with \x=5#+i\ + show "x < 5#+n" by simp + qed + then + have 1:"x\?A" if "\x <5#+n" for x using that by blast + have "5#+n \ ?A" "6#+n\?A" + proof - + show "5#+n \ ?A" using 1 \\5#+n<5#+n\ by blast + with 1 show "6#+n \ ?A" using \\6#+n<5#+n\ by blast + qed + then + have E:"x\?A" if "x\domain(sep_var(n))" for x + unfolding weak_def + using C that by force + then + have F: "domain(sep_var(n)) \ 7#+n - ?A" using A by auto + from assms + have "x<7 \ x\weak(n,7)" if "x\7#+n" for x + using in_add_del[OF \x\7#+n\] by simp + moreover + { + fix x + assume asm:"x\7#+n" "x\?A" "x\weak(n,7)" + then + have "x\domain(sep_var(n))" + proof - + from \n\nat\ + have "weak(n,7)-weak(n,5)\{n#+5,n#+6}" + using weakening_diff by simp + with \x\?A\ asm + have "x\{n#+5,n#+6}" using subsetD DiffI by blast + then + show ?thesis unfolding sep_var_def by simp + qed + } + moreover + { + fix x + assume asm:"x\7#+n" "x\?A" "x<7" + then have "x\domain(sep_var(n))" + proof (cases "2 \ n") + case True + moreover + have "0n\nat\ \2\n\] lt_imp_0_lt by auto + ultimately + have "x<5" + using \x<7\ \x\?A\ \n\nat\ in_n_in_nat + unfolding weak_def + by (clarsimp simp add:not_lt_iff_le, auto simp add:lt_def) + then + show ?thesis unfolding sep_var_def + by (clarsimp simp add:not_lt_iff_le, auto simp add:lt_def) + next + case False + then + show ?thesis + proof (cases "n=0") + case True + then show ?thesis + unfolding sep_var_def using ltD asm \n\nat\ by auto + next + case False + then + have "n < 2" using \n\nat\ not_lt_iff_le \\ 2 \ n\ by force + then + have "\ n <1" using \n\0\ by simp + then + have 
"n=1" using not_lt_iff_le \n<2\ le_iff by auto + then show ?thesis + using \x\?A\ + unfolding weak_def sep_var_def + using ltD asm \n\nat\ by force + qed + qed + } + ultimately + have "w\domain(sep_var(n))" if "w\ 7#+n - ?A" for w + using that by blast + then + have "7#+n - ?A \ domain(sep_var(n))" by blast + with F + show ?thesis by auto +qed + +lemma sep_var_type : + assumes "n \ nat" + shows "sep_var(n) : (7#+n)-weak(n,5) \ 7#+n" + using FiniteFun_is_fun[OF sep_var_fin_type[OF \n\nat\]] + sep_var_domain[OF \n\nat\] by simp + +lemma sep_var_action : + assumes + "[t,p,u,P,leq,o,pi] \ list(M)" + "env \ list(M)" + shows "\ i . i \ (7#+length(env)) - weak(length(env),5) \ + nth(sep_var(length(env))`i,[t,p,u,P,leq,o,pi]@env) = nth(i,[p,P,leq,o,t] @ env @ [pi,u])" + using assms +proof (subst sep_var_domain[OF length_type[OF \env\list(M)\],symmetric],auto) + fix i y + assume "\i, y\ \ sep_var(length(env))" + with assms + show "nth(sep_var(length(env)) ` i, + Cons(t, Cons(p, Cons(u, Cons(P, Cons(leq, Cons(o, Cons(pi, env)))))))) = + nth(i, Cons(p, Cons(P, Cons(leq, Cons(o, Cons(t, env @ [pi, u]))))))" + using apply_fun[OF sep_var_type] assms + unfolding sep_var_def + using nth_concat2[OF \env\list(M)\] nth_concat3[OF \env\list(M)\,symmetric] + by force + qed + +definition + rensep :: "i \ i" where + "rensep(n) \ union_fun(sep_var(n),sep_env(n),7#+n-weak(n,5),weak(n,5))" + +lemma rensep_aux : + assumes "n\nat" + shows "(7#+n-weak(n,5)) \ weak(n,5) = 7#+n" "7#+n \ ( 7 #+ n - 7) = 7#+n" +proof - + from \n\nat\ + have "weak(n,5) = n#+5-5" + using weak_equal by simp + with \n\nat\ + show "(7#+n-weak(n,5)) \ weak(n,5) = 7#+n" "7#+n \ ( 7 #+ n - 7) = 7#+n" + using Diff_partition le_imp_subset by auto +qed + +lemma rensep_type : + assumes "n\nat" + shows "rensep(n) \ 7#+n \ 7#+n" +proof - + from \n\nat\ + have "rensep(n) \ (7#+n-weak(n,5)) \ weak(n,5) \ 7#+n \ (7#+n - 7)" + unfolding rensep_def + using union_fun_type sep_var_type \n\nat\ sep_env_type weak_equal + by force + then + show ?thesis using rensep_aux \n\nat\ by auto +qed + +lemma rensep_action : + assumes "[t,p,u,P,leq,o,pi] @ env \ list(M)" + shows "\ i . 
i < 7#+length(env) \ nth(rensep(length(env))`i,[t,p,u,P,leq,o,pi]@env) = nth(i,[p,P,leq,o,t] @ env @ [pi,u])" +proof - + let ?tgt="[t,p,u,P,leq,o,pi]@env" + let ?src="[p,P,leq,o,t] @ env @ [pi,u]" + let ?m="7 #+ length(env) - weak(length(env),5)" + let ?p="weak(length(env),5)" + let ?f="sep_var(length(env))" + let ?g="sep_env(length(env))" + let ?n="length(env)" + from assms + have 1 : "[t,p,u,P,leq,o,pi] \ list(M)" " env \ list(M)" + "?src \ list(M)" "?tgt \ list(M)" + "7#+?n = (7#+?n-weak(?n,5)) \ weak(?n,5)" + " length(?src) = (7#+?n-weak(?n,5)) \ weak(?n,5)" + using Diff_partition le_imp_subset rensep_aux by auto + then + have "nth(i, ?src) = nth(union_fun(?f, ?g, ?m, ?p) ` i, ?tgt)" if "i < 7#+length(env)" for i + proof - + from \i<7#+?n\ + have "i \ (7#+?n-weak(?n,5)) \ weak(?n,5)" + using ltD by simp + then show ?thesis + unfolding rensep_def using + union_fun_action[OF \?src\list(M)\ \?tgt\list(M)\ \length(?src) = (7#+?n-weak(?n,5)) \ weak(?n,5)\ + sep_var_action[OF \[t,p,u,P,leq,o,pi] \ list(M)\ \env\list(M)\] + sep_env_action[OF \[t,p,u,P,leq,o,pi] \ list(M)\ \env\list(M)\] + ] that + by simp + qed + then show ?thesis unfolding rensep_def by simp +qed + +definition sep_ren :: "[i,i] \ i" where + "sep_ren(n,\) \ ren(\)`(7#+n)`(7#+n)`rensep(n)" + +lemma arity_rensep: assumes "\\formula" "env \ list(M)" + "arity(\) \ 7#+length(env)" +shows "arity(sep_ren(length(env),\)) \ 7#+length(env)" + unfolding sep_ren_def + using arity_ren rensep_type assms + by simp + +lemma type_rensep [TC]: + assumes "\\formula" "env\list(M)" + shows "sep_ren(length(env),\) \ formula" + unfolding sep_ren_def + using ren_tc rensep_type assms + by simp + +lemma sepren_action: + assumes "arity(\) \ 7 #+ length(env)" + "[t,p,u,P,leq,o,pi] \ list(M)" + "env\list(M)" + "\\formula" + shows "sats(M, sep_ren(length(env),\),[t,p,u,P,leq,o,pi] @ env) \ sats(M, \,[p,P,leq,o,t] @ env @ [pi,u])" +proof - + from assms + have 1: " [t, p, u, P, leq, o, pi] @ env \ list(M)" + "[P,leq,o,p,t] \ list(M)" + "[pi,u] \ list(M)" + by simp_all + then + have 2: "[p,P,leq,o,t] @ env @ [pi,u] \ list(M)" using app_type by simp + show ?thesis + unfolding sep_ren_def + using sats_iff_sats_ren[OF \\\formula\ + add_type[of 7 "length(env)"] + add_type[of 7 "length(env)"] + 2 1(1) + rensep_type[OF length_type[OF \env\list(M)\]] + \arity(\) \ 7 #+ length(env)\] + rensep_action[OF 1(1),rule_format,symmetric] + by simp +qed + +end \ No newline at end of file diff --git a/thys/Forcing/Succession_Poset.thy b/thys/Forcing/Succession_Poset.thy new file mode 100644 --- /dev/null +++ b/thys/Forcing/Succession_Poset.thy @@ -0,0 +1,369 @@ +section\A poset of successions\ +theory Succession_Poset + imports + Arities Proper_Extension Synthetic_Definition + Names +begin + +subsection\The set of finite binary sequences\ + +text\We implement the poset for adding one Cohen real, the set +$2^{<\omega}$ of of finite binary sequences.\ + +definition + seqspace :: "i \ i" ("_^<\" [100]100) where + "seqspace(B) \ \n\nat. (n\B)" + +lemma seqspaceI[intro]: "n\nat \ f:n\B \ f\seqspace(B)" + unfolding seqspace_def by blast + +lemma seqspaceD[dest]: "f\seqspace(B) \ \n\nat. f:n\B" + unfolding seqspace_def by blast + +lemma seqspace_type: + "f \ B^<\ \ \n\nat. f:n\B" + unfolding seqspace_def by auto + +schematic_goal seqspace_fm_auto: + assumes + "nth(i,env) = n" "nth(j,env) = z" "nth(h,env) = B" + "i \ nat" "j \ nat" "h\nat" "env \ list(A)" + shows + "(\om\A. 
omega(##A,om) \ n \ om \ is_funspace(##A, n, B, z)) \ (A, env \ (?sqsprp(i,j,h)))" + unfolding is_funspace_def + by (insert assms ; (rule sep_rules | simp)+) + +synthesize "seqspace_rep_fm" from_schematic seqspace_fm_auto + +locale M_seqspace = M_trancl + + assumes + seqspace_replacement: "M(B) \ strong_replacement(M,\n z. n\nat \ is_funspace(M,n,B,z))" +begin + +lemma seqspace_closed: + "M(B) \ M(B^<\)" + unfolding seqspace_def using seqspace_replacement[of B] RepFun_closed2 + by simp + +end (* M_seqspace *) + + +sublocale M_ctm \ M_seqspace "##M" +proof (unfold_locales, simp) + fix B + have "arity(seqspace_rep_fm(0,1,2)) \ 3" "seqspace_rep_fm(0,1,2)\formula" + unfolding seqspace_rep_fm_def + using arity_pair_fm arity_omega_fm arity_typed_function_fm nat_simp_union + by auto + moreover + assume "B\M" + ultimately + have "strong_replacement(##M, \x y. M, [x, y, B] \ seqspace_rep_fm(0, 1, 2))" + using replacement_ax[of "seqspace_rep_fm(0,1,2)"] + by simp + moreover + note \B\M\ + moreover from this + have "univalent(##M, A, \x y. M, [x, y, B] \ seqspace_rep_fm(0, 1, 2))" + if "A\M" for A + using that unfolding univalent_def seqspace_rep_fm_def + by (auto, blast dest:transitivity) + ultimately + have "strong_replacement(##M, \n z. \om[##M]. omega(##M,om) \ n \ om \ is_funspace(##M, n, B, z))" + using seqspace_fm_auto[of 0 "[_,_,B]" _ 1 _ 2 B M] unfolding seqspace_rep_fm_def strong_replacement_def + by simp + with \B\M\ + show "strong_replacement(##M, \n z. n \ nat \ is_funspace(##M, n, B, z))" + using M_nat by simp +qed + +definition seq_upd :: "i \ i \ i" where + "seq_upd(f,a) \ \ j \ succ(domain(f)) . if j < domain(f) then f`j else a" + +lemma seq_upd_succ_type : + assumes "n\nat" "f\n\A" "a\A" + shows "seq_upd(f,a)\ succ(n) \ A" +proof - + from assms + have equ: "domain(f) = n" using domain_of_fun by simp + { + fix j + assume "j\succ(domain(f))" + with equ \n\_\ + have "j\n" using ltI by auto + with \n\_\ + consider (lt) "j A" + proof cases + case lt + with \f\_\ + show ?thesis using apply_type ltD[OF lt] by simp + next + case eq + with \a\_\ + show ?thesis by auto + qed + } + with equ + show ?thesis + unfolding seq_upd_def + using lam_type[of "succ(domain(f))"] + by auto +qed + +lemma seq_upd_type : + assumes "f\A^<\" "a\A" + shows "seq_upd(f,a) \ A^<\" +proof - + from \f\_\ + obtain y where "y\nat" "f\y\A" + unfolding seqspace_def by blast + with \a\A\ + have "seq_upd(f,a)\succ(y)\A" + using seq_upd_succ_type by simp + with \y\_\ + show ?thesis + unfolding seqspace_def by auto +qed + +lemma seq_upd_apply_domain [simp]: + assumes "f:n\A" "n\nat" + shows "seq_upd(f,a)`n = a" + unfolding seq_upd_def using assms domain_of_fun by auto + +lemma zero_in_seqspace : + shows "0 \ A^<\" + unfolding seqspace_def + by force + +definition + seqleR :: "i \ i \ o" where + "seqleR(f,g) \ g \ f" + +definition + seqlerel :: "i \ i" where + "seqlerel(A) \ Rrel(\x y. y \ x,A^<\)" + +definition + seqle :: "i" where + "seqle \ seqlerel(2)" + +lemma seqleI[intro!]: + "\f,g\ \ 2^<\\2^<\ \ g \ f \ \f,g\ \ seqle" + unfolding seqspace_def seqle_def seqlerel_def Rrel_def + by blast + +lemma seqleD[dest!]: + "z \ seqle \ \x y. 
\x,y\ \ 2^<\\2^<\ \ y \ x \ z = \x,y\" + unfolding seqle_def seqlerel_def Rrel_def + by blast + +lemma upd_leI : + assumes "f\2^<\" "a\2" + shows "\seq_upd(f,a),f\\seqle" (is "\?f,_\\_") +proof + show " \?f, f\ \ 2^<\ \ 2^<\" + using assms seq_upd_type by auto +next + show "f \ seq_upd(f,a)" + proof + fix x + assume "x \ f" + moreover from \f \ 2^<\\ + obtain n where "n\nat" "f : n \ 2" + using seqspace_type by blast + moreover from calculation + obtain y where "y\n" "x=\y,f`y\" using Pi_memberD[of f n "\_ . 2"] + by blast + moreover from \f:n\2\ + have "domain(f) = n" using domain_of_fun by simp + ultimately + show "x \ seq_upd(f,a)" + unfolding seq_upd_def lam_def + by (auto intro:ltI) + qed +qed + +lemma preorder_on_seqle: "preorder_on(2^<\,seqle)" + unfolding preorder_on_def refl_def trans_on_def by blast + +lemma zero_seqle_max: "x\2^<\ \ \x,0\ \ seqle" + using zero_in_seqspace + by auto + +interpretation forcing_notion "2^<\" "seqle" "0" + using preorder_on_seqle zero_seqle_max zero_in_seqspace + by unfold_locales simp_all + +abbreviation SEQle :: "[i, i] \ o" (infixl "\s" 50) + where "x \s y \ Leq(x,y)" + +abbreviation SEQIncompatible :: "[i, i] \ o" (infixl "\s" 50) + where "x \s y \ Incompatible(x,y)" + +lemma seqspace_separative: + assumes "f\2^<\" + shows "seq_upd(f,0) \s seq_upd(f,1)" (is "?f \s ?g") +proof + assume "compat(?f, ?g)" + then + obtain h where "h \ 2^<\" "?f \ h" "?g \ h" + by blast + moreover from \f\_\ + obtain y where "y\nat" "f:y\2" by blast + moreover from this + have "?f: succ(y) \ 2" "?g: succ(y) \ 2" + using seq_upd_succ_type by blast+ + moreover from this + have "\y,?f`y\ \ ?f" "\y,?g`y\ \ ?g" using apply_Pair by auto + ultimately + have "\y,0\ \ h" "\y,1\ \ h" by auto + moreover from \h \ 2^<\\ + obtain n where "n\nat" "h:n\2" by blast + ultimately + show "False" + using fun_is_function[of h n "\_. 2"] + unfolding seqspace_def function_def by auto +qed + +definition is_seqleR :: "[i\o,i,i] \ o" where + "is_seqleR(Q,f,g) \ g \ f" + +definition seqleR_fm :: "i \ i" where + "seqleR_fm(fg) \ Exists(Exists(And(pair_fm(0,1,fg#+2),subset_fm(1,0))))" + +lemma type_seqleR_fm : + "fg \ nat \ seqleR_fm(fg) \ formula" + unfolding seqleR_fm_def + by simp + +lemma arity_seqleR_fm : + "fg \ nat \ arity(seqleR_fm(fg)) = succ(fg)" + unfolding seqleR_fm_def + using arity_pair_fm arity_subset_fm nat_simp_union by simp + +lemma (in M_basic) seqleR_abs: + assumes "M(f)" "M(g)" + shows "seqleR(f,g) \ is_seqleR(M,f,g)" + unfolding seqleR_def is_seqleR_def + using assms apply_abs domain_abs domain_closed[OF \M(f)\] domain_closed[OF \M(g)\] + by auto + +definition + relP :: "[i\o,[i\o,i,i]\o,i] \ o" where + "relP(M,r,xy) \ (\x[M]. \y[M]. pair(M,x,y,xy) \ r(M,x,y))" + +lemma (in M_ctm) seqleR_fm_sats : + assumes "fg\nat" "env\list(M)" + shows "sats(M,seqleR_fm(fg),env) \ relP(##M,is_seqleR,nth(fg, env))" + unfolding seqleR_fm_def is_seqleR_def relP_def + using assms trans_M sats_subset_fm pair_iff_sats + by auto + + +lemma (in M_basic) is_related_abs : + assumes "\ f g . M(f) \ M(g) \ rel(f,g) \ is_rel(M,f,g)" + shows "\z . M(z) \ relP(M,is_rel,z) \ (\x y. z = \x,y\ \ rel(x,y))" + unfolding relP_def using pair_in_M_iff assms by auto + +definition + is_RRel :: "[i\o,[i\o,i,i]\o,i,i] \ o" where + "is_RRel(M,is_r,A,r) \ \A2[M]. cartprod(M,A,A,A2) \ is_Collect(M,A2, relP(M,is_r),r)" + +lemma (in M_basic) is_Rrel_abs : + assumes "M(A)" "M(r)" + "\ f g . 
M(f) \ M(g) \ rel(f,g) \ is_rel(M,f,g)" + shows "is_RRel(M,is_rel,A,r) \ r = Rrel(rel,A)" +proof - + from \M(A)\ + have "M(z)" if "z\A\A" for z + using cartprod_closed transM[of z "A\A"] that by simp + then + have A:"relP(M, is_rel, z) \ (\x y. z = \x, y\ \ rel(x, y))" "M(z)" if "z\A\A" for z + using that is_related_abs[of rel is_rel,OF assms(3)] by auto + then + have "Collect(A\A,relP(M,is_rel)) = Collect(A\A,\z. (\x y. z = \x,y\ \ rel(x,y)))" + using Collect_cong[of "A\A" "A\A" "relP(M,is_rel)",OF _ A(1)] assms(1) assms(2) + by auto + with assms + show ?thesis unfolding is_RRel_def Rrel_def using cartprod_closed + by auto +qed + +definition + is_seqlerel :: "[i\o,i,i] \ o" where + "is_seqlerel(M,A,r) \ is_RRel(M,is_seqleR,A,r)" + +lemma (in M_basic) seqlerel_abs : + assumes "M(A)" "M(r)" + shows "is_seqlerel(M,A,r) \ r = Rrel(seqleR,A)" + unfolding is_seqlerel_def + using is_Rrel_abs[OF \M(A)\ \M(r)\,of seqleR is_seqleR] seqleR_abs + by auto + +definition RrelP :: "[i\i\o,i] \ i" where + "RrelP(R,A) \ {z\A\A. \x y. z = \x, y\ \ R(x,y)}" + +lemma Rrel_eq : "RrelP(R,A) = Rrel(R,A)" + unfolding Rrel_def RrelP_def by auto + +context M_ctm +begin + +lemma Rrel_closed: + assumes "A\M" + "\ a. a \ nat \ rel_fm(a)\formula" + "\ f g . (##M)(f) \ (##M)(g) \ rel(f,g) \ is_rel(##M,f,g)" + "arity(rel_fm(0)) = 1" + "\ a . a \ M \ sats(M,rel_fm(0),[a]) \ relP(##M,is_rel,a)" + shows "(##M)(Rrel(rel,A))" +proof - + have "z\ M \ relP(##M, is_rel, z) \ (\x y. z = \x, y\ \ rel(x, y))" for z + using assms(3) is_related_abs[of rel is_rel] + by auto + with assms + have "Collect(A\A,\z. (\x y. z = \x,y\ \ rel(x,y))) \ M" + using Collect_in_M_0p[of "rel_fm(0)" "\ A z . relP(A,is_rel,z)" "\ z.\x y. z = \x, y\ \ rel(x, y)" ] + cartprod_closed + by simp + then show ?thesis + unfolding Rrel_def by simp +qed + +lemma seqle_in_M: "seqle \ M" + using Rrel_closed seqspace_closed + transitivity[OF _ nat_in_M] type_seqleR_fm[of 0] arity_seqleR_fm[of 0] + seqleR_fm_sats[of 0] seqleR_abs seqlerel_abs + unfolding seqle_def seqlerel_def seqleR_def + by auto + +subsection\Cohen extension is proper\ + +interpretation ctm_separative "2^<\" seqle 0 +proof (unfold_locales) + fix f + let ?q="seq_upd(f,0)" and ?r="seq_upd(f,1)" + assume "f \ 2^<\" + then + have "?q \s f \ ?r \s f \ ?q \s ?r" + using upd_leI seqspace_separative by auto + moreover from calculation + have "?q \ 2^<\" "?r \ 2^<\" + using seq_upd_type[of f 2] by auto + ultimately + show "\q\2^<\. \r\2^<\. q \s f \ r \s f \ q \s r" + by (rule_tac bexI)+ \ \why the heck auto-tools don't solve this?\ +next + show "2^<\ \ M" using nat_into_M seqspace_closed by simp +next + show "seqle \ M" using seqle_in_M . +qed + +lemma cohen_extension_is_proper: "\G. 
M_generic(G) \ M \ GenExt(G)" + using proper_extension generic_filter_existence zero_in_seqspace + by force + +end (* M_ctm *) + +end \ No newline at end of file diff --git a/thys/Forcing/Synthetic_Definition.thy b/thys/Forcing/Synthetic_Definition.thy new file mode 100644 --- /dev/null +++ b/thys/Forcing/Synthetic_Definition.thy @@ -0,0 +1,130 @@ +section\Automatic synthesis of formulas\ +theory Synthetic_Definition + imports "ZF-Constructible.Formula" + keywords + "synthesize" :: thy_decl % "ML" + and + "synthesize_notc" :: thy_decl % "ML" + and + "from_schematic" + +begin +ML_file\Utils.ml\ + +ML\ +val $` = curry ((op $) o swap) +infix $` + +fun pair f g x = (f x, g x) + +fun display kind pos (thms,thy) = + let val _ = Proof_Display.print_results true pos thy ((kind,""),[thms]) + in thy +end + +fun prove_tc_form goal thms ctxt = + Goal.prove ctxt [] [] goal + (fn _ => rewrite_goal_tac ctxt thms 1 + THEN TypeCheck.typecheck_tac ctxt) + +fun prove_sats goal thms thm_auto ctxt = + let val ctxt' = ctxt |> Simplifier.add_simp (thm_auto |> hd) + in + Goal.prove ctxt [] [] goal + (fn _ => rewrite_goal_tac ctxt thms 1 + THEN PARALLEL_ALLGOALS (asm_simp_tac ctxt') + THEN TypeCheck.typecheck_tac ctxt') + end + +fun is_mem (@{const mem} $ _ $ _) = true + | is_mem _ = false + +fun synth_thm_sats def_name term lhs set env hyps vars vs pos thm_auto lthy = +let val (_,tm,ctxt1) = Utils.thm_concl_tm lthy term + val (thm_refs,ctxt2) = Variable.import true [Proof_Context.get_thm lthy term] ctxt1 |>> #2 + val vs' = map (Thm.term_of o #2) vs + val vars' = map (Thm.term_of o #2) vars + val r_tm = tm |> Utils.dest_lhs_def |> fold (op $`) vs' + val sats = @{const apply} $ (@{const satisfies} $ set $ r_tm) $ env + val rhs = @{const IFOL.eq(i)} $ sats $ (@{const succ} $ @{const zero}) + val concl = @{const IFOL.iff} $ lhs $ rhs + val g_iff = Logic.list_implies(hyps, Utils.tp concl) + val thm = prove_sats g_iff thm_refs thm_auto ctxt2 + val name = Binding.name (def_name ^ "_iff_sats") + val thm = Utils.fix_vars thm (map (#1 o dest_Free) vars') lthy + in + Local_Theory.note ((name, []), [thm]) lthy |> display "theorem" pos + end + +fun synth_thm_tc def_name term hyps vars pos lthy = +let val (_,tm,ctxt1) = Utils.thm_concl_tm lthy term + val (thm_refs,ctxt2) = Variable.import true [Proof_Context.get_thm lthy term] ctxt1 + |>> #2 + val vars' = map (Thm.term_of o #2) vars + val tc_attrib = @{attributes [TC]} + val r_tm = tm |> Utils.dest_lhs_def |> fold (op $`) vars' + val concl = @{const mem} $ r_tm $ @{const formula} + val g = Logic.list_implies(hyps, Utils.tp concl) + val thm = prove_tc_form g thm_refs ctxt2 + val name = Binding.name (def_name ^ "_type") + val thm = Utils.fix_vars thm (map (#1 o dest_Free) vars') ctxt2 + in + Local_Theory.note ((name, tc_attrib), [thm]) lthy |> display "theorem" pos + end + + +fun synthetic_def def_name thmref pos tc auto thy = + let + val (thm_ref,_) = thmref |>> Facts.ref_name + val (((_,vars),thm_tms),_) = Variable.import true [Proof_Context.get_thm thy thm_ref] thy + val (tm,hyps) = thm_tms |> hd |> pair Thm.concl_of Thm.prems_of + val (lhs,rhs) = tm |> Utils.dest_iff_tms o Utils.dest_trueprop + val ((set,t),env) = rhs |> Utils.dest_sats_frm + fun olist t = Ord_List.make String.compare (Term.add_free_names t []) + fun relevant ts (@{const mem} $ t $ _) = not (Term.is_Free t) orelse + Ord_List.member String.compare ts (t |> Term.dest_Free |> #1) + | relevant _ _ = false + val t_vars = olist t + val vs = List.filter (fn (((v,_),_),_) => Utils.inList v t_vars) vars + val at = 
List.foldr (fn ((_,var),t') => lambda (Thm.term_of var) t') t vs + val hyps' = List.filter (relevant t_vars o Utils.dest_trueprop) hyps + in + Local_Theory.define ((Binding.name def_name, NoSyn), + ((Binding.name (def_name ^ "_def"), []), at)) thy |> #2 |> + (if tc then synth_thm_tc def_name (def_name ^ "_def") hyps' vs pos else I) |> + (if auto then synth_thm_sats def_name (def_name ^ "_def") lhs set env hyps vars vs pos thm_tms else I) + +end +\ +ML\ + +local + val synth_constdecl = + Parse.position (Parse.string -- ((Parse.$$$ "from_schematic" |-- Parse.thm))); + + val _ = + Outer_Syntax.local_theory \<^command_keyword>\synthesize\ "ML setup for synthetic definitions" + (synth_constdecl >> (fn ((bndg,thm),p) => synthetic_def bndg thm p true true)) + + val _ = + Outer_Syntax.local_theory \<^command_keyword>\synthesize_notc\ "ML setup for synthetic definitions" + (synth_constdecl >> (fn ((bndg,thm),p) => synthetic_def bndg thm p false false)) + +in + +end +\ +text\The \<^ML>\synthetic_def\ function extracts definitions from +schematic goals. A new definition is added to the context. \ + +(* example of use *) +(* +schematic_goal mem_formula_ex : + assumes "m\nat" "n\ nat" "env \ list(M)" + shows "nth(m,env) \ nth(n,env) \ sats(M,?frm,env)" + by (insert assms ; (rule sep_rules empty_iff_sats cartprod_iff_sats | simp del:sats_cartprod_fm)+) + +synthesize "\" from_schematic mem_formula_ex +*) + +end diff --git a/thys/Forcing/Union_Axiom.thy b/thys/Forcing/Union_Axiom.thy new file mode 100644 --- /dev/null +++ b/thys/Forcing/Union_Axiom.thy @@ -0,0 +1,177 @@ +section\The Axiom of Unions in $M[G]$\ +theory Union_Axiom + imports Names +begin + +context forcing_data +begin + + +definition Union_name_body :: "[i,i,i,i] \ o" where + "Union_name_body(P',leq',\,\p) \ (\ \[##M]. + \ q[##M]. (q\ P' \ (\\,q\ \ \ \ + (\ r[##M].r\P' \ (\fst(\p),r\ \ \ \ \snd(\p),r\ \ leq' \ \snd(\p),q\ \ leq')))))" + +definition Union_name_fm :: "i" where + "Union_name_fm \ + Exists( + Exists(And(pair_fm(1,0,2), + Exists ( + Exists (And(Member(0,7), + Exists (And(And(pair_fm(2,1,0),Member(0,6)), + Exists (And(Member(0,9), + Exists (And(And(pair_fm(6,1,0),Member(0,4)), + Exists (And(And(pair_fm(6,2,0),Member(0,10)), + Exists (And(pair_fm(7,5,0),Member(0,11)))))))))))))))))" + +lemma Union_name_fm_type [TC]: + "Union_name_fm \formula" + unfolding Union_name_fm_def by simp + + +lemma arity_Union_name_fm : + "arity(Union_name_fm) = 4" + unfolding Union_name_fm_def upair_fm_def pair_fm_def + by(auto simp add: nat_simp_union) + +lemma sats_Union_name_fm : + "\ a \ M; b \ M ; P' \ M ; p \ M ; \ \ M ; \ \ M ; leq' \ M \ \ + sats(M,Union_name_fm,[\\,p\,\,leq',P']@[a,b]) \ + Union_name_body(P',leq',\,\\,p\)" + unfolding Union_name_fm_def Union_name_body_def tuples_in_M + by (subgoal_tac "\\,p\ \ M", auto simp add : tuples_in_M) + + +lemma domD : + assumes "\ \ M" "\ \ domain(\)" + shows "\ \ M" + using assms Transset_M trans_M + by (simp flip: setclass_iff) + + +definition Union_name :: "i \ i" where + "Union_name(\) \ + {u \ domain(\(domain(\))) \ P . Union_name_body(P,leq,\,u)}" + +lemma Union_name_M : assumes "\ \ M" + shows "{u \ domain(\(domain(\))) \ P . Union_name_body(P,leq,\,u)} \ M" + unfolding Union_name_def +proof - + let ?P="\ x . sats(M,Union_name_fm,[x,\,leq]@[P,\,leq])" + let ?Q="\ x . 
Union_name_body(P,leq,\,x)" + from \\\M\ + have "domain(\(domain(\)))\M" (is "?d \ _") using domain_closed Union_closed by simp + then + have "?d \ P \ M" using cartprod_closed P_in_M by simp + have "arity(Union_name_fm)\6" using arity_Union_name_fm by simp + from assms P_in_M leq_in_M arity_Union_name_fm + have "[\,leq] \ list(M)" "[P,\,leq] \ list(M)" by auto + with assms assms P_in_M leq_in_M \arity(Union_name_fm)\6\ + have "separation(##M,?P)" + using separation_ax by simp + with \?d \ P \ M\ + have A:"{ u \ ?d \ P . ?P(u) } \ M" + using separation_iff by force + have "?P(x)\ ?Q(x)" if "x\ ?d\P" for x + proof - + from \x\ ?d\P\ + have "x = \fst(x),snd(x)\" using Pair_fst_snd_eq by simp + with \x\?d\P\ \?d\M\ + have "fst(x) \ M" "snd(x) \ M" + using mtrans fst_type snd_type P_in_M unfolding M_trans_def by auto + then + have "?P(\fst(x),snd(x)\) \ ?Q(\fst(x),snd(x)\)" + using P_in_M sats_Union_name_fm P_in_M \\\M\ leq_in_M by simp + with \x = \fst(x),snd(x)\\ + show "?P(x) \ ?Q(x)" using that by simp + qed + then show ?thesis using Collect_cong A by simp +qed + + + +lemma Union_MG_Eq : + assumes "a \ M[G]" and "a = val(G,\)" and "filter(G)" and "\ \ M" + shows "\ a = val(G,Union_name(\))" +proof - + { + fix x + assume "x \ \ (val(G,\))" + then obtain i where "i \ val(G,\)" "x \ i" by blast + with \\ \ M\ obtain \ q where + "q \ G" "\\,q\ \ \" "val(G,\) = i" "\ \ M" + using elem_of_val_pair domD by blast + with \x \ i\ obtain \ r where + "r \ G" "\\,r\ \ \" "val(G,\) = x" "\ \ M" + using elem_of_val_pair domD by blast + with \\\,q\\\\ have "\ \ domain(\(domain(\)))" by auto + with \filter(G)\ \q\G\ \r\G\ obtain p where + A: "p \ G" "\p,r\ \ leq" "\p,q\ \ leq" "p \ P" "r \ P" "q \ P" + using low_bound_filter filterD by blast + then have "p \ M" "q\M" "r\M" + using mtrans P_in_M unfolding M_trans_def by auto + with A \\\,r\ \ \\ \\\,q\ \ \\ \\ \ M\ \\ \ domain(\(domain(\)))\ \\\M\ have + "\\,p\ \ Union_name(\)" unfolding Union_name_def Union_name_body_def + by auto + with \p\P\ \p\G\ have "val(G,\) \ val(G,Union_name(\))" + using val_of_elem by simp + with \val(G,\)=x\ have "x \ val(G,Union_name(\))" by simp + } + with \a=val(G,\)\ have 1: "x \ \ a \ x \ val(G,Union_name(\))" for x by simp + { + fix x + assume "x \ (val(G,Union_name(\)))" + then obtain \ p where + "p \ G" "\\,p\ \ Union_name(\)" "val(G,\) = x" + using elem_of_val_pair by blast + with \filter(G)\ have "p\P" using filterD by simp + from \\\,p\ \ Union_name(\)\ obtain \ q r where + "\ \ domain(\)" "\\,q\ \ \ " "\\,r\ \ \" "r\P" "q\P" "\p,r\ \ leq" "\p,q\ \ leq" + unfolding Union_name_def Union_name_body_def by force + with \p\G\ \filter(G)\ have "r \ G" "q \ G" + using filter_leqD by auto + with \\\,r\ \ \\ \\\,q\\\\ \q\P\ \r\P\ have + "val(G,\) \ val(G,\)" "val(G,\) \ val(G,\)" + using val_of_elem by simp+ + then have "val(G,\) \ \ val(G,\)" by blast + with \val(G,\)=x\ \a=val(G,\)\ have + "x \ \ a" by simp + } + with \a=val(G,\)\ + have "x \ val(G,Union_name(\)) \ x \ \ a" for x by blast + then + show ?thesis using 1 by blast +qed + +lemma union_in_MG : assumes "filter(G)" + shows "Union_ax(##M[G])" +proof - + { fix a + assume "a \ M[G]" + then + interpret mgtrans : M_trans "##M[G]" + using transitivity_MG by (unfold_locales; auto) + from \a\_\ obtain \ where "\ \ M" "a=val(G,\)" using GenExtD by blast + then + have "Union_name(\) \ M" (is "?\ \ _") using Union_name_M unfolding Union_name_def by simp + then + have "val(G,?\) \ M[G]" (is "?U \ _") using GenExtI by simp + with \a\_\ + have "(##M[G])(a)" "(##M[G])(?U)" by auto 
+ with \\ \ M\ \filter(G)\ \?U \ M[G]\ \a=val(G,\)\ + have "big_union(##M[G],a,?U)" + using Union_MG_Eq Union_abs by simp + with \?U \ M[G]\ + have "\z[##M[G]]. big_union(##M[G],a,z)" by force + } + then + have "Union_ax(##M[G])" unfolding Union_ax_def by force + then + show ?thesis by simp +qed + +theorem Union_MG : "M_generic(G) \ Union_ax(##M[G])" + by (simp add:M_generic_def union_in_MG) + +end (* forcing_data *) +end diff --git a/thys/Forcing/Utils.ml b/thys/Forcing/Utils.ml new file mode 100644 --- /dev/null +++ b/thys/Forcing/Utils.ml @@ -0,0 +1,126 @@ +signature Utils = + sig + val binop : term -> term -> term -> term + val add_: term -> term -> term + val app_: term -> term -> term + val concat_: term -> term -> term + val dest_apply: term -> term * term + val dest_iff_lhs: term -> term + val dest_iff_rhs: term -> term + val dest_iff_tms: term -> term * term + val dest_lhs_def: term -> term + val dest_rhs_def: term -> term + val dest_satisfies_tms: term -> term * term + val dest_satisfies_frm: term -> term + val dest_eq_tms: term -> term * term + val dest_sats_frm: term -> (term * term) * term + val dest_trueprop: term -> term + val eq_: term -> term -> term + val fix_vars: thm -> string list -> Proof.context -> thm + val formula_: term + val freeName: term -> string + val inList: ''a -> ''a list -> bool + val isFree: term -> bool + val length_: term -> term + val list_: term -> term + val lt_: term -> term -> term + val mem_: term -> term -> term + val mk_FinSet: term list -> term + val mk_Pair: term -> term -> term + val mk_ZFlist: ('a -> term) -> 'a list -> term + val mk_ZFnat: int -> term + val nat_: term + val nth_: term -> term -> term + val subset_: term -> term -> term + val thm_concl_tm : Proof.context -> xstring -> + ((indexname * typ) * cterm) list * term * Proof.context + val to_ML_list: term -> term list + val tp: term -> term + end + +structure Utils : Utils = +struct +(* Smart constructors for ZF-terms *) + +fun inList a = exists (fn b => a = b) + +fun binop h t u = h $ t $ u +val mk_Pair = binop @{const Pair} + +fun mk_FinSet nil = @{const zero} + | mk_FinSet (e :: es) = @{const cons} $ e $ mk_FinSet es + +fun mk_ZFnat 0 = @{const zero} + | mk_ZFnat n = @{const succ} $ mk_ZFnat (n-1) + +fun mk_ZFlist _ nil = @{const "Nil"} + | mk_ZFlist f (t :: ts) = @{const "Cons"} $ f t $ mk_ZFlist f ts + +fun to_ML_list (@{const Nil}) = nil + | to_ML_list (@{const Cons} $ t $ ts) = t :: to_ML_list ts +| to_ML_list _ = nil + +fun isFree (Free (_,_)) = true + | isFree _ = false + +fun freeName (Free (n,_)) = n + | freeName _ = error "Not a free variable" + +val app_ = binop @{const apply} + +fun tp x = @{const Trueprop} $ x +fun length_ env = @{const length} $ env +val nth_ = binop @{const nth} +val add_ = binop @{const add} +val mem_ = binop @{const mem} +val subset_ = binop @{const Subset} +val lt_ = binop @{const lt} +val concat_ = binop @{const app} +val eq_ = binop @{const IFOL.eq(i)} + +(* Abbreviation for sets *) +fun list_ set = @{const list} $ set +val nat_ = @{const nat} +val formula_ = @{const formula} + +(** Destructors of terms **) +fun dest_eq_tms (Const (@{const_name IFOL.eq},_) $ t $ u) = (t, u) + | dest_eq_tms t = raise TERM ("dest_eq_tms", [t]) + +fun dest_lhs_def (Const (@{const_name Pure.eq},_) $ x $ _) = x + | dest_lhs_def t = raise TERM ("dest_lhs_def", [t]) + +fun dest_rhs_def (Const (@{const_name Pure.eq},_) $ _ $ y) = y + | dest_rhs_def t = raise TERM ("dest_rhs_def", [t]) + + +fun dest_apply (@{const apply} $ t $ u) = (t,u) + | dest_apply t = raise TERM 
("dest_applies_op", [t]) + +fun dest_satisfies_tms (@{const Formula.satisfies} $ A $ f) = (A,f) + | dest_satisfies_tms t = raise TERM ("dest_satisfies_tms", [t]); + +val dest_satisfies_frm = #2 o dest_satisfies_tms + +fun dest_sats_frm t = t |> dest_eq_tms |> #1 |> dest_apply |>> dest_satisfies_tms ; + +fun dest_trueprop (@{const IFOL.Trueprop} $ t) = t + | dest_trueprop t = t + +fun dest_iff_tms (@{const IFOL.iff} $ t $ u) = (t, u) + | dest_iff_tms t = raise TERM ("dest_iff_tms", [t]) + +val dest_iff_lhs = #1 o dest_iff_tms +val dest_iff_rhs = #2 o dest_iff_tms + +fun thm_concl_tm ctxt thm_ref = + let val (((_,vars),thm_tms),ctxt1) = Variable.import true [Proof_Context.get_thm ctxt thm_ref] ctxt + in (vars, thm_tms |> hd |> Thm.concl_of, ctxt1) +end + +fun fix_vars thm vars ctxt = let + val (_, ctxt1) = Variable.add_fixes vars ctxt + in singleton (Proof_Context.export ctxt1 ctxt) thm +end + +end ; diff --git a/thys/Forcing/document/root.bib b/thys/Forcing/document/root.bib new file mode 100644 --- /dev/null +++ b/thys/Forcing/document/root.bib @@ -0,0 +1,64 @@ +@article{DBLP:journals/jar/PaulsonG96, + author = {Lawrence C. Paulson and + Krzysztof Grabczewski}, + title = {Mechanizing Set Theory}, + journal = {J. Autom. Reasoning}, + volume = {17}, + number = {3}, + pages = {291--323}, + year = {1996}, + xurl = {https://doi.org/10.1007/BF00283132}, + doi = {10.1007/BF00283132}, + timestamp = {Sat, 20 May 2017 00:22:31 +0200}, + biburl = {https://dblp.org/rec/bib/journals/jar/PaulsonG96}, + bibsource = {dblp computer science bibliography, https://dblp.org} +} + +@inproceedings{2018arXiv180705174G, + author = {Gunther, Emmanuel and Pagano, Miguel and S{\'a}nchez Terraf, Pedro}, + title = {First Steps Towards a Formalization of Forcing}, + booktitle = {Proceedings of the 13th Workshop on Logical and Semantic Frameworks + with Applications, {LSFA} 2018, Fortaleza, Brazil, September 26-28, + 2018}, + pages = {119--136}, + year = {2018}, + url = {https://doi.org/10.1016/j.entcs.2019.07.008}, + doi = {10.1016/j.entcs.2019.07.008}, + timestamp = {Wed, 05 Feb 2020 13:47:23 +0100}, + biburl = {https://dblp.org/rec/journals/entcs/GuntherPT19.bib}, + bibsource = {dblp computer science bibliography, https://dblp.org} +} + + +@ARTICLE{2019arXiv190103313G, + author = {Gunther, Emmanuel and Pagano, Miguel and S{\'a}nchez Terraf, Pedro}, + title = "{Mechanization of Separation in Generic Extensions}", + journal = {arXiv e-prints}, + keywords = {Computer Science - Logic in Computer Science, Mathematics - Logic, 03B35 (Primary) 03E40, 03B70, 68T15 (Secondary), F.4.1}, + year = 2019, + month = Jan, + eid = {arXiv:1901.03313}, + volume = {1901.03313}, +archivePrefix = {arXiv}, + eprint = {1901.03313}, + primaryClass = {cs.LO}, + adsurl = {https://ui.adsabs.harvard.edu/\#abs/2019arXiv190103313G}, + adsnote = {Provided by the SAO/NASA Astrophysics Data System}, + abstract = {We mechanize, in the proof assistant Isabelle, a proof of the axiom-scheme of Separation in generic extensions of models of set theory by using the fundamental theorems of forcing. We also formalize the satisfaction of the axioms of Extensionality, Foundation, Union, and Powerset. The axiom of Infinity is likewise treated, under additional assumptions on the ground model. 
In order to achieve these goals, we extended Paulson's library on constructibility with renaming of variables for internalized formulas, improved results on definitions by recursion on well-founded relations, and sharpened hypotheses in his development of relativization and absoluteness.} +} + +@ARTICLE{2020arXiv200109715G, + author = {{Gunther}, Emmanuel and {Pagano}, Miguel and {S{\'a}nchez Terraf}, Pedro}, + title = "{Formalization of Forcing in Isabelle/ZF}", + journal = {arXiv e-prints}, + keywords = {Computer Science - Logic in Computer Science, Mathematics - Logic}, + year = 2020, + month = jan, + eid = {arXiv:2001.09715}, + volume = {2001.09715}, +archivePrefix = {arXiv}, + eprint = {2001.09715}, + primaryClass = {cs.LO}, + adsurl = {https://ui.adsabs.harvard.edu/abs/2020arXiv200109715G}, + adsnote = {Provided by the SAO/NASA Astrophysics Data System} +} diff --git a/thys/Forcing/document/root.bst b/thys/Forcing/document/root.bst new file mode 100644 --- /dev/null +++ b/thys/Forcing/document/root.bst @@ -0,0 +1,1440 @@ +%% +%% by pedro +%% Based on file `model1b-num-names.bst' +%% +%% +%% +ENTRY + { address + author + booktitle + chapter + edition + editor + howpublished + institution + journal + key + month + note + number + organization + pages + publisher + school + series + title + type + volume + year + } + {} + { label extra.label sort.label short.list } +INTEGERS { output.state before.all mid.sentence after.sentence after.block } +FUNCTION {init.state.consts} +{ #0 'before.all := + #1 'mid.sentence := + #2 'after.sentence := + #3 'after.block := +} +STRINGS { s t} +FUNCTION {output.nonnull} +{ 's := + output.state mid.sentence = + { ", " * write$ } + { output.state after.block = + { add.period$ write$ + newline$ + "\newblock " write$ + } + { output.state before.all = + 'write$ + { add.period$ " " * write$ } + if$ + } + if$ + mid.sentence 'output.state := + } + if$ + s +} +FUNCTION {output} +{ duplicate$ empty$ + 'pop$ + 'output.nonnull + if$ +} +FUNCTION {output.check} +{ 't := + duplicate$ empty$ + { pop$ "empty " t * " in " * cite$ * warning$ } + 'output.nonnull + if$ +} +FUNCTION {fin.entry} +{ add.period$ + write$ + newline$ +} + +FUNCTION {new.block} +{ output.state before.all = + 'skip$ + { after.block 'output.state := } + if$ +} +FUNCTION {new.sentence} +{ output.state after.block = + 'skip$ + { output.state before.all = + 'skip$ + { after.sentence 'output.state := } + if$ + } + if$ +} +FUNCTION {add.blank} +{ " " * before.all 'output.state := +} + +FUNCTION {date.block} +{ + skip$ +} + +FUNCTION {not} +{ { #0 } + { #1 } + if$ +} +FUNCTION {and} +{ 'skip$ + { pop$ #0 } + if$ +} +FUNCTION {or} +{ { pop$ #1 } + 'skip$ + if$ +} +FUNCTION {new.block.checkb} +{ empty$ + swap$ empty$ + and + 'skip$ + 'new.block + if$ +} +FUNCTION {field.or.null} +{ duplicate$ empty$ + { pop$ "" } + 'skip$ + if$ +} +FUNCTION {emphasize} +{ duplicate$ empty$ + { pop$ "" } + { "\textit{" swap$ * "}" * } + if$ +} +%% by pedro +FUNCTION {slanted} +{ duplicate$ empty$ + { pop$ "" } + { "\textsl{" swap$ * "}" * } + if$ +} +FUNCTION {smallcaps} +{ duplicate$ empty$ + { pop$ "" } + { "\textsc{" swap$ * "}" * } + if$ +} +FUNCTION {bold} +{ duplicate$ empty$ + { pop$ "" } + { "\textbf{" swap$ * "}" * } + if$ +} + + +FUNCTION {tie.or.space.prefix} +{ duplicate$ text.length$ #3 < + { "~" } + { " " } + if$ + swap$ +} + +FUNCTION {capitalize} +{ "u" change.case$ "t" change.case$ } + +FUNCTION {space.word} +{ " " swap$ * " " * } + % Here are the language-specific definitions for explicit words. 
+ % Each function has a name bbl.xxx where xxx is the English word. + % The language selected here is ENGLISH +FUNCTION {bbl.and} +{ "and"} + +FUNCTION {bbl.etal} +{ "et~al." } + +FUNCTION {bbl.editors} +{ "eds." } + +FUNCTION {bbl.editor} +{ "ed." } + +FUNCTION {bbl.edby} +{ "edited by" } + +FUNCTION {bbl.edition} +{ "edition" } + +FUNCTION {bbl.volume} +{ "volume" } + +FUNCTION {bbl.of} +{ "of" } + +FUNCTION {bbl.number} +{ "number" } + +FUNCTION {bbl.nr} +{ "no." } + +FUNCTION {bbl.in} +{ "in" } + +FUNCTION {bbl.pages} +{ "pp." } + +FUNCTION {bbl.page} +{ "p." } + +FUNCTION {bbl.chapter} +{ "chapter" } + +FUNCTION {bbl.techrep} +{ "Technical Report" } + +FUNCTION {bbl.mthesis} +{ "Master's thesis" } + +FUNCTION {bbl.phdthesis} +{ "Ph.D. thesis" } + +MACRO {jan} {"January"} + +MACRO {feb} {"February"} + +MACRO {mar} {"March"} + +MACRO {apr} {"April"} + +MACRO {may} {"May"} + +MACRO {jun} {"June"} + +MACRO {jul} {"July"} + +MACRO {aug} {"August"} + +MACRO {sep} {"September"} + +MACRO {oct} {"October"} + +MACRO {nov} {"November"} + +MACRO {dec} {"December"} + +MACRO {acmcs} {"ACM Comput. Surv."} + +MACRO {acta} {"Acta Inf."} + +MACRO {cacm} {"Commun. ACM"} + +MACRO {ibmjrd} {"IBM J. Res. Dev."} + +MACRO {ibmsj} {"IBM Syst.~J."} + +MACRO {ieeese} {"IEEE Trans. Software Eng."} + +MACRO {ieeetc} {"IEEE Trans. Comput."} + +MACRO {ieeetcad} + {"IEEE Trans. Comput. Aid. Des."} + +MACRO {ipl} {"Inf. Process. Lett."} + +MACRO {jacm} {"J.~ACM"} + +MACRO {jcss} {"J.~Comput. Syst. Sci."} + +MACRO {scp} {"Sci. Comput. Program."} + +MACRO {sicomp} {"SIAM J. Comput."} + +MACRO {tocs} {"ACM Trans. Comput. Syst."} + +MACRO {tods} {"ACM Trans. Database Syst."} + +MACRO {tog} {"ACM Trans. Graphic."} + +MACRO {toms} {"ACM Trans. Math. Software"} + +MACRO {toois} {"ACM Trans. Office Inf. Syst."} + +MACRO {toplas} {"ACM Trans. Progr. Lang. Syst."} + +MACRO {tcs} {"Theor. Comput. 
Sci."} + +FUNCTION {bibinfo.check} +{ swap$ + duplicate$ missing$ + { + pop$ pop$ + "" + } + { duplicate$ empty$ + { + swap$ pop$ + } + { swap$ + "\bibinfo{" swap$ * "}{" * swap$ * "}" * + } + if$ + } + if$ +} +FUNCTION {bibinfo.warn} +{ swap$ + duplicate$ missing$ + { + swap$ "missing " swap$ * " in " * cite$ * warning$ pop$ + "" + } + { duplicate$ empty$ + { + swap$ "empty " swap$ * " in " * cite$ * warning$ + } + { swap$ + pop$ + } + if$ + } + if$ +} +STRINGS { bibinfo} +INTEGERS { nameptr namesleft numnames } + +FUNCTION {format.names} +{ 'bibinfo := + duplicate$ empty$ 'skip$ { + 's := + "" 't := + #1 'nameptr := + s num.names$ 'numnames := + numnames 'namesleft := + { namesleft #0 > } + { s nameptr + "{f{.}.~}{vv~}{ll}{, jj}" + format.name$ + bibinfo bibinfo.check + 't := + nameptr #1 > + { + namesleft #1 > + { ", " * t * } + { + "," * + s nameptr "{ll}" format.name$ duplicate$ "others" = + { 't := } + { pop$ } + if$ + t "others" = + { + " " * bbl.etal * + } + { " " * t * } + if$ + } + if$ + } + 't + if$ + nameptr #1 + 'nameptr := + namesleft #1 - 'namesleft := + } + while$ + } if$ +} +FUNCTION {format.names.ed} +{ + format.names +} +FUNCTION {format.key} +{ empty$ + { key field.or.null } + { "" } + if$ +} + +FUNCTION {format.authors} +{ author "author" format.names smallcaps +} +FUNCTION {get.bbl.editor} +{ editor num.names$ #1 > 'bbl.editors 'bbl.editor if$ } + +FUNCTION {format.editors} +{ editor "editor" format.names duplicate$ empty$ 'skip$ + { + " " * + get.bbl.editor + capitalize + "(" swap$ * ")" * + * + } + if$ +} +FUNCTION {format.note} +{ + note empty$ + { "" } + { note #1 #1 substring$ + duplicate$ "{" = + 'skip$ + { output.state mid.sentence = + { "l" } + { "u" } + if$ + change.case$ + } + if$ + note #2 global.max$ substring$ * "note" bibinfo.check + } + if$ +} + +FUNCTION {format.title} +{ title + duplicate$ empty$ 'skip$ + { "t" change.case$ } + if$ + "title" bibinfo.check +} +FUNCTION {format.full.names} +{'s := + "" 't := + #1 'nameptr := + s num.names$ 'numnames := + numnames 'namesleft := + { namesleft #0 > } + { s nameptr + "{vv~}{ll}" format.name$ + 't := + nameptr #1 > + { + namesleft #1 > + { ", " * t * } + { + s nameptr "{ll}" format.name$ duplicate$ "others" = + { 't := } + { pop$ } + if$ + t "others" = + { + " " * bbl.etal * + } + { + bbl.and + space.word * t * + } + if$ + } + if$ + } + 't + if$ + nameptr #1 + 'nameptr := + namesleft #1 - 'namesleft := + } + while$ +} + +FUNCTION {author.editor.key.full} +{ author empty$ + { editor empty$ + { key empty$ + { cite$ #1 #3 substring$ } + 'key + if$ + } + { editor format.full.names } + if$ + } + { author format.full.names } + if$ +} + +FUNCTION {author.key.full} +{ author empty$ + { key empty$ + { cite$ #1 #3 substring$ } + 'key + if$ + } + { author format.full.names } + if$ +} + +FUNCTION {editor.key.full} +{ editor empty$ + { key empty$ + { cite$ #1 #3 substring$ } + 'key + if$ + } + { editor format.full.names } + if$ +} + +FUNCTION {make.full.names} +{ type$ "book" = + type$ "inbook" = + or + 'author.editor.key.full + { type$ "proceedings" = + 'editor.key.full + 'author.key.full + if$ + } + if$ +} + +FUNCTION {output.bibitem} +{ newline$ + "\bibitem[{" write$ + label write$ + ")" make.full.names duplicate$ short.list = + { pop$ } + { * } + if$ + "}]{" * write$ + cite$ write$ + "}" write$ + newline$ + "" + before.all 'output.state := +} + +FUNCTION {n.dashify} +{ + 't := + "" + { t empty$ not } + { t #1 #1 substring$ "-" = + { t #1 #2 substring$ "--" = not + { "--" * + t #2 global.max$ substring$ 't := + } + { 
{ t #1 #1 substring$ "-" = } + { "-" * + t #2 global.max$ substring$ 't := + } + while$ + } + if$ + } + { t #1 #1 substring$ * + t #2 global.max$ substring$ 't := + } + if$ + } + while$ +} + +FUNCTION {word.in} +{ bbl.in + ":" * + " " * } + +FUNCTION {format.date} +{ year "year" bibinfo.check duplicate$ empty$ + { + "empty year in " cite$ * "; set to ????" * warning$ + pop$ "????" + } + 'skip$ + if$ + % extra.label * + %% by pedro + " (" swap$ * ")" * +} +FUNCTION{format.year} +{ year "year" bibinfo.check duplicate$ empty$ + { "empty year in " cite$ * + "; set to ????" * + warning$ + pop$ "????" + } + { + } + if$ + % extra.label * + " (" swap$ * ")" * +} +FUNCTION {format.btitle} +{ title "title" bibinfo.check + duplicate$ empty$ 'skip$ + { + } + if$ + %% by pedro + "``" swap$ * "''" * +} +FUNCTION {either.or.check} +{ empty$ + 'pop$ + { "can't use both " swap$ * " fields in " * cite$ * warning$ } + if$ +} +FUNCTION {format.bvolume} +{ volume empty$ + { "" } + %% by pedro + { series "series" bibinfo.check + duplicate$ empty$ 'pop$ + { %slanted + } + if$ + "volume and number" number either.or.check + volume tie.or.space.prefix + "volume" bibinfo.check + bold + * * + } + if$ +} +FUNCTION {format.number.series} +{ volume empty$ + { number empty$ + { series field.or.null } + { series empty$ + { number "number" bibinfo.check } + { output.state mid.sentence = + { bbl.number } + { bbl.number capitalize } + if$ + number tie.or.space.prefix "number" bibinfo.check * * + bbl.in space.word * + series "series" bibinfo.check * + } + if$ + } + if$ + } + { "" } + if$ +} + +FUNCTION {format.edition} +{ edition duplicate$ empty$ 'skip$ + { + output.state mid.sentence = + { "l" } + { "t" } + if$ change.case$ + "edition" bibinfo.check + " " * bbl.edition * + } + if$ +} + +INTEGERS { multiresult } +FUNCTION {multi.page.check} +{ 't := + #0 'multiresult := + { multiresult not + t empty$ not + and + } + { t #1 #1 substring$ + duplicate$ "-" = + swap$ duplicate$ "," = + swap$ "+" = + or or + { #1 'multiresult := } + { t #2 global.max$ substring$ 't := } + if$ + } + while$ + multiresult +} +FUNCTION {format.pages} +{ pages duplicate$ empty$ 'skip$ + { duplicate$ multi.page.check + { + bbl.pages swap$ + n.dashify + } + { + bbl.page swap$ + } + if$ + tie.or.space.prefix + "pages" bibinfo.check + * * + } + if$ +} + +FUNCTION {format.pages.simple} +{ pages duplicate$ empty$ 'skip$ + { duplicate$ multi.page.check + { +% bbl.pages swap$ + n.dashify + } + { +% bbl.page swap$ + } + if$ + tie.or.space.prefix + "pages" bibinfo.check + * + } + if$ +} +FUNCTION {format.journal.pages} +{ pages duplicate$ empty$ 'pop$ + { swap$ duplicate$ empty$ + { pop$ pop$ format.pages } + { + ": " * + swap$ + n.dashify + "pages" bibinfo.check + * + } + if$ + } + if$ +} +FUNCTION {format.vol.num.pages} +{ volume field.or.null + duplicate$ empty$ 'skip$ + { + "volume" bibinfo.check + } + if$ + %% by pedro + bold + pages duplicate$ empty$ 'pop$ + { swap$ duplicate$ empty$ + { pop$ pop$ format.pages } + { + ": " * + swap$ + n.dashify + "pages" bibinfo.check + * + } + if$ + } + if$ + format.year * +} + +FUNCTION {format.chapter.pages} +{ chapter empty$ + { "" } + { type empty$ + { bbl.chapter } + { type "l" change.case$ + "type" bibinfo.check + } + if$ + chapter tie.or.space.prefix + "chapter" bibinfo.check + * * + } + if$ +} + +FUNCTION {format.booktitle} +{ + booktitle "booktitle" bibinfo.check +} +FUNCTION {format.in.ed.booktitle} +{ format.booktitle duplicate$ empty$ 'skip$ + { + editor "editor" format.names.ed duplicate$ empty$ 'pop$ + { + 
" " * + get.bbl.editor + capitalize + "(" swap$ * "), " * + * swap$ + * } + if$ + word.in swap$ * + } + if$ +} +FUNCTION {format.thesis.type} +{ type duplicate$ empty$ + 'pop$ + { swap$ pop$ + "t" change.case$ "type" bibinfo.check + } + if$ +} +FUNCTION {format.tr.number} +{ number "number" bibinfo.check + type duplicate$ empty$ + { pop$ bbl.techrep } + 'skip$ + if$ + "type" bibinfo.check + swap$ duplicate$ empty$ + { pop$ "t" change.case$ } + { tie.or.space.prefix * * } + if$ +} +FUNCTION {format.article.crossref} +{ + word.in + " \cite{" * crossref * "}" * +} +FUNCTION {format.book.crossref} +{ volume duplicate$ empty$ + { "empty volume in " cite$ * "'s crossref of " * crossref * warning$ + pop$ word.in + } + { bbl.volume + swap$ tie.or.space.prefix "volume" bibinfo.check * * bbl.of space.word * + } + if$ + " \cite{" * crossref * "}" * +} +FUNCTION {format.incoll.inproc.crossref} +{ + word.in + " \cite{" * crossref * "}" * +} +FUNCTION {format.org.or.pub} +{ 't := + "" + address empty$ t empty$ and + 'skip$ + { + t empty$ + { address "address" bibinfo.check * + } + { t * + address empty$ + 'skip$ + { ", " * address "address" bibinfo.check * } + if$ + } + if$ + } + if$ +} +FUNCTION {format.publisher.address} +{ publisher "publisher" bibinfo.check format.org.or.pub +} +FUNCTION {format.publisher.address.year} +{ publisher "publisher" bibinfo.check format.org.or.pub + format.journal.pages + format.year * +} + +FUNCTION {school.address.year} +{ school "school" bibinfo.warn + address empty$ + 'skip$ + { ", " * address "address" bibinfo.check * } + if$ + format.year * +} + +FUNCTION {format.publisher.address.pages} +{ publisher "publisher" bibinfo.check format.org.or.pub + format.year * + +} + +FUNCTION {format.organization.address} +{ organization "organization" bibinfo.check format.org.or.pub +} + +FUNCTION {format.organization.address.year} +{ organization "organization" bibinfo.check format.org.or.pub + format.journal.pages + format.year * +} + +FUNCTION {article} +{ "%Type = Article" write$ + output.bibitem + format.authors "author" output.check + author format.key output + format.title "title" output.check + crossref missing$ + { + journal + "journal" bibinfo.check + %% by pedro + emphasize + "journal" output.check + add.blank + format.vol.num.pages output + } + { format.article.crossref output.nonnull + } + if$ +% format.journal.pages + new.sentence + format.note output + fin.entry +} +FUNCTION {book} +{ "%Type = Book" write$ + output.bibitem + author empty$ + { format.editors "author and editor" output.check + editor format.key output + } + { format.authors output.nonnull + crossref missing$ + { "author and editor" editor either.or.check } + 'skip$ + if$ + } + if$ + format.btitle "title" output.check + crossref missing$ + { %% by pedro + format.bvolume output + format.number.series output + % format.bvolume output + format.publisher.address.year output + } + { + format.book.crossref output.nonnull + } + if$ + format.edition output + % format.date "year" output.check + new.sentence + format.note output + fin.entry +} +FUNCTION {booklet} +{ "%Type = Booklet" write$ + output.bibitem + format.authors output + author format.key output + format.title "title" output.check + howpublished "howpublished" bibinfo.check output + address "address" bibinfo.check output + format.date "year" output.check + new.sentence + format.note output + fin.entry +} + +FUNCTION {inbook} +{ "%Type = Inbook" write$ + output.bibitem + author empty$ + { format.editors "author and editor" output.check + editor 
format.key output + } + { format.authors output.nonnull + format.title "title" output.check + crossref missing$ + { "author and editor" editor either.or.check } + 'skip$ + if$ + } + if$ + format.btitle "title" output.check + crossref missing$ + { + format.bvolume output + format.number.series output + format.publisher.address output + format.pages "pages" output.check + format.edition output + format.date "year" output.check + } + { + format.book.crossref output.nonnull + } + if$ +% format.edition output +% format.pages "pages" output.check + new.sentence + format.note output + fin.entry +} + +FUNCTION {incollection} +{ "%Type = Incollection" write$ + output.bibitem + format.authors "author" output.check + author format.key output + format.title "title" output.check + crossref missing$ + { format.in.ed.booktitle "booktitle" output.check + format.bvolume output + format.number.series output + format.pages "pages" output.check + % format.publisher.address output + % format.date "year" output.check + format.publisher.address.year output + format.edition output + } + { format.incoll.inproc.crossref output.nonnull + } + if$ +% format.pages "pages" output.check + new.sentence + format.note output + fin.entry +} +FUNCTION {inproceedings} +{ "%Type = Inproceedings" write$ + output.bibitem + format.authors "author" output.check + author format.key output + format.title "title" output.check + crossref missing$ + { + journal + "journal" bibinfo.check + "journal" output.check + format.in.ed.booktitle "booktitle" output.check + format.bvolume output + format.number.series output + publisher empty$ + { %format.organization.address output + format.organization.address.year output +% format.journal.pages + } + { organization "organization" bibinfo.check output + format.publisher.address.year output + % format.date "year" output.check +% format.journal.pages + } + if$ + } + { format.incoll.inproc.crossref output.nonnull + format.journal.pages + } + if$ +% format.pages.simple "pages" output.check +%%% La que sigue la muevo adentro del "if" +% format.journal.pages + new.sentence + format.note output + fin.entry +} +FUNCTION {conference} { inproceedings } +FUNCTION {manual} +{ "%Type = Manual" write$ + output.bibitem + format.authors output + author format.key output + format.btitle "title" output.check + organization "organization" bibinfo.check output + address "address" bibinfo.check output + format.edition output + format.date "year" output.check + new.sentence + format.note output + fin.entry +} + +FUNCTION {mastersthesis} +{ "%Type = Masterthesis" write$ + output.bibitem + format.authors "author" output.check + author format.key output + format.btitle + "title" output.check + bbl.mthesis format.thesis.type output.nonnull +% school "school" bibinfo.warn output +% address "address" bibinfo.check output +% format.date "year" output.check + school.address.year output + new.sentence + format.note output + fin.entry +} + +FUNCTION {misc} +{ "%Type = Misc" write$ + output.bibitem + format.authors output + author format.key output + format.title output + howpublished "howpublished" bibinfo.check output + format.date "year" output.check + new.sentence + format.note output + fin.entry +} +FUNCTION {phdthesis} +{ "%Type = Phdthesis" write$ + output.bibitem + format.authors "author" output.check + author format.key output + format.btitle + "title" output.check + bbl.phdthesis format.thesis.type output.nonnull +% school "school" bibinfo.warn output +% address "address" bibinfo.check output +% format.date "year" 
output.check + school.address.year output + new.sentence + format.note output + fin.entry +} + +FUNCTION {proceedings} +{ "%Type = Proceedings" write$ + output.bibitem + format.editors output + editor format.key output + format.btitle "title" output.check + format.bvolume output + format.number.series output + publisher empty$ + { format.organization.address output } + { organization "organization" bibinfo.check output + format.publisher.address output + } + if$ + format.date "year" output.check + new.sentence + format.note output + fin.entry +} + +FUNCTION {techreport} +{ "%Type = Techreport" write$ + output.bibitem + format.authors "author" output.check + author format.key output + format.btitle + "title" output.check + format.tr.number output.nonnull + institution "institution" bibinfo.warn output + address "address" bibinfo.check output + format.date "year" output.check + new.sentence + format.note output + fin.entry +} + +FUNCTION {unpublished} +{ "%Type = Unpublished" write$ + output.bibitem + format.authors "author" output.check + author format.key output + format.title "title" output.check + format.date "year" output.check + new.sentence + format.note "note" output.check + fin.entry +} + +FUNCTION {default.type} { misc } +READ +FUNCTION {sortify} +{ purify$ + "l" change.case$ +} +INTEGERS { len } +FUNCTION {chop.word} +{ 's := + 'len := + s #1 len substring$ = + { s len #1 + global.max$ substring$ } + 's + if$ +} +FUNCTION {format.lab.names} +{ 's := + "" 't := + s #1 "{vv~}{ll}" format.name$ + s num.names$ duplicate$ + #2 > + { pop$ + " " * bbl.etal * + } + { #2 < + 'skip$ + { s #2 "{ff }{vv }{ll}{ jj}" format.name$ "others" = + { + " " * bbl.etal * + } + { bbl.and space.word * s #2 "{vv~}{ll}" format.name$ + * } + if$ + } + if$ + } + if$ +} + +FUNCTION {author.key.label} +{ author empty$ + { key empty$ + { cite$ #1 #3 substring$ } + 'key + if$ + } + { author format.lab.names } + if$ +} + +FUNCTION {author.editor.key.label} +{ author empty$ + { editor empty$ + { key empty$ + { cite$ #1 #3 substring$ } + 'key + if$ + } + { editor format.lab.names } + if$ + } + { author format.lab.names } + if$ +} + +FUNCTION {editor.key.label} +{ editor empty$ + { key empty$ + { cite$ #1 #3 substring$ } + 'key + if$ + } + { editor format.lab.names } + if$ +} + +FUNCTION {calc.short.authors} +{ type$ "book" = + type$ "inbook" = + or + 'author.editor.key.label + { type$ "proceedings" = + 'editor.key.label + 'author.key.label + if$ + } + if$ + 'short.list := +} + +FUNCTION {calc.label} +{ calc.short.authors + short.list + "(" + * + year duplicate$ empty$ + { pop$ "????" 
} + { purify$ #-1 #4 substring$ } + if$ + * + 'label := +} + +FUNCTION {sort.format.names} +{ 's := + #1 'nameptr := + "" + s num.names$ 'numnames := + numnames 'namesleft := + { namesleft #0 > } + { s nameptr + "{ll{ }}{ f{ }}{ jj{ }}" + format.name$ 't := + nameptr #1 > + { + " " * + namesleft #1 = t "others" = and + { "zzzzz" * } + { t sortify * } + if$ + } + { t sortify * } + if$ + nameptr #1 + 'nameptr := + namesleft #1 - 'namesleft := + } + while$ +} + +FUNCTION {sort.format.title} +{ 't := + "A " #2 + "An " #3 + "The " #4 t chop.word + chop.word + chop.word + sortify + #1 global.max$ substring$ +} +FUNCTION {author.sort} +{ author empty$ + { key empty$ + { "to sort, need author or key in " cite$ * warning$ + "" + } + { key sortify } + if$ + } + { author sort.format.names } + if$ +} +FUNCTION {author.editor.sort} +{ author empty$ + { editor empty$ + { key empty$ + { "to sort, need author, editor, or key in " cite$ * warning$ + "" + } + { key sortify } + if$ + } + { editor sort.format.names } + if$ + } + { author sort.format.names } + if$ +} +FUNCTION {editor.sort} +{ editor empty$ + { key empty$ + { "to sort, need editor or key in " cite$ * warning$ + "" + } + { key sortify } + if$ + } + { editor sort.format.names } + if$ +} +FUNCTION {presort} +{ calc.label + label sortify + " " + * + type$ "book" = + type$ "inbook" = + or + 'author.editor.sort + { type$ "proceedings" = + 'editor.sort + 'author.sort + if$ + } + if$ + #1 entry.max$ substring$ + 'sort.label := + sort.label + * + " " + * + title field.or.null + sort.format.title + * + #1 entry.max$ substring$ + 'sort.key$ := +} + +ITERATE {presort} +SORT +STRINGS { last.label next.extra } +INTEGERS { last.extra.num number.label } +FUNCTION {initialize.extra.label.stuff} +{ #0 int.to.chr$ 'last.label := + "" 'next.extra := + #0 'last.extra.num := + #0 'number.label := +} +FUNCTION {forward.pass} +{ last.label label = + { last.extra.num #1 + 'last.extra.num := + last.extra.num int.to.chr$ 'extra.label := + } + { "a" chr.to.int$ 'last.extra.num := + "" 'extra.label := + label 'last.label := + } + if$ + number.label #1 + 'number.label := +} +FUNCTION {reverse.pass} +{ next.extra "b" = + { "a" 'extra.label := } + 'skip$ + if$ + extra.label 'next.extra := + extra.label + duplicate$ empty$ + 'skip$ + { "{\natexlab{" swap$ * "}}" * } + if$ + 'extra.label := + label extra.label * 'label := +} +EXECUTE {initialize.extra.label.stuff} +ITERATE {forward.pass} +REVERSE {reverse.pass} +FUNCTION {bib.sort.order} +{ sort.label + " " + * + year field.or.null sortify + * + " " + * + title field.or.null + sort.format.title + * + #1 entry.max$ substring$ + 'sort.key$ := +} +ITERATE {bib.sort.order} +SORT +FUNCTION {begin.bib} +{ preamble$ empty$ + 'skip$ + { preamble$ write$ newline$ } + if$ + "\begin{small}\begin{thebibliography}{" number.label int.to.str$ * "}" * + write$ newline$ + "\expandafter\ifx\csname natexlab\endcsname\relax\def\natexlab#1{#1}\fi" + write$ newline$ + "\providecommand{\bibinfo}[2]{#2}" + write$ newline$ + "\ifx\xfnm\relax \def\xfnm[#1]{\unskip,\space#1}\fi" + write$ newline$ +} +EXECUTE {begin.bib} +EXECUTE {init.state.consts} +ITERATE {call.type$} +FUNCTION {end.bib} +{ newline$ + "\end{thebibliography}\end{small}" write$ newline$ +} +EXECUTE {end.bib} diff --git a/thys/Forcing/document/root.tex b/thys/Forcing/document/root.tex new file mode 100644 --- /dev/null +++ b/thys/Forcing/document/root.tex @@ -0,0 +1,110 @@ +\documentclass[11pt,a4paper]{article} +\usepackage{isabelle,isabellesym} +\usepackage[numbers]{natbib} + +% further 
packages required for unusual symbols (see also +% isabellesym.sty), use only when needed + +\usepackage{amssymb} + %for \<leadsto>, \<box>, \<diamond>, \<sqsupset>, \<mho>, \<Join>, + %\<lhd>, \<lesssim>, \<greatersim>, \<lessapprox>, \<greaterapprox>, + %\<triangleq>, \<yen>, \<lozenge> + +%\usepackage{eurosym} + %for \<euro> + +%\usepackage[only,bigsqcap]{stmaryrd} + %for \<Sqinter> + +%\usepackage{eufrak} + %for \<AA> ... \<ZZ>, \<aa> ... \<zz> (also included in amssymb) + +%\usepackage{textcomp} + %for \<onequarter>, \<onehalf>, \<threequarters>, \<degree>, \<cent>, + %\<currency> + +% this should be the last package used +\usepackage{pdfsetup} + +% urls in roman style, theory text in math-similar italics +\urlstyle{rm} +\isabellestyle{it} + +% for uniform font size +%\renewcommand{\isastyle}{\isastyleminor} + +\renewcommand{\isacharunderscorekeyword}{\mbox{\_}} +\renewcommand{\isacharunderscore}{\mbox{\_}} +\renewcommand{\isasymtturnstile}{\isamath{\Vdash}} +\renewcommand{\isacharminus}{-} +\newcommand{\axiomas}[1]{\mathit{#1}} +\newcommand{\ZFC}{\axiomas{ZFC}} + + +\begin{document} + +\title{Formalization of Forcing in Isabelle/ZF} +\author{Emmanuel Gunther\thanks{Universidad Nacional de C\'ordoba. + Facultad de Matem\'atica, Astronom\'{\i}a, F\'{\i}sica y + Computaci\'on.} + \and + Miguel Pagano\footnotemark[1] + \and + Pedro S\'anchez Terraf\footnotemark[1] \thanks{Centro de Investigaci\'on y Estudios de Matem\'atica + (CIEM-FaMAF), Conicet. C\'ordoba. Argentina. + Supported by Secyt-UNC project 33620180100465CB.} +} +\maketitle + +\begin{abstract} + We formalize the theory of forcing in the set theory framework of + Isabelle/ZF. Under the assumption of the existence of a countable + transitive model of $\ZFC$, we construct a proper generic extension and show + that the latter also satisfies $\ZFC$. +\end{abstract} + + +\tableofcontents + +% sane default for proof documents +\parindent 0pt\parskip 0.5ex + +\section{Introduction} +We formalize the theory of forcing. We work on top of the Isabelle/ZF +framework developed by \citet{DBLP:journals/jar/PaulsonG96}. Our +mechanization is described in more detail in our papers +\cite{2018arXiv180705174G} (LSFA 2018), \cite{2019arXiv190103313G}, +and \cite{2020arXiv200109715G} (IJCAR 2020). + +\subsection*{Release notes} +\label{sec:release-notes} + +We have improved several aspects of our development before submitting +it to the AFP: +\begin{enumerate} +\item Our session \isatt{Forcing} depends on the new release of + \isatt{ZF-Constructible}. +\item We streamlined the commands for synthesizing renames and formulas. +\item The command that synthesizes formulas also produces the corresponding lemmas + (namely, that the synthesized term is a formula, and that satisfaction of the + synthesized term is equivalent to the relativized term). +\item Consistent use of structured Isar proofs (except for one + coming from a schematic goal command). +\end{enumerate} + +A cross-linked HTML version of the development can be found at +\url{https://cs.famaf.unc.edu.ar/~pedro/forcing/}. 
+ +% generated text of all theories +\input{session} + +% optional bibliography +\bibliographystyle{root} +\bibliography{root} + +\end{document} + +%%% Local Variables: +%%% mode: latex +%%% TeX-master: t +%%% End: diff --git a/thys/Gaussian_Integers/Gaussian_Integers.thy b/thys/Gaussian_Integers/Gaussian_Integers.thy new file mode 100644 --- /dev/null +++ b/thys/Gaussian_Integers/Gaussian_Integers.thy @@ -0,0 +1,2376 @@ +(* + File: Gaussian_Integers.thy + Author: Manuel Eberl, TU München +*) +section \Gaussian Integers\ +theory Gaussian_Integers +imports + "HOL-Computational_Algebra.Computational_Algebra" + "HOL-Number_Theory.Number_Theory" +begin + +subsection \Auxiliary material\ + +lemma coprime_iff_prime_factors_disjoint: + fixes x y :: "'a :: factorial_semiring" + assumes "x \ 0" "y \ 0" + shows "coprime x y \ prime_factors x \ prime_factors y = {}" +proof + assume "coprime x y" + have False if "p \ prime_factors x" "p \ prime_factors y" for p + proof - + from that assms have "p dvd x" "p dvd y" + by (auto simp: prime_factors_dvd) + with \coprime x y\ have "p dvd 1" + using coprime_common_divisor by auto + with that assms show False by (auto simp: prime_factors_dvd) + qed + thus "prime_factors x \ prime_factors y = {}" by auto +next + assume disjoint: "prime_factors x \ prime_factors y = {}" + show "coprime x y" + proof (rule coprimeI) + fix d assume d: "d dvd x" "d dvd y" + show "is_unit d" + proof (rule ccontr) + assume "\is_unit d" + moreover from this and d assms have "d \ 0" by auto + ultimately obtain p where p: "prime p" "p dvd d" + using prime_divisor_exists by auto + with d and assms have "p \ prime_factors x \ prime_factors y" + by (auto simp: prime_factors_dvd) + with disjoint show False by auto + qed + qed +qed + +lemma product_dvd_irreducibleD: + fixes a b x :: "'a :: algebraic_semidom" + assumes "irreducible x" + assumes "a * b dvd x" + shows "a dvd 1 \ b dvd 1" +proof - + from assms obtain c where "x = a * b * c" + by auto + hence "x = a * (b * c)" + by (simp add: mult_ac) + from irreducibleD[OF assms(1) this] show "a dvd 1 \ b dvd 1" + by (auto simp: is_unit_mult_iff) +qed + +lemma prime_elem_mult_dvdI: + assumes "prime_elem p" "p dvd c" "b dvd c" "\p dvd b" + shows "p * b dvd c" +proof - + from assms(3) obtain a where c: "c = a * b" + using mult.commute by blast + with assms(2) have "p dvd a * b" + by simp + with assms have "p dvd a" + by (subst (asm) prime_elem_dvd_mult_iff) auto + with c show ?thesis by (auto intro: mult_dvd_mono) +qed + +lemma prime_elem_power_mult_dvdI: + fixes p :: "'a :: algebraic_semidom" + assumes "prime_elem p" "p ^ n dvd c" "b dvd c" "\p dvd b" + shows "p ^ n * b dvd c" +proof (cases "n = 0") + case False + from assms(3) obtain a where c: "c = a * b" + using mult.commute by blast + with assms(2) have "p ^ n dvd b * a" + by (simp add: mult_ac) + hence "p ^ n dvd a" + by (rule prime_power_dvd_multD[OF assms(1)]) (use assms False in auto) + with c show ?thesis by (auto intro: mult_dvd_mono) +qed (use assms in auto) + +lemma prime_mod_4_cases: + fixes p :: nat + assumes "prime p" + shows "p = 2 \ [p = 1] (mod 4) \ [p = 3] (mod 4)" +proof (cases "p = 2") + case False + with prime_gt_1_nat[of p] assms have "p > 2" by auto + have "\4 dvd p" + using assms product_dvd_irreducibleD[of p 2 2] + by (auto simp: prime_elem_iff_irreducible simp flip: prime_elem_nat_iff) + hence "p mod 4 \ 0" + by (auto simp: mod_eq_0_iff_dvd) + moreover have "p mod 4 \ 2" + proof + assume "p mod 4 = 2" + hence "p mod 4 mod 2 = 0" + by (simp add: cong_def) + thus False 
using \prime p\ \p > 2\ prime_odd_nat[of p] + by (auto simp: mod_mod_cancel) + qed + moreover have "p mod 4 \ {0,1,2,3}" + by auto + ultimately show ?thesis by (auto simp: cong_def) +qed auto + +lemma of_nat_prod_mset: "of_nat (prod_mset A) = prod_mset (image_mset of_nat A)" + by (induction A) auto + +lemma multiplicity_0_left [simp]: "multiplicity 0 x = 0" + by (cases "x = 0") (auto simp: not_dvd_imp_multiplicity_0) + +lemma is_unit_power [intro]: "is_unit x \ is_unit (x ^ n)" + by (subst is_unit_power_iff) auto + +lemma (in factorial_semiring) pow_divides_pow_iff: + assumes "n > 0" + shows "a ^ n dvd b ^ n \ a dvd b" +proof (cases "b = 0") + case False + show ?thesis + proof + assume dvd: "a ^ n dvd b ^ n" + with \b \ 0\ have "a \ 0" + using \n > 0\ by (auto simp: power_0_left) + show "a dvd b" + proof (rule multiplicity_le_imp_dvd) + fix p :: 'a assume p: "prime p" + from dvd \b \ 0\ have "multiplicity p (a ^ n) \ multiplicity p (b ^ n)" + by (intro dvd_imp_multiplicity_le) auto + thus "multiplicity p a \ multiplicity p b" + using p \a \ 0\ \b \ 0\ \n > 0\ by (simp add: prime_elem_multiplicity_power_distrib) + qed fact+ + qed (auto intro: dvd_power_same) +qed (use assms in \auto simp: power_0_left\) + +lemma multiplicity_power_power: + fixes p :: "'a :: {factorial_semiring, algebraic_semidom}" + assumes "n > 0" + shows "multiplicity (p ^ n) (x ^ n) = multiplicity p x" +proof (cases "x = 0 \ p = 0 \ is_unit p") + case True + thus ?thesis using \n > 0\ + by (auto simp: power_0_left is_unit_power_iff multiplicity_unit_left) +next + case False + show ?thesis + proof (intro antisym multiplicity_geI) + have "(p ^ multiplicity p x) ^ n dvd x ^ n" + by (intro dvd_power_same) (simp add: multiplicity_dvd) + thus "(p ^ n) ^ multiplicity p x dvd x ^ n" + by (simp add: mult_ac flip: power_mult) + next + have "(p ^ n) ^ multiplicity (p ^ n) (x ^ n) dvd x ^ n" + by (simp add: multiplicity_dvd) + hence "(p ^ multiplicity (p ^ n) (x ^ n)) ^ n dvd x ^ n" + by (simp add: mult_ac flip: power_mult) + thus "p ^ multiplicity (p ^ n) (x ^ n) dvd x" + by (subst (asm) pow_divides_pow_iff) (use assms in auto) + qed (use False \n > 0\ in \auto simp: is_unit_power_iff\) +qed + +lemma even_square_cong_4_int: + fixes x :: int + assumes "even x" + shows "[x ^ 2 = 0] (mod 4)" +proof - + from assms have "even \x\" + by simp + hence [simp]: "\x\ mod 2 = 0" + by presburger + have "(\x\ ^ 2) mod 4 = ((\x\ mod 4) ^ 2) mod 4" + by (simp add: power_mod) + also from assms have "\x\ mod 4 = 0 \ \x\ mod 4 = 2" + using mod_double_modulus[of 2 "\x\"] by simp + hence "((\x\ mod 4) ^ 2) mod 4 = 0" + by auto + finally show ?thesis by (simp add: cong_def) +qed + +lemma even_square_cong_4_nat: "even (x::nat) \ [x ^ 2 = 0] (mod 4)" + using even_square_cong_4_int[of "int x"] by (auto simp flip: cong_int_iff) + +lemma odd_square_cong_4_int: + fixes x :: int + assumes "odd x" + shows "[x ^ 2 = 1] (mod 4)" +proof - + from assms have "odd \x\" + by simp + hence [simp]: "\x\ mod 2 = 1" + by presburger + have "(\x\ ^ 2) mod 4 = ((\x\ mod 4) ^ 2) mod 4" + by (simp add: power_mod) + also from assms have "\x\ mod 4 = 1 \ \x\ mod 4 = 3" + using mod_double_modulus[of 2 "\x\"] by simp + hence "((\x\ mod 4) ^ 2) mod 4 = 1" + by auto + finally show ?thesis by (simp add: cong_def) +qed + +lemma odd_square_cong_4_nat: "odd (x::nat) \ [x ^ 2 = 1] (mod 4)" + using odd_square_cong_4_int[of "int x"] by (auto simp flip: cong_int_iff) + + +text \ + Gaussian integers will require a notion of an element being a power up to a unit, + so we introduce this here. 
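For instance, over $\mathbb{Z}$ the element $-4$ is not a perfect square, but multiplying by the unit $-1$ yields one,
\[ (-1)\cdot(-4) = 4 = 2^2, \]
so $-4$ is a square up to a unit in this sense.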
This should go in the library eventually. +\ +definition is_nth_power_upto_unit where + "is_nth_power_upto_unit n x \ (\u. is_unit u \ is_nth_power n (u * x))" + +lemma is_nth_power_upto_unit_base: "is_nth_power n x \ is_nth_power_upto_unit n x" + by (auto simp: is_nth_power_upto_unit_def intro: exI[of _ 1]) + +lemma is_nth_power_upto_unitI: + assumes "normalize (x ^ n) = normalize y" + shows "is_nth_power_upto_unit n y" +proof - + from associatedE1[OF assms] obtain u where "is_unit u" "u * y = x ^ n" + by metis + thus ?thesis + by (auto simp: is_nth_power_upto_unit_def intro!: exI[of _ u]) +qed + +lemma is_nth_power_upto_unit_conv_multiplicity: + fixes x :: "'a :: factorial_semiring" + assumes "n > 0" + shows "is_nth_power_upto_unit n x \ (\p. prime p \ n dvd multiplicity p x)" +proof (cases "x = 0") + case False + show ?thesis + proof safe + fix p :: 'a assume p: "prime p" + assume "is_nth_power_upto_unit n x" + then obtain u y where uy: "is_unit u" "u * x = y ^ n" + by (auto simp: is_nth_power_upto_unit_def elim!: is_nth_powerE) + from p uy assms False have [simp]: "y \ 0" by (auto simp: power_0_left) + have "multiplicity p (u * x) = multiplicity p (y ^ n)" + by (subst uy(2) [symmetric]) simp + also have "multiplicity p (u * x) = multiplicity p x" + by (simp add: multiplicity_times_unit_right uy(1)) + finally show "n dvd multiplicity p x" + using False and p and uy and assms + by (auto simp: prime_elem_multiplicity_power_distrib) + next + assume *: "\p. prime p \ n dvd multiplicity p x" + have "multiplicity p ((\p\prime_factors x. p ^ (multiplicity p x div n)) ^ n) = + multiplicity p x" if "prime p" for p + proof - + from that and * have "n dvd multiplicity p x" by blast + have "multiplicity p x = 0" if "p \ prime_factors x" + using that and \prime p\ by (simp add: prime_factors_multiplicity) + with that and * and assms show ?thesis unfolding prod_power_distrib power_mult [symmetric] + by (subst multiplicity_prod_prime_powers) (auto simp: in_prime_factors_imp_prime elim: dvdE) + qed + with assms False + have "normalize ((\p\prime_factors x. 
p ^ (multiplicity p x div n)) ^ n) = normalize x" + by (intro multiplicity_eq_imp_eq) (auto simp: multiplicity_prod_prime_powers) + thus "is_nth_power_upto_unit n x" + by (auto intro: is_nth_power_upto_unitI) + qed +qed (use assms in \auto simp: is_nth_power_upto_unit_def\) + +lemma is_nth_power_upto_unit_0_left [simp, intro]: "is_nth_power_upto_unit 0 x \ is_unit x" +proof + assume "is_unit x" + thus "is_nth_power_upto_unit 0 x" + unfolding is_nth_power_upto_unit_def by (intro exI[of _ "1 div x"]) auto +next + assume "is_nth_power_upto_unit 0 x" + then obtain u where "is_unit u" "u * x = 1" + by (auto simp: is_nth_power_upto_unit_def) + thus "is_unit x" + by (metis dvd_triv_right) +qed + +lemma is_nth_power_upto_unit_unit [simp, intro]: + assumes "is_unit x" + shows "is_nth_power_upto_unit n x" + using assms by (auto simp: is_nth_power_upto_unit_def intro!: exI[of _ "1 div x"]) + +lemma is_nth_power_upto_unit_1_left [simp, intro]: "is_nth_power_upto_unit 1 x" + by (auto simp: is_nth_power_upto_unit_def intro: exI[of _ 1]) + +lemma is_nth_power_upto_unit_mult_coprimeD1: + fixes x y :: "'a :: factorial_semiring" + assumes "coprime x y" "is_nth_power_upto_unit n (x * y)" + shows "is_nth_power_upto_unit n x" +proof - + consider "n = 0" | "x = 0" "n > 0" | "x \ 0" "y = 0" "n > 0" | "n > 0" "x \ 0" "y \ 0" + by force + thus ?thesis + proof cases + assume [simp]: "n = 0" + from assms have "is_unit (x * y)" + by auto + hence "is_unit x" + using is_unit_mult_iff by blast + thus ?thesis using assms by auto + next + assume "x = 0" "n > 0" + thus ?thesis by (auto simp: is_nth_power_upto_unit_def) + next + assume *: "x \ 0" "y = 0" "n > 0" + with assms show ?thesis by auto + next + assume *: "n > 0" and [simp]: "x \ 0" "y \ 0" + show ?thesis + proof (subst is_nth_power_upto_unit_conv_multiplicity[OF \n > 0\]; safe) + fix p :: 'a assume p: "prime p" + show "n dvd multiplicity p x" + proof (cases "p dvd x") + case False + thus ?thesis + by (simp add: not_dvd_imp_multiplicity_0) + next + case True + have "n dvd multiplicity p (x * y)" + using assms(2) \n > 0\ p by (auto simp: is_nth_power_upto_unit_conv_multiplicity) + also have "\ = multiplicity p x + multiplicity p y" + using p by (subst prime_elem_multiplicity_mult_distrib) auto + also have "\p dvd y" + using \coprime x y\ \p dvd x\ p not_prime_unit coprime_common_divisor by blast + hence "multiplicity p y = 0" + by (rule not_dvd_imp_multiplicity_0) + finally show ?thesis by simp + qed + qed + qed +qed + +lemma is_nth_power_upto_unit_mult_coprimeD2: + fixes x y :: "'a :: factorial_semiring" + assumes "coprime x y" "is_nth_power_upto_unit n (x * y)" + shows "is_nth_power_upto_unit n y" + using assms is_nth_power_upto_unit_mult_coprimeD1[of y x] + by (simp_all add: mult_ac coprime_commute) + + +subsection \Definition\ + +text \ + Gaussian integers are the ring $\mathbb{Z}[i]$ which is formed either by formally adjoining + an element $i$ with $i^2 = -1$ to $\mathbb{Z}$ or by taking all the complex numbers with + integer real and imaginary part. 
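Concretely, identifying a pair $(a, b) \in \mathbb{Z}^2$ with $a + bi$, addition is componentwise and multiplication follows the usual complex rule
\[ (a + bi)(c + di) = (ac - bd) + (ad + bc)\,i, \]
which is exactly the componentwise definition given below.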
+ + We define them simply by giving an appropriate ring structure to $\mathbb{Z}^2$, with the first + component representing the real part and the second component the imaginary part: +\ +codatatype gauss_int = Gauss_Int (ReZ: int) (ImZ: int) + +text \ + The following is the imaginary unit $i$ in the Gaussian integers, which we will denote as + \\\<^sub>\\: +\ +primcorec gauss_i where + "ReZ gauss_i = 0" +| "ImZ gauss_i = 1" + +lemma gauss_int_eq_iff: "x = y \ ReZ x = ReZ y \ ImZ x = ImZ y" + by (cases x; cases y) auto + + +(*<*) +bundle gauss_int_notation +begin + +notation gauss_i ("\\<^sub>\") + +end + +bundle no_gauss_int_notation +begin + +no_notation (output) gauss_i ("\\<^sub>\") + +end + +bundle gauss_int_output_notation +begin + +notation (output) gauss_i ("\") + +end + +unbundle gauss_int_notation +(*>*) + + +text \ + Next, we define the canonical injective homomorphism from the Gaussian integers into the + complex numbers: +\ +primcorec gauss2complex where + "Re (gauss2complex z) = of_int (ReZ z)" +| "Im (gauss2complex z) = of_int (ImZ z)" + +declare [[coercion gauss2complex]] + +lemma gauss2complex_eq_iff [simp]: "gauss2complex z = gauss2complex u \ z = u" + by (simp add: complex_eq_iff gauss_int_eq_iff) + +text \ + Gaussian integers also have conjugates, just like complex numbers: +\ +primcorec gauss_cnj where + "ReZ (gauss_cnj z) = ReZ z" +| "ImZ (gauss_cnj z) = -ImZ z" + + +text \ + In the remainder of this section, we prove that Gaussian integers are a commutative ring + of characteristic 0 and several other trivial algebraic properties. +\ + +instantiation gauss_int :: comm_ring_1 +begin + +primcorec zero_gauss_int where + "ReZ zero_gauss_int = 0" +| "ImZ zero_gauss_int = 0" + +primcorec one_gauss_int where + "ReZ one_gauss_int = 1" +| "ImZ one_gauss_int = 0" + +primcorec uminus_gauss_int where + "ReZ (uminus_gauss_int x) = -ReZ x" +| "ImZ (uminus_gauss_int x) = -ImZ x" + +primcorec plus_gauss_int where + "ReZ (plus_gauss_int x y) = ReZ x + ReZ y" +| "ImZ (plus_gauss_int x y) = ImZ x + ImZ y" + +primcorec minus_gauss_int where + "ReZ (minus_gauss_int x y) = ReZ x - ReZ y" +| "ImZ (minus_gauss_int x y) = ImZ x - ImZ y" + +primcorec times_gauss_int where + "ReZ (times_gauss_int x y) = ReZ x * ReZ y - ImZ x * ImZ y" +| "ImZ (times_gauss_int x y) = ReZ x * ImZ y + ImZ x * ReZ y" + +instance + by intro_classes (auto simp: gauss_int_eq_iff algebra_simps) + +end + +lemma gauss_i_times_i [simp]: "\\<^sub>\ * \\<^sub>\ = (-1 :: gauss_int)" + and gauss_cnj_i [simp]: "gauss_cnj \\<^sub>\ = -\\<^sub>\" + by (simp_all add: gauss_int_eq_iff) + +lemma gauss_cnj_eq_0_iff [simp]: "gauss_cnj z = 0 \ z = 0" + by (auto simp: gauss_int_eq_iff) + +lemma gauss_cnj_eq_self: "Im z = 0 \ gauss_cnj z = z" + and gauss_cnj_eq_minus_self: "Re z = 0 \ gauss_cnj z = -z" + by (auto simp: gauss_int_eq_iff) + +lemma ReZ_of_nat [simp]: "ReZ (of_nat n) = of_nat n" + and ImZ_of_nat [simp]: "ImZ (of_nat n) = 0" + by (induction n; simp)+ + +lemma ReZ_of_int [simp]: "ReZ (of_int n) = n" + and ImZ_of_int [simp]: "ImZ (of_int n) = 0" + by (induction n; simp)+ + +lemma ReZ_numeral [simp]: "ReZ (numeral n) = numeral n" + and ImZ_numeral [simp]: "ImZ (numeral n) = 0" + by (subst of_nat_numeral [symmetric], subst ReZ_of_nat ImZ_of_nat, simp)+ + +lemma gauss2complex_0 [simp]: "gauss2complex 0 = 0" + and gauss2complex_1 [simp]: "gauss2complex 1 = 1" + and gauss2complex_i [simp]: "gauss2complex \\<^sub>\ = \" + and gauss2complex_add [simp]: "gauss2complex (x + y) = gauss2complex x + gauss2complex y" + and 
gauss2complex_diff [simp]: "gauss2complex (x - y) = gauss2complex x - gauss2complex y" + and gauss2complex_mult [simp]: "gauss2complex (x * y) = gauss2complex x * gauss2complex y" + and gauss2complex_uminus [simp]: "gauss2complex (-x) = -gauss2complex x" + and gauss2complex_cnj [simp]: "gauss2complex (gauss_cnj x) = cnj (gauss2complex x)" + by (simp_all add: complex_eq_iff) + +lemma gauss2complex_of_nat [simp]: "gauss2complex (of_nat n) = of_nat n" + by (simp add: complex_eq_iff) + +lemma gauss2complex_eq_0_iff [simp]: "gauss2complex x = 0 \ x = 0" + and gauss2complex_eq_1_iff [simp]: "gauss2complex x = 1 \ x = 1" + and zero_eq_gauss2complex_iff [simp]: "0 = gauss2complex x \ x = 0" + and one_eq_gauss2complex_iff [simp]: "1 = gauss2complex x \ x = 1" + by (simp_all add: complex_eq_iff gauss_int_eq_iff) + +lemma gauss_i_times_gauss_i_times [simp]: "\\<^sub>\ * (\\<^sub>\ * x) = (-x :: gauss_int)" + by (subst mult.assoc [symmetric], subst gauss_i_times_i) auto + +lemma gauss_i_neq_0 [simp]: "\\<^sub>\ \ 0" "0 \ \\<^sub>\" + and gauss_i_neq_1 [simp]: "\\<^sub>\ \ 1" "1 \ \\<^sub>\" + and gauss_i_neq_of_nat [simp]: "\\<^sub>\ \ of_nat n" "of_nat n \ \\<^sub>\" + and gauss_i_neq_of_int [simp]: "\\<^sub>\ \ of_int n" "of_int n \ \\<^sub>\" + and gauss_i_neq_numeral [simp]: "\\<^sub>\ \ numeral m" "numeral m \ \\<^sub>\" + by (auto simp: gauss_int_eq_iff) + +lemma gauss_cnj_0 [simp]: "gauss_cnj 0 = 0" + and gauss_cnj_1 [simp]: "gauss_cnj 1 = 1" + and gauss_cnj_cnj [simp]: "gauss_cnj (gauss_cnj z) = z" + and gauss_cnj_uminus [simp]: "gauss_cnj (-a) = -gauss_cnj a" + and gauss_cnj_add [simp]: "gauss_cnj (a + b) = gauss_cnj a + gauss_cnj b" + and gauss_cnj_diff [simp]: "gauss_cnj (a - b) = gauss_cnj a - gauss_cnj b" + and gauss_cnj_mult [simp]: "gauss_cnj (a * b) = gauss_cnj a * gauss_cnj b" + and gauss_cnj_of_nat [simp]: "gauss_cnj (of_nat n1) = of_nat n1" + and gauss_cnj_of_int [simp]: "gauss_cnj (of_int n2) = of_int n2" + and gauss_cnj_numeral [simp]: "gauss_cnj (numeral n3) = numeral n3" + by (simp_all add: gauss_int_eq_iff) + +lemma gauss_cnj_power [simp]: "gauss_cnj (a ^ n) = gauss_cnj a ^ n" + by (induction n) auto + +lemma gauss_cnj_sum [simp]: "gauss_cnj (sum f A) = (\x\A. gauss_cnj (f x))" + by (induction A rule: infinite_finite_induct) auto + +lemma gauss_cnj_prod [simp]: "gauss_cnj (prod f A) = (\x\A. 
gauss_cnj (f x))" + by (induction A rule: infinite_finite_induct) auto + +lemma of_nat_dvd_of_nat: + assumes "a dvd b" + shows "of_nat a dvd (of_nat b :: 'a :: comm_semiring_1)" + using assms by auto + +lemma of_int_dvd_imp_dvd_gauss_cnj: + fixes z :: gauss_int + assumes "of_int n dvd z" + shows "of_int n dvd gauss_cnj z" +proof - + from assms obtain u where "z = of_int n * u" by blast + hence "gauss_cnj z = of_int n * gauss_cnj u" + by simp + thus ?thesis by auto +qed + +lemma of_nat_dvd_imp_dvd_gauss_cnj: + fixes z :: gauss_int + assumes "of_nat n dvd z" + shows "of_nat n dvd gauss_cnj z" + using of_int_dvd_imp_dvd_gauss_cnj[of "int n"] assms by simp + +lemma of_int_dvd_of_int_gauss_int_iff: + "(of_int m :: gauss_int) dvd of_int n \ m dvd n" +proof + assume "of_int m dvd (of_int n :: gauss_int)" + then obtain a :: gauss_int where "of_int n = of_int m * a" + by blast + thus "m dvd n" + by (auto simp: gauss_int_eq_iff) +qed auto + +lemma of_nat_dvd_of_nat_gauss_int_iff: + "(of_nat m :: gauss_int) dvd of_nat n \ m dvd n" + using of_int_dvd_of_int_gauss_int_iff[of "int m" "int n"] by simp + +lemma gauss_cnj_dvd: + assumes "a dvd b" + shows "gauss_cnj a dvd gauss_cnj b" +proof - + from assms obtain c where "b = a * c" + by blast + hence "gauss_cnj b = gauss_cnj a * gauss_cnj c" + by simp + thus ?thesis by auto +qed + +lemma gauss_cnj_dvd_iff: "gauss_cnj a dvd gauss_cnj b \ a dvd b" + using gauss_cnj_dvd[of a b] gauss_cnj_dvd[of "gauss_cnj a" "gauss_cnj b"] by auto + +lemma gauss_cnj_dvd_left_iff: "gauss_cnj a dvd b \ a dvd gauss_cnj b" + by (subst gauss_cnj_dvd_iff [symmetric]) auto + +lemma gauss_cnj_dvd_right_iff: "a dvd gauss_cnj b \ gauss_cnj a dvd b" + by (rule gauss_cnj_dvd_left_iff [symmetric]) + + +instance gauss_int :: idom +proof + fix z u :: gauss_int + assume "z \ 0" "u \ 0" + hence "gauss2complex z * gauss2complex u \ 0" + by simp + also have "gauss2complex z * gauss2complex u = gauss2complex (z * u)" + by simp + finally show "z * u \ 0" + unfolding gauss2complex_eq_0_iff . +qed + +instance gauss_int :: ring_char_0 + by intro_classes (auto intro!: injI simp: gauss_int_eq_iff) + + +subsection \Pretty-printing\ + +text \ + The following lemma collection provides better pretty-printing of Gaussian integers so that + e.g.\ evaluation with the `value' command produces nicer results. 
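For example, with these rules in place the result of evaluating the product $(2 + i)(2 - i)$ is displayed simply as the numeral $5$,
\[ (2+i)(2-i) = 4 - i^2 = 5, \]
rather than as a raw constructor term.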
+\ +lemma gauss_int_code_post [code_post]: + "Gauss_Int 0 0 = 0" + "Gauss_Int 0 1 = \\<^sub>\" + "Gauss_Int 0 (-1) = -\\<^sub>\" + "Gauss_Int 1 0 = 1" + "Gauss_Int 1 1 = 1 + \\<^sub>\" + "Gauss_Int 1 (-1) = 1 - \\<^sub>\" + "Gauss_Int (-1) 0 = -1" + "Gauss_Int (-1) 1 = -1 + \\<^sub>\" + "Gauss_Int (-1) (-1) = -1 - \\<^sub>\" + "Gauss_Int (numeral b) 0 = numeral b" + "Gauss_Int (-numeral b) 0 = -numeral b" + "Gauss_Int (numeral b) 1 = numeral b + \\<^sub>\" + "Gauss_Int (-numeral b) 1 = -numeral b + \\<^sub>\" + "Gauss_Int (numeral b) (-1) = numeral b - \\<^sub>\" + "Gauss_Int (-numeral b) (-1) = -numeral b - \\<^sub>\" + "Gauss_Int 0 (numeral b) = numeral b * \\<^sub>\" + "Gauss_Int 0 (-numeral b) = -numeral b * \\<^sub>\" + "Gauss_Int 1 (numeral b) = 1 + numeral b * \\<^sub>\" + "Gauss_Int 1 (-numeral b) = 1 - numeral b * \\<^sub>\" + "Gauss_Int (-1) (numeral b) = -1 + numeral b * \\<^sub>\" + "Gauss_Int (-1) (-numeral b) = -1 - numeral b * \\<^sub>\" + "Gauss_Int (numeral a) (numeral b) = numeral a + numeral b * \\<^sub>\" + "Gauss_Int (numeral a) (-numeral b) = numeral a - numeral b * \\<^sub>\" + "Gauss_Int (-numeral a) (numeral b) = -numeral a + numeral b * \\<^sub>\" + "Gauss_Int (-numeral a) (-numeral b) = -numeral a - numeral b * \\<^sub>\" + by (simp_all add: gauss_int_eq_iff) + +value "\\<^sub>\ ^ 3" +value "2 * (3 + \\<^sub>\)" +value "(2 + \\<^sub>\) * (2 - \\<^sub>\)" + + +subsection \Norm\ + +text \ + The square of the complex norm (or complex modulus) on the Gaussian integers gives us a norm + that always returns a natural number. We will later show that this is also a Euclidean norm + (in the sense of a Euclidean ring). +\ +definition gauss_int_norm :: "gauss_int \ nat" where + "gauss_int_norm z = nat (ReZ z ^ 2 + ImZ z ^ 2)" + +lemma gauss_int_norm_0 [simp]: "gauss_int_norm 0 = 0" + and gauss_int_norm_1 [simp]: "gauss_int_norm 1 = 1" + and gauss_int_norm_i [simp]: "gauss_int_norm \\<^sub>\ = 1" + and gauss_int_norm_cnj [simp]: "gauss_int_norm (gauss_cnj z) = gauss_int_norm z" + and gauss_int_norm_of_nat [simp]: "gauss_int_norm (of_nat n) = n ^ 2" + and gauss_int_norm_of_int [simp]: "gauss_int_norm (of_int m) = nat (m ^ 2)" + and gauss_int_norm_of_numeral [simp]: "gauss_int_norm (numeral n') = numeral (Num.sqr n')" + by (simp_all add: gauss_int_norm_def nat_power_eq) + +lemma gauss_int_norm_uminus [simp]: "gauss_int_norm (-z) = gauss_int_norm z" + by (simp add: gauss_int_norm_def) + +lemma gauss_int_norm_eq_0_iff [simp]: "gauss_int_norm z = 0 \ z = 0" +proof + assume "gauss_int_norm z = 0" + hence "ReZ z ^ 2 + ImZ z ^ 2 \ 0" + by (simp add: gauss_int_norm_def) + moreover have "ReZ z ^ 2 + ImZ z ^ 2 \ 0" + by simp + ultimately have "ReZ z ^ 2 + ImZ z ^ 2 = 0" + by linarith + thus "z = 0" + by (auto simp: gauss_int_eq_iff) +qed auto + +lemma gauss_int_norm_pos_iff [simp]: "gauss_int_norm z > 0 \ z \ 0" + using gauss_int_norm_eq_0_iff[of z] by (auto intro: Nat.gr0I) + +lemma real_gauss_int_norm: "real (gauss_int_norm z) = norm (gauss2complex z) ^ 2" + by (auto simp: cmod_def gauss_int_norm_def) + +lemma gauss_int_norm_mult: "gauss_int_norm (z * u) = gauss_int_norm z * gauss_int_norm u" +proof - + have "real (gauss_int_norm (z * u)) = real (gauss_int_norm z * gauss_int_norm u)" + unfolding of_nat_mult by (simp add: real_gauss_int_norm norm_power norm_mult power_mult_distrib) + thus ?thesis by (subst (asm) of_nat_eq_iff) +qed + +lemma self_mult_gauss_cnj: "z * gauss_cnj z = of_nat (gauss_int_norm z)" + by (simp add: gauss_int_norm_def gauss_int_eq_iff algebra_simps 
power2_eq_square) + +lemma gauss_cnj_mult_self: "gauss_cnj z * z = of_nat (gauss_int_norm z)" + by (subst mult.commute, rule self_mult_gauss_cnj) + +lemma self_plus_gauss_cnj: "z + gauss_cnj z = of_int (2 * ReZ z)" + and self_minus_gauss_cnj: "z - gauss_cnj z = of_int (2 * ImZ z) * \\<^sub>\" + by (auto simp: gauss_int_eq_iff) + +lemma gauss_int_norm_dvd_mono: + assumes "a dvd b" + shows "gauss_int_norm a dvd gauss_int_norm b" +proof - + from assms obtain c where "b = a * c" by blast + hence "gauss_int_norm b = gauss_int_norm (a * c)" + by metis + thus ?thesis by (simp add: gauss_int_norm_mult) +qed + +text \ + A Gaussian integer is a unit iff its norm is 1, and this is the case precisely for the four + elements \\1\ and \\\\: +\ +lemma is_unit_gauss_int_iff: "x dvd 1 \ x \ {1, -1, \\<^sub>\, -\\<^sub>\ :: gauss_int}" + and is_unit_gauss_int_iff': "x dvd 1 \ gauss_int_norm x = 1" +proof - + have "x dvd 1" if "x \ {1, -1, \\<^sub>\, -\\<^sub>\}" + proof - + from that have *: "x * gauss_cnj x = 1" + by (auto simp: gauss_int_norm_def) + show "x dvd 1" by (subst * [symmetric]) simp + qed + moreover have "gauss_int_norm x = 1" if "x dvd 1" + using gauss_int_norm_dvd_mono[OF that] by simp + moreover have "x \ {1, -1, \\<^sub>\, -\\<^sub>\}" if "gauss_int_norm x = 1" + proof - + from that have *: "(ReZ x)\<^sup>2 + (ImZ x)\<^sup>2 = 1" + by (auto simp: gauss_int_norm_def nat_eq_iff) + hence "ReZ x ^ 2 \ 1" and "ImZ x ^ 2 \ 1" + using zero_le_power2[of "ImZ x"] zero_le_power2[of "ReZ x"] by linarith+ + hence "\ReZ x\ \ 1" and "\ImZ x\ \ 1" + by (auto simp: abs_square_le_1) + hence "ReZ x \ {-1, 0, 1}" and "ImZ x \ {-1, 0, 1}" + by auto + thus "x \ {1, -1, \\<^sub>\, -\\<^sub>\ :: gauss_int}" + using * by (auto simp: gauss_int_eq_iff) + qed + ultimately show "x dvd 1 \ x \ {1, -1, \\<^sub>\, -\\<^sub>\ :: gauss_int}" + and "x dvd 1 \ gauss_int_norm x = 1" + by blast+ +qed + +lemma is_unit_gauss_i [simp, intro]: "(gauss_i :: gauss_int) dvd 1" + by (simp add: is_unit_gauss_int_iff) + +lemma gauss_int_norm_eq_Suc_0_iff: "gauss_int_norm x = Suc 0 \ x dvd 1" + by (simp add: is_unit_gauss_int_iff') + +lemma is_unit_gauss_cnj [intro]: "z dvd 1 \ gauss_cnj z dvd 1" + by (simp add: is_unit_gauss_int_iff') + +lemma is_unit_gauss_cnj_iff [simp]: "gauss_cnj z dvd 1 \ z dvd 1" + by (simp add: is_unit_gauss_int_iff') + + +subsection \Division and normalisation\ + +text \ + We define a rounding operation that takes a complex number and returns a Gaussian integer + by rounding the real and imaginary parts separately: +\ +primcorec round_complex :: "complex \ gauss_int" where + "ReZ (round_complex z) = round (Re z)" +| "ImZ (round_complex z) = round (Im z)" + +text \ + The distance between a rounded complex number and the original one is no more than + $\frac{1}{2}\sqrt{2}$: +\ +lemma norm_round_complex_le: "norm (z - gauss2complex (round_complex z)) ^ 2 \ 1 / 2" +proof - + have "(Re z - ReZ (round_complex z)) ^ 2 \ (1 / 2) ^ 2" + using of_int_round_abs_le[of "Re z"] + by (subst abs_le_square_iff [symmetric]) (auto simp: abs_minus_commute) + moreover have "(Im z - ImZ (round_complex z)) ^ 2 \ (1 / 2) ^ 2" + using of_int_round_abs_le[of "Im z"] + by (subst abs_le_square_iff [symmetric]) (auto simp: abs_minus_commute) + ultimately have "(Re z - ReZ (round_complex z)) ^ 2 + (Im z - ImZ (round_complex z)) ^ 2 \ + (1 / 2) ^ 2 + (1 / 2) ^ 2" + by (rule add_mono) + thus "norm (z - gauss2complex (round_complex z)) ^ 2 \ 1 / 2" + by (simp add: cmod_def power2_eq_square) +qed + +lemma dist_round_complex_le: "dist z 
(gauss2complex (round_complex z)) \ sqrt 2 / 2" +proof - + have "dist z (gauss2complex (round_complex z)) ^ 2 = + norm (z - gauss2complex (round_complex z)) ^ 2" + by (simp add: dist_norm) + also have "\ \ 1 / 2" + by (rule norm_round_complex_le) + also have "\ = (sqrt 2 / 2) ^ 2" + by (simp add: power2_eq_square) + finally show ?thesis + by (rule power2_le_imp_le) auto +qed + + +text \ + We can now define division on Gaussian integers simply by performing the division in the + complex numbers and rounding the result. This also gives us a remainder operation defined + accordingly for which the norm of the remainder is always smaller than the norm of the divisor. + + We can also define a normalisation operation that returns a canonical representative for each + association class. Since the four units of the Gaussian integers are \\1\ and \\\\, each + association class (other than \0\) has four representatives, one in each quadrant. We simply + define the on in the upper-right quadrant (i.e.\ the one with non-negative imaginary part + and positive real part) as the canonical one. + + Thus, the Gaussian integers form a Euclidean ring. This gives us many things, most importantly + the existence of GCDs and LCMs and unique factorisation. +\ +instantiation gauss_int :: algebraic_semidom +begin + +definition divide_gauss_int :: "gauss_int \ gauss_int \ gauss_int" where + "divide_gauss_int a b = round_complex (gauss2complex a / gauss2complex b)" + +instance proof + fix a :: gauss_int + show "a div 0 = 0" + by (auto simp: gauss_int_eq_iff divide_gauss_int_def) +next + fix a b :: gauss_int assume "b \ 0" + thus "a * b div b = a" + by (auto simp: gauss_int_eq_iff divide_gauss_int_def) +qed + +end + +instantiation gauss_int :: semidom_divide_unit_factor +begin + +definition unit_factor_gauss_int :: "gauss_int \ gauss_int" where + "unit_factor_gauss_int z = + (if z = 0 then 0 else + if ImZ z \ 0 \ ReZ z > 0 then 1 + else if ReZ z \ 0 \ ImZ z > 0 then \\<^sub>\ + else if ImZ z \ 0 \ ReZ z < 0 then -1 + else -\\<^sub>\)" + +instance proof + show "unit_factor (0 :: gauss_int) = 0" + by (simp add: unit_factor_gauss_int_def) +next + fix z :: gauss_int + assume "is_unit z" + thus "unit_factor z = z" + by (subst (asm) is_unit_gauss_int_iff) (auto simp: unit_factor_gauss_int_def) +next + fix z :: gauss_int + assume z: "z \ 0" + thus "is_unit (unit_factor z)" + by (subst is_unit_gauss_int_iff) (auto simp: unit_factor_gauss_int_def) +next + fix z u :: gauss_int + assume "is_unit z" + hence "z \ {1, -1, \\<^sub>\, -\\<^sub>\}" + by (subst (asm) is_unit_gauss_int_iff) + thus "unit_factor (z * u) = z * unit_factor u" + by (safe; auto simp: unit_factor_gauss_int_def gauss_int_eq_iff[of u 0]) +qed + +end + +instantiation gauss_int :: normalization_semidom +begin + +definition normalize_gauss_int :: "gauss_int \ gauss_int" where + "normalize_gauss_int z = + (if z = 0 then 0 else + if ImZ z \ 0 \ ReZ z > 0 then z + else if ReZ z \ 0 \ ImZ z > 0 then -\\<^sub>\ * z + else if ImZ z \ 0 \ ReZ z < 0 then -z + else \\<^sub>\ * z)" + +instance proof + show "normalize (0 :: gauss_int) = 0" + by (simp add: normalize_gauss_int_def) +next + fix z :: gauss_int + show "unit_factor z * normalize z = z" + by (auto simp: normalize_gauss_int_def unit_factor_gauss_int_def algebra_simps) +qed + +end + +lemma normalize_gauss_int_of_nat [simp]: "normalize (of_nat n :: gauss_int) = of_nat n" + and normalize_gauss_int_of_int [simp]: "normalize (of_int m :: gauss_int) = of_int \m\" + and normalize_gauss_int_of_numeral [simp]: "normalize 
(numeral n' :: gauss_int) = numeral n'" + by (auto simp: normalize_gauss_int_def) + +lemma normalize_gauss_i [simp]: "normalize \\<^sub>\ = 1" + by (simp add: normalize_gauss_int_def) + +lemma gauss_int_norm_normalize [simp]: "gauss_int_norm (normalize x) = gauss_int_norm x" + by (simp add: normalize_gauss_int_def gauss_int_norm_mult) + +lemma normalized_gauss_int: + assumes "normalize z = z" + shows "ReZ z \ 0" "ImZ z \ 0" + using assms + by (cases "ReZ z" "0 :: int" rule: linorder_cases; + cases "ImZ z" "0 :: int" rule: linorder_cases; + simp add: normalize_gauss_int_def gauss_int_eq_iff)+ + +lemma normalized_gauss_int': + assumes "normalize z = z" "z \ 0" + shows "ReZ z > 0" "ImZ z \ 0" + using assms + by (cases "ReZ z" "0 :: int" rule: linorder_cases; + cases "ImZ z" "0 :: int" rule: linorder_cases; + simp add: normalize_gauss_int_def gauss_int_eq_iff)+ + +lemma normalized_gauss_int_iff: + "normalize z = z \ z = 0 \ ReZ z > 0 \ ImZ z \ 0" + by (cases "ReZ z" "0 :: int" rule: linorder_cases; + cases "ImZ z" "0 :: int" rule: linorder_cases; + simp add: normalize_gauss_int_def gauss_int_eq_iff)+ + +instantiation gauss_int :: idom_modulo +begin + +definition modulo_gauss_int :: "gauss_int \ gauss_int \ gauss_int" where + "modulo_gauss_int a b = a - a div b * b" + +instance proof + fix a b :: gauss_int + show "a div b * b + a mod b = a" + by (simp add: modulo_gauss_int_def) +qed + +end + +lemma gauss_int_norm_mod_less_aux: + assumes [simp]: "b \ 0" + shows "2 * gauss_int_norm (a mod b) \ gauss_int_norm b" +proof - + define a' b' where "a' = gauss2complex a" and "b' = gauss2complex b" + have [simp]: "b' \ 0" by (simp add: b'_def) + have "gauss_int_norm (a mod b) = + norm (gauss2complex (a - round_complex (a' / b') * b)) ^ 2" + unfolding modulo_gauss_int_def + by (subst real_gauss_int_norm [symmetric]) (auto simp add: divide_gauss_int_def a'_def b'_def) + also have "gauss2complex (a - round_complex (a' / b') * b) = + a' - gauss2complex (round_complex (a' / b')) * b'" + by (simp add: a'_def b'_def) + also have "\ = (a' / b' - gauss2complex (round_complex (a' / b'))) * b'" + by (simp add: field_simps) + also have "norm \ ^ 2 = norm (a' / b' - gauss2complex (round_complex (a' / b'))) ^ 2 * norm b' ^ 2" + by (simp add: norm_mult power_mult_distrib) + also have "\ \ 1 / 2 * norm b' ^ 2" + by (intro mult_right_mono norm_round_complex_le) auto + also have "norm b' ^ 2 = gauss_int_norm b" + by (simp add: b'_def real_gauss_int_norm) + finally show ?thesis by linarith +qed + +lemma gauss_int_norm_mod_less: + assumes [simp]: "b \ 0" + shows "gauss_int_norm (a mod b) < gauss_int_norm b" +proof - + have "gauss_int_norm b > 0" by simp + thus "gauss_int_norm (a mod b) < gauss_int_norm b" + using gauss_int_norm_mod_less_aux[OF assms, of a] by presburger +qed + +lemma gauss_int_norm_dvd_imp_le: + assumes "b \ 0" + shows "gauss_int_norm a \ gauss_int_norm (a * b)" +proof (cases "a = 0") + case False + thus ?thesis using assms by (intro dvd_imp_le gauss_int_norm_dvd_mono) auto +qed auto + +instantiation gauss_int :: euclidean_ring +begin + +definition euclidean_size_gauss_int :: "gauss_int \ nat" where + [simp]: "euclidean_size_gauss_int = gauss_int_norm" + +instance proof + show "euclidean_size (0 :: gauss_int) = 0" + by simp +next + fix a b :: gauss_int assume [simp]: "b \ 0" + show "euclidean_size (a mod b) < euclidean_size b" + using gauss_int_norm_mod_less[of b a] by simp + show "euclidean_size a \ euclidean_size (a * b)" + by (simp add: gauss_int_norm_dvd_imp_le) +qed + +end + +instance gauss_int :: 
normalization_euclidean_semiring .. + +instantiation gauss_int :: euclidean_ring_gcd +begin + +definition gcd_gauss_int :: "gauss_int \ gauss_int \ gauss_int" where + "gcd_gauss_int \ normalization_euclidean_semiring_class.gcd" +definition lcm_gauss_int :: "gauss_int \ gauss_int \ gauss_int" where + "lcm_gauss_int \ normalization_euclidean_semiring_class.lcm" +definition Gcd_gauss_int :: "gauss_int set \ gauss_int" where + "Gcd_gauss_int \ normalization_euclidean_semiring_class.Gcd" +definition Lcm_gauss_int :: "gauss_int set \ gauss_int" where + "Lcm_gauss_int \ normalization_euclidean_semiring_class.Lcm" + +instance + by intro_classes + (simp_all add: gcd_gauss_int_def lcm_gauss_int_def Gcd_gauss_int_def Lcm_gauss_int_def) + +end + +lemma multiplicity_gauss_cnj: "multiplicity (gauss_cnj a) (gauss_cnj b) = multiplicity a b" + unfolding multiplicity_def gauss_cnj_power [symmetric] gauss_cnj_dvd_iff .. + +lemma multiplicity_gauss_int_of_nat: + "multiplicity (of_nat a) (of_nat b :: gauss_int) = multiplicity a b" + unfolding multiplicity_def of_nat_power [symmetric] of_nat_dvd_of_nat_gauss_int_iff .. + +lemma gauss_int_dvd_same_norm_imp_associated: + assumes "z1 dvd z2" "gauss_int_norm z1 = gauss_int_norm z2" + shows "normalize z1 = normalize z2" +proof (cases "z1 = 0") + case [simp]: False + from assms(1) obtain u where u: "z2 = z1 * u" by blast + from assms have "gauss_int_norm u = 1" + by (auto simp: gauss_int_norm_mult u) + hence "is_unit u" + by (simp add: is_unit_gauss_int_iff') + with u show ?thesis by simp +qed (use assms in auto) + +lemma gcd_of_int_gauss_int: "gcd (of_int a :: gauss_int) (of_int b) = of_int (gcd a b)" +proof (induction "nat \b\" arbitrary: a b rule: less_induct) + case (less b a) + show ?case + proof (cases "b = 0") + case False + have "of_int (gcd a b) = (of_int (gcd b (a mod b)) :: gauss_int)" + by (subst gcd_red_int) auto + also have "\ = gcd (of_int b) (of_int (a mod b))" + using False by (intro less [symmetric]) (auto intro!: abs_mod_less) + also have "a mod b = (a - a div b * b)" + by (simp add: minus_div_mult_eq_mod) + also have "of_int \ = of_int (-(a div b)) * of_int b + (of_int a :: gauss_int)" + by (simp add: algebra_simps) + also have "gcd (of_int b) \ = gcd (of_int b) (of_int a)" + by (rule gcd_add_mult) + finally show ?thesis by (simp add: gcd.commute) + qed auto +qed + +lemma coprime_of_int_gauss_int: "coprime (of_int a :: gauss_int) (of_int b) = coprime a b" + unfolding coprime_iff_gcd_eq_1 gcd_of_int_gauss_int by auto + +lemma gcd_of_nat_gauss_int: "gcd (of_nat a :: gauss_int) (of_nat b) = of_nat (gcd a b)" + using gcd_of_int_gauss_int[of "int a" "int b"] by simp + +lemma coprime_of_nat_gauss_int: "coprime (of_nat a :: gauss_int) (of_nat b) = coprime a b" + unfolding coprime_iff_gcd_eq_1 gcd_of_nat_gauss_int by auto + +lemma gauss_cnj_dvd_self_iff: "gauss_cnj z dvd z \ ReZ z = 0 \ ImZ z = 0 \ \ReZ z\ = \ImZ z\" +proof + assume "gauss_cnj z dvd z" + hence "normalize (gauss_cnj z) = normalize z" + by (rule gauss_int_dvd_same_norm_imp_associated) auto + then obtain u :: gauss_int where "is_unit u" and u: "gauss_cnj z = u * z" + using associatedE1 by blast + hence "u \ {1, -1, \\<^sub>\, -\\<^sub>\}" + by (simp add: is_unit_gauss_int_iff) + thus "ReZ z = 0 \ ImZ z = 0 \ \ReZ z\ = \ImZ z\" + proof (elim insertE emptyE) + assume [simp]: "u = \\<^sub>\" + have "ReZ z = ReZ (gauss_cnj z)" + by simp + also have "gauss_cnj z = \\<^sub>\ * z" + using u by simp + also have "ReZ \ = -ImZ z" + by simp + finally show "ReZ z = 0 \ ImZ z = 0 \ \ReZ z\ = \ImZ z\" + 
by auto + next + assume [simp]: "u = -\\<^sub>\" + have "ReZ z = ReZ (gauss_cnj z)" + by simp + also have "gauss_cnj z = -\\<^sub>\ * z" + using u by simp + also have "ReZ \ = ImZ z" + by simp + finally show "ReZ z = 0 \ ImZ z = 0 \ \ReZ z\ = \ImZ z\" + by auto + next + assume [simp]: "u = 1" + have "ImZ z = -ImZ (gauss_cnj z)" + by simp + also have "gauss_cnj z = z" + using u by simp + finally show "ReZ z = 0 \ ImZ z = 0 \ \ReZ z\ = \ImZ z\" + by auto + next + assume [simp]: "u = -1" + have "ReZ z = ReZ (gauss_cnj z)" + by simp + also have "gauss_cnj z = -z" + using u by simp + also have "ReZ \ = -ReZ z" + by simp + finally show "ReZ z = 0 \ ImZ z = 0 \ \ReZ z\ = \ImZ z\" + by auto + qed +next + assume "ReZ z = 0 \ ImZ z = 0 \ \ReZ z\ = \ImZ z\" + thus "gauss_cnj z dvd z" + proof safe + assume "\ReZ z\ = \ImZ z\" + then obtain u :: int where "is_unit u" and u: "ImZ z = u * ReZ z" + using associatedE2[of "ReZ z" "ImZ z"] by auto + from \is_unit u\ have "u \ {1, -1}" + by auto + hence "z = gauss_cnj z * (of_int u * \\<^sub>\)" + using u by (auto simp: gauss_int_eq_iff) + thus ?thesis + by (metis dvd_triv_left) + qed (auto simp: gauss_cnj_eq_self gauss_cnj_eq_minus_self) +qed + +lemma self_dvd_gauss_cnj_iff: "z dvd gauss_cnj z \ ReZ z = 0 \ ImZ z = 0 \ \ReZ z\ = \ImZ z\" + using gauss_cnj_dvd_self_iff[of z] by (subst (asm) gauss_cnj_dvd_left_iff) auto + + +subsection \Prime elements\ + +text \ + Next, we analyse what the prime elements of the Gaussian integers are. First, note that + according to the conventions of Isabelle's computational algebra library, a prime element + is called a prime iff it is also normalised, i.e.\ in our case it lies in the upper right + quadrant. + + As a first fact, we can show that a Gaussian integer whose norm is \\\-prime must be + $\mathbb{Z}[i]$-prime: +\ + +lemma prime_gauss_int_norm_imp_prime_elem: + assumes "prime (gauss_int_norm q)" + shows "prime_elem q" +proof - + have "irreducible q" + proof (rule irreducibleI) + fix a b assume "q = a * b" + hence "gauss_int_norm q = gauss_int_norm a * gauss_int_norm b" + by (simp_all add: gauss_int_norm_mult) + thus "is_unit a \ is_unit b" + using assms by (auto dest!: prime_product simp: gauss_int_norm_eq_Suc_0_iff) + qed (use assms in \auto simp: is_unit_gauss_int_iff'\) + thus "prime_elem q" + using irreducible_imp_prime_elem_gcd by blast +qed + +text \ + Also, a conjugate is a prime element iff the original element is a prime element: +\ +lemma prime_elem_gauss_cnj [intro]: "prime_elem z \ prime_elem (gauss_cnj z)" + by (auto simp: prime_elem_def gauss_cnj_dvd_left_iff) + +lemma prime_elem_gauss_cnj_iff [simp]: "prime_elem (gauss_cnj z) \ prime_elem z" + using prime_elem_gauss_cnj[of z] prime_elem_gauss_cnj[of "gauss_cnj z"] by auto + + +subsubsection \The factorisation of 2\ + +text \ + 2 factors as $-i (1 + i)^2$ in the Gaussian integers, where $-i$ is a unit and + $1 + i$ is prime. 
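Indeed, a short calculation verifies this factorisation:
\[ (1+i)^2 = 1 + 2i + i^2 = 2i, \qquad -i\,(1+i)^2 = -i \cdot 2i = -2i^2 = 2. \]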
+\ + +lemma gauss_int_2_eq: "2 = -\\<^sub>\ * (1 + \\<^sub>\) ^ 2" + by (simp add: gauss_int_eq_iff power2_eq_square) + +lemma prime_elem_one_plus_i_gauss_int: "prime_elem (1 + \\<^sub>\)" + by (rule prime_gauss_int_norm_imp_prime_elem) (auto simp: gauss_int_norm_def) + +lemma prime_one_plus_i_gauss_int: "prime (1 + \\<^sub>\)" + by (simp add: prime_def prime_elem_one_plus_i_gauss_int + gauss_int_eq_iff normalize_gauss_int_def) + +lemma prime_factorization_2_gauss_int: + "prime_factorization (2 :: gauss_int) = {#1 + \\<^sub>\, 1 + \\<^sub>\#}" +proof - + have "prime_factorization (2 :: gauss_int) = + (prime_factorization (prod_mset {#1 + gauss_i, 1 + gauss_i#}))" + by (subst prime_factorization_unique) (auto simp: gauss_int_eq_iff normalize_gauss_int_def) + also have "prime_factorization (prod_mset {#1 + gauss_i, 1 + gauss_i#}) = + {#1 + gauss_i, 1 + gauss_i#}" + using prime_one_plus_i_gauss_int by (subst prime_factorization_prod_mset_primes) auto + finally show ?thesis . +qed + + +subsubsection \Inert primes\ + +text \ + Any \\\-prime congruent 3 modulo 4 is also a Gaussian prime. These primes are called + \<^emph>\inert\, because they do not decompose when moving from \\\ to $\mathbb{Z}[i]$. +\ + +lemma gauss_int_norm_not_3_mod_4: "[gauss_int_norm z \ 3] (mod 4)" +proof - + have A: "ReZ z mod 4 \ {0..3}" "ImZ z mod 4 \ {0..3}" by auto + have B: "{0..3} = {0, 1, 2, 3 :: int}" by auto + + have "[ReZ z ^ 2 + ImZ z ^ 2 = (ReZ z mod 4) ^ 2 + (ImZ z mod 4) ^ 2] (mod 4)" + by (intro cong_add cong_pow) (auto simp: cong_def) + moreover have "((ReZ z mod 4) ^ 2 + (ImZ z mod 4) ^ 2) mod 4 \ 3 mod 4" + using A unfolding B by auto + ultimately have "[ReZ z ^ 2 + ImZ z ^ 2 \ 3] (mod 4)" + unfolding cong_def by metis + hence "[int (nat (ReZ z ^ 2 + ImZ z ^ 2)) \ int 3] (mod (int 4))" + by simp + thus ?thesis unfolding gauss_int_norm_def + by (subst (asm) cong_int_iff) +qed + +lemma prime_elem_gauss_int_of_nat: + fixes n :: nat + assumes prime: "prime n" and "[n = 3] (mod 4)" + shows "prime_elem (of_nat n :: gauss_int)" +proof (intro irreducible_imp_prime_elem irreducibleI) + from assms show "of_nat n \ (0 :: gauss_int)" + by (auto simp: gauss_int_eq_iff) +next + show "\is_unit (of_nat n :: gauss_int)" + using assms by (subst is_unit_gauss_int_iff) (auto simp: gauss_int_eq_iff) +next + fix a b :: gauss_int + assume *: "of_nat n = a * b" + hence "gauss_int_norm (a * b) = gauss_int_norm (of_nat n)" + by metis + hence *: "gauss_int_norm a * gauss_int_norm b = n ^ 2" + by (simp add: gauss_int_norm_mult power2_eq_square flip: nat_mult_distrib) + from prime_power_mult_nat[OF prime this] obtain i j :: nat + where ij: "gauss_int_norm a = n ^ i" "gauss_int_norm b = n ^ j" by blast + + have "i + j = 2" + proof - + have "n ^ (i + j) = n ^ 2" + using ij * by (simp add: power_add) + from prime_power_inj[OF prime this] show ?thesis by simp + qed + hence "i = 0 \ j = 2 \ i = 1 \ j = 1 \ i = 2 \ j = 0" + by auto + thus "is_unit a \ is_unit b" + proof (elim disjE) + assume "i = 1 \ j = 1" + with ij have "gauss_int_norm a = n" + by auto + hence "[gauss_int_norm a = n] (mod 4)" + by simp + also have "[n = 3] (mod 4)" by fact + finally have "[gauss_int_norm a = 3] (mod 4)" . 
+ moreover have "[gauss_int_norm a \ 3] (mod 4)" + by (rule gauss_int_norm_not_3_mod_4) + ultimately show ?thesis by contradiction + qed (use ij in \auto simp: is_unit_gauss_int_iff'\) +qed + +theorem prime_gauss_int_of_nat: + fixes n :: nat + assumes prime: "prime n" and "[n = 3] (mod 4)" + shows "prime (of_nat n :: gauss_int)" + using prime_elem_gauss_int_of_nat[OF assms] + unfolding prime_def by simp + + +subsubsection \Non-inert primes\ + +text \ + Any \\\-prime congruent 1 modulo 4 factors into two conjugate Gaussian primes. +\ + +lemma minimal_QuadRes_neg1: + assumes "QuadRes n (-1)" "n > 1" "odd n" + obtains x :: nat where "x \ (n - 1) div 2" and "[x ^ 2 + 1 = 0] (mod n)" +proof - + from \QuadRes n (-1)\ obtain x where "[x ^ 2 = (-1)] (mod (int n))" + by (auto simp: QuadRes_def) + hence "[x ^ 2 + 1 = -1 + 1] (mod (int n))" + by (intro cong_add) auto + also have "x ^ 2 + 1 = int (nat \x\ ^ 2 + 1)" + by simp + finally have "[int (nat \x\ ^ 2 + 1) = int 0] (mod (int n))" + by simp + hence "[nat \x\ ^ 2 + 1 = 0] (mod n)" + by (subst (asm) cong_int_iff) + + define x' where + "x' = (if nat \x\ mod n \ (n - 1) div 2 then nat \x\ mod n else n - (nat \x\ mod n))" + have x'_quadres: "[x' ^ 2 + 1 = 0] (mod n)" + proof (cases "nat \x\ mod n \ (n - 1) div 2") + case True + hence "[x' ^ 2 + 1 = (nat \x\ mod n) ^ 2 + 1] (mod n)" + by (simp add: x'_def) + also have "[(nat \x\ mod n) ^ 2 + 1 = nat \x\ ^ 2 + 1] (mod n)" + by (intro cong_add cong_pow) (auto simp: cong_def) + also have "[nat \x\ ^ 2 + 1 = 0] (mod n)" by fact + finally show ?thesis . + next + case False + hence "[int (x' ^ 2 + 1) = (int n - int (nat \x\ mod n)) ^ 2 + 1] (mod int n)" + using \n > 1\ by (simp add: x'_def of_nat_diff add_ac) + also have "[(int n - int (nat \x\ mod n)) ^ 2 + 1 = + (0 - int (nat \x\ mod n)) ^ 2 + 1] (mod int n)" + by (intro cong_add cong_pow) (auto simp: cong_def) + also have "[(0 - int (nat \x\ mod n)) ^ 2 + 1 = int ((nat \x\ mod n) ^ 2 + 1)] (mod (int n))" + by (simp add: add_ac) + finally have "[x' ^ 2 + 1 = (nat \x\ mod n)\<^sup>2 + 1] (mod n)" + by (subst (asm) cong_int_iff) + also have "[(nat \x\ mod n)\<^sup>2 + 1 = nat \x\ ^ 2 + 1] (mod n)" + by (intro cong_add cong_pow) (auto simp: cong_def) + also have "[nat \x\ ^ 2 + 1 = 0] (mod n)" by fact + finally show ?thesis . + qed + moreover have x'_le: "x' \ (n - 1) div 2" + using \odd n\ by (auto elim!: oddE simp: x'_def) + ultimately show ?thesis by (intro that[of x']) +qed + +text \ + Let \p\ be some prime number that is congruent 1 modulo 4. +\ +locale noninert_gauss_int_prime = + fixes p :: nat + assumes prime_p: "prime p" and cong_1_p: "[p = 1] (mod 4)" +begin + +lemma p_gt_2: "p > 2" and odd_p: "odd p" +proof - + from prime_p and cong_1_p have "p > 1" "p \ 2" + by (auto simp: prime_gt_Suc_0_nat cong_def) + thus "p > 2" by auto + with prime_p show "odd p" + using primes_dvd_imp_eq two_is_prime_nat by blast +qed + +text \ + -1 is a quadratic residue modulo \p\, so there exists some \x\ such that + $x^2 + 1$ is divisible by \p\. 
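For instance, for the prime $p = 13$ (which is congruent to $1$ modulo $4$) one may take $x = 5$, since
\[ 5^2 + 1 = 26 = 2 \cdot 13. \]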
Moreover, we can choose \x\ such that it is positive and + no greater than $\frac{1}{2}(p-1)$: +\ +lemma minimal_QuadRes_neg1: + obtains x where "x > 0" "x \ (p - 1) div 2" "[x ^ 2 + 1 = 0] (mod p)" +proof - + have "[Legendre (-1) (int p) = (- 1) ^ ((p - 1) div 2)] (mod (int p))" + using prime_p p_gt_2 by (intro euler_criterion) auto + also have "[p - 1 = 1 - 1] (mod 4)" + using p_gt_2 by (intro cong_diff_nat cong_refl) (use cong_1_p in auto) + hence "2 * 2 dvd p - 1" + by (simp add: cong_0_iff) + hence "even ((p - 1) div 2)" + using dvd_mult_imp_div by blast + hence "(-1) ^ ((p - 1) div 2) = (1 :: int)" + by simp + finally have "Legendre (-1) (int p) mod p = 1" + using p_gt_2 by (auto simp: cong_def) + hence "Legendre (-1) (int p) = 1" + using p_gt_2 by (auto simp: Legendre_def cong_def zmod_minus1 split: if_splits) + hence "QuadRes p (-1)" + by (simp add: Legendre_def split: if_splits) + from minimal_QuadRes_neg1[OF this] p_gt_2 odd_p + obtain x where x: "x \ (p - 1) div 2" "[x ^ 2 + 1 = 0] (mod p)" by auto + have "x > 0" + using x p_gt_2 by (auto intro!: Nat.gr0I simp: cong_def) + from x and this show ?thesis by (intro that[of x]) auto +qed + +text \ + We can show from this that \p\ is not prime as a Gaussian integer. +\ +lemma not_prime: "\prime_elem (of_nat p :: gauss_int)" +proof + assume prime: "prime_elem (of_nat p :: gauss_int)" + obtain x where x: "x > 0" "x \ (p - 1) div 2" "[x ^ 2 + 1 = 0] (mod p)" + using minimal_QuadRes_neg1 . + + have "of_nat p dvd (of_nat (x ^ 2 + 1) :: gauss_int)" + using x by (intro of_nat_dvd_of_nat) (auto simp: cong_0_iff) + also have eq: "of_nat (x ^ 2 + 1) = ((of_nat x + \\<^sub>\) * (of_nat x - \\<^sub>\) :: gauss_int)" + using \x > 0\ by (simp add: algebra_simps gauss_int_eq_iff power2_eq_square of_nat_diff) + finally have "of_nat p dvd ((of_nat x + \\<^sub>\) * (of_nat x - \\<^sub>\) :: gauss_int)" . + + from prime and this + have "of_nat p dvd (of_nat x + \\<^sub>\ :: gauss_int) \ of_nat p dvd (of_nat x - \\<^sub>\ :: gauss_int)" + by (rule prime_elem_dvd_multD) + hence dvd: "of_nat p dvd (of_nat x + \\<^sub>\ :: gauss_int)" "of_nat p dvd (of_nat x - \\<^sub>\ :: gauss_int)" + by (auto dest: of_nat_dvd_imp_dvd_gauss_cnj) + + have "of_nat (p ^ 2) = (of_nat p * of_nat p :: gauss_int)" + by (simp add: power2_eq_square) + also from dvd have "\ dvd ((of_nat x + \\<^sub>\) * (of_nat x - \\<^sub>\))" + by (intro mult_dvd_mono) + also have "\ = of_nat (x ^ 2 + 1)" + by (rule eq [symmetric]) + finally have "p ^ 2 dvd (x ^ 2 + 1)" + by (subst (asm) of_nat_dvd_of_nat_gauss_int_iff) + hence "p ^ 2 \ x ^ 2 + 1" + by (intro dvd_imp_le) auto + moreover have "p ^ 2 > x ^ 2 + 1" + proof - + have "x ^ 2 + 1 \ ((p - 1) div 2) ^ 2 + 1" + using x by (intro add_mono power_mono) auto + also have "\ \ (p - 1) ^ 2 + 1" + by auto + also have "(p - 1) * (p - 1) < (p - 1) * (p + 1)" + using p_gt_2 by (intro mult_strict_left_mono) auto + hence "(p - 1) ^ 2 + 1 < p ^ 2" + by (simp add: algebra_simps power2_eq_square) + finally show ?thesis . + qed + ultimately show False by linarith +qed + +text \ + Any prime factor of \p\ in the Gaussian integers must have norm \p\. 
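For instance, continuing the example $p = 13$, one has
\[ 13 = (3+2i)(3-2i), \]
and each of the two conjugate factors has norm $3^2 + 2^2 = 13$.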
+\ +lemma norm_prime_divisor: + fixes q :: gauss_int + assumes q: "prime_elem q" "q dvd of_nat p" + shows "gauss_int_norm q = p" +proof - + from assms obtain r where r: "of_nat p = q * r" + by auto + have "p ^ 2 = gauss_int_norm (of_nat p)" + by simp + also have "\ = gauss_int_norm q * gauss_int_norm r" + by (auto simp: r gauss_int_norm_mult) + finally have *: "gauss_int_norm q * gauss_int_norm r = p ^ 2" + by simp + hence "\i j. gauss_int_norm q = p ^ i \ gauss_int_norm r = p ^ j" + using prime_p by (intro prime_power_mult_nat) + then obtain i j where ij: "gauss_int_norm q = p ^ i" "gauss_int_norm r = p ^ j" + by blast + have ij_eq_2: "i + j = 2" + proof - + from * have "p ^ (i + j) = p ^ 2" + by (simp add: power_add ij) + thus ?thesis + using p_gt_2 by (subst (asm) power_inject_exp) auto + qed + hence "i = 0 \ j = 2 \ i = 1 \ j = 1 \ i = 2 \ j = 0" by auto + hence "i = 1" + proof (elim disjE) + assume "i = 2 \ j = 0" + hence "is_unit r" + using ij by (simp add: gauss_int_norm_eq_Suc_0_iff) + hence "prime_elem (of_nat p :: gauss_int)" using \prime_elem q\ + by (simp add: prime_elem_mult_unit_left r mult.commute[of _ r]) + with not_prime show "i = 1" by contradiction + qed (use q ij in \auto simp: gauss_int_norm_eq_Suc_0_iff\) + thus ?thesis using ij by simp +qed + +text \ + We now show two lemmas that characterise the two prime factors of \p\ in the + Gaussian integers: they are two conjugates $x\pm iy$ for positive integers \x\ and \y\ such + that $x^2 + y^2 = p$. +\ +lemma prime_divisor_exists: + obtains q where "prime q" "prime_elem (gauss_cnj q)" "ReZ q > 0" "ImZ q > 0" + "of_nat p = q * gauss_cnj q" "gauss_int_norm q = p" +proof - + have "\q::gauss_int. q dvd of_nat p \ prime q" + by (rule prime_divisor_exists) (use prime_p in \auto simp: is_unit_gauss_int_iff'\) + then obtain q :: gauss_int where q: "prime q" "q dvd of_nat p" + by blast + from \prime q\ have [simp]: "q \ 0" by auto + have "normalize q = q" + using q by simp + hence q_signs: "ReZ q > 0" "ImZ q \ 0" + by (subst (asm) normalized_gauss_int_iff; simp)+ + + from q have "gauss_int_norm q = p" + using norm_prime_divisor[of q] by simp + moreover from this have "gauss_int_norm (gauss_cnj q) = p" + by simp + hence "prime_elem (gauss_cnj q)" + using prime_p by (intro prime_gauss_int_norm_imp_prime_elem) auto + moreover have "of_nat p = q * gauss_cnj q" + using \gauss_int_norm q = p\ by (simp add: self_mult_gauss_cnj) + moreover have "ImZ q \ 0" + proof + assume [simp]: "ImZ q = 0" + define m where "m = nat (ReZ q)" + have [simp]: "q = of_nat m" + using q_signs by (auto simp: gauss_int_eq_iff m_def) + with q have "m dvd p" + by (simp add: of_nat_dvd_of_nat_gauss_int_iff) + with prime_p have "m = 1 \ m = p" + using prime_nat_iff by blast + with q show False using not_prime by auto + qed + with q_signs have "ImZ q > 0" by simp + ultimately show ?thesis using q q_signs by (intro that[of q]) +qed + +theorem prime_factorization: + obtains q1 q2 + where "prime q1" "prime q2" "prime_factorization (of_nat p) = {#q1, q2#}" + "gauss_int_norm q1 = p" "gauss_int_norm q2 = p" "q2 = \\<^sub>\ * gauss_cnj q1" + "ReZ q1 > 0" "ImZ q1 > 0" "ReZ q1 > 0" "ImZ q2 > 0" +proof - + obtain q where q: "prime q" "prime_elem (gauss_cnj q)" "ReZ q > 0" "ImZ q > 0" + "of_nat p = q * gauss_cnj q" "gauss_int_norm q = p" + using prime_divisor_exists by metis + from \prime q\ have [simp]: "q \ 0" by auto + define q' where "q' = normalize (gauss_cnj q)" + have "prime_factorization (of_nat p) = prime_factorization (prod_mset {#q, q'#})" + by (subst 
prime_factorization_unique) (auto simp: q q'_def) + also have "\ = {#q, q'#}" + using q by (subst prime_factorization_prod_mset_primes) (auto simp: q'_def) + finally have "prime_factorization (of_nat p) = {#q, q'#}" . + moreover have "q' = \\<^sub>\ * gauss_cnj q" + using q by (auto simp: normalize_gauss_int_def q'_def) + moreover have "prime q'" + using q by (auto simp: q'_def) + ultimately show ?thesis using q + by (intro that[of q q']) (auto simp: q'_def gauss_int_norm_mult) +qed + +end + +text \ + In particular, a consequence of this is that any prime congruent 1 modulo 4 + can be written as a sum of two squares of positive integers. +\ +lemma prime_cong_1_mod_4_gauss_int_norm_exists: + fixes p :: nat + assumes "prime p" "[p = 1] (mod 4)" + shows "\z. gauss_int_norm z = p \ ReZ z > 0 \ ImZ z > 0" +proof - + from assms interpret noninert_gauss_int_prime p + by unfold_locales + from prime_divisor_exists obtain q + where q: "prime q" "of_nat p = q * gauss_cnj q" + "ReZ q > 0" "ImZ q > 0" "gauss_int_norm q = p" by metis + have "p = gauss_int_norm q" + using q by simp + thus ?thesis using q by blast +qed + + +subsubsection \Full classification of Gaussian primes\ + +text \ + Any prime in the ring of Gaussian integers is of the form + + \<^item> \1 + \\<^sub>\\ + + \<^item> \p\ where \p \ \\ is prime in \\\ and congruent 3 modulo 4 + + \<^item> $x + iy$ where $x,y$ are positive integers and $x^2 + y^2$ is a prime congruent 1 modulo 4 + + or an associated element of one of these. +\ +theorem gauss_int_prime_classification: + fixes x :: gauss_int + assumes "prime x" + obtains + (one_plus_i) "x = 1 + \\<^sub>\" + | (cong_3_mod_4) p where "x = of_nat p" "prime p" "[p = 3] (mod 4)" + | (cong_1_mod_4) "prime (gauss_int_norm x)" "[gauss_int_norm x = 1] (mod 4)" + "ReZ x > 0" "ImZ x > 0" "ReZ x \ ImZ x" +proof - + define N where "N = gauss_int_norm x" + have "x dvd x * gauss_cnj x" + by simp + also have "\ = of_nat (gauss_int_norm x)" + by (simp add: self_mult_gauss_cnj) + finally have "x \ prime_factors (of_nat N)" + using assms by (auto simp: in_prime_factors_iff N_def) + also have "N = prod_mset (prime_factorization N)" + using assms unfolding N_def by (subst prod_mset_prime_factorization_nat) auto + also have "(of_nat \ :: gauss_int) = + prod_mset (image_mset of_nat (prime_factorization N))" + by (subst of_nat_prod_mset) auto + also have "prime_factors \ = (\p\prime_factors N. 
prime_factors (of_nat p))" + by (subst prime_factorization_prod_mset) auto + finally obtain p where p: "p \ prime_factors N" "x \ prime_factors (of_nat p)" + by auto + + have "prime p" + using p by auto + hence "\(2 * 2) dvd p" + using product_dvd_irreducibleD[of p 2 2] + by (auto simp flip: prime_elem_iff_irreducible) + hence "[p \ 0] (mod 4)" + using p by (auto simp: cong_0_iff in_prime_factors_iff) + hence "p mod 4 \ {1,2,3}" by (auto simp: cong_def) + thus ?thesis + proof (elim singletonE insertE) + assume "p mod 4 = 2" + hence "p mod 4 mod 2 = 0" + by simp + hence "p mod 2 = 0" + by (simp add: mod_mod_cancel) + with \prime p\ have [simp]: "p = 2" + using prime_prime_factor two_is_prime_nat by blast + have "prime_factors (of_nat p) = {1 + \\<^sub>\ :: gauss_int}" + by (simp add: prime_factorization_2_gauss_int) + with p show ?thesis using that(1) by auto + next + assume *: "p mod 4 = 3" + hence "prime_factors (of_nat p) = {of_nat p :: gauss_int}" + using prime_gauss_int_of_nat[of p] \prime p\ + by (subst prime_factorization_prime) (auto simp: cong_def) + with p show ?thesis using that(2)[of p] * + by (auto simp: cong_def) + next + assume *: "p mod 4 = 1" + then interpret noninert_gauss_int_prime p + by unfold_locales (use \prime p\ in \auto simp: cong_def\) + obtain q1 q2 :: gauss_int where q12: + "prime q1" "prime q2" "prime_factorization (of_nat p) = {#q1, q2#}" + "gauss_int_norm q1 = p" "gauss_int_norm q2 = p" "q2 = \\<^sub>\ * gauss_cnj q1" + "ReZ q1 > 0" "ImZ q1 > 0" "ReZ q1 > 0" "ImZ q2 > 0" + using prime_factorization by metis + from p q12 have "x = q1 \ x = q2" by auto + with q12 have **: "gauss_int_norm x = p" "ReZ x > 0" "ImZ x > 0" + by auto + have "ReZ x \ ImZ x" + proof + assume "ReZ x = ImZ x" + hence "even (gauss_int_norm x)" + by (auto simp: gauss_int_norm_def nat_mult_distrib) + hence "even p" using \gauss_int_norm x = p\ + by simp + with \p mod 4 = 1\ show False + by presburger + qed + thus ?thesis using that(3) \prime p\ * ** + by (simp add: cong_def) + qed +qed + +lemma prime_gauss_int_norm_squareD: + fixes z :: gauss_int + assumes "prime z" "gauss_int_norm z = p ^ 2" + shows "prime p \ z = of_nat p" + using assms(1) +proof (cases rule: gauss_int_prime_classification) + case one_plus_i + have "prime (2 :: nat)" by simp + also from one_plus_i have "2 = p ^ 2" + using assms(2) by (auto simp: gauss_int_norm_def) + finally show ?thesis by (simp add: prime_power_iff) +next + case (cong_3_mod_4 p) + thus ?thesis using assms by auto +next + case cong_1_mod_4 + with assms show ?thesis + by (auto simp: prime_power_iff) +qed + +lemma gauss_int_norm_eq_prime_squareD: + assumes "prime p" and "[p = 3] (mod 4)" and "gauss_int_norm z = p ^ 2" + shows "normalize z = of_nat p" and "prime_elem z" +proof - + have "\q::gauss_int. 
q dvd z \ prime q" + by (rule prime_divisor_exists) (use assms in \auto simp: is_unit_gauss_int_iff'\) + then obtain q :: gauss_int where q: "q dvd z" "prime q" by blast + have "gauss_int_norm q dvd gauss_int_norm z" + by (rule gauss_int_norm_dvd_mono) fact + also have "\ = p ^ 2" by fact + finally obtain i where i: "i \ 2" "gauss_int_norm q = p ^ i" + by (subst (asm) divides_primepow_nat) (use assms q in auto) + from i assms q have "i \ 0" + by (auto intro!: Nat.gr0I simp: gauss_int_norm_eq_Suc_0_iff) + moreover from i assms q have "i \ 1" + using gauss_int_norm_not_3_mod_4[of q] by auto + ultimately have "i = 2" using i by auto + with i have "gauss_int_norm q = p ^ 2" by auto + hence [simp]: "q = of_nat p" + using prime_gauss_int_norm_squareD[of q p] q by auto + have "normalize (of_nat p) = normalize z" + using q assms + by (intro gauss_int_dvd_same_norm_imp_associated) auto + thus *: "normalize z = of_nat p" by simp + + have "prime (normalize z)" + using prime_gauss_int_of_nat[of p] assms by (subst *) auto + thus "prime_elem z" by simp +qed + +text \ + The following can be used as a primality test for Gaussian integers. It effectively + reduces checking the primality of a Gaussian integer to checking the primality of an + integer. + + A Gaussian integer is prime if and only if its norm is either a \\\-prime or the square of + a \\\-prime that is congruent 3 modulo 4. +\ +lemma prime_elem_gauss_int_iff: + fixes z :: gauss_int + defines "n \ gauss_int_norm z" + shows "prime_elem z \ prime n \ (\p. n = p ^ 2 \ prime p \ [p = 3] (mod 4))" +proof + assume "prime n \ (\p. n = p ^ 2 \ prime p \ [p = 3] (mod 4))" + thus "prime_elem z" + by (auto intro: gauss_int_norm_eq_prime_squareD(2) + prime_gauss_int_norm_imp_prime_elem simp: n_def) +next + assume "prime_elem z" + hence "prime (normalize z)" by simp + thus "prime n \ (\p. n = p ^ 2 \ prime p \ [p = 3] (mod 4))" + proof (cases rule: gauss_int_prime_classification) + case one_plus_i + have "n = gauss_int_norm (normalize z)" + by (simp add: n_def) + also have "normalize z = 1 + \\<^sub>\" + by fact + also have "gauss_int_norm \ = 2" + by (simp add: gauss_int_norm_def) + finally show ?thesis by simp + next + case (cong_3_mod_4 p) + have "n = gauss_int_norm (normalize z)" + by (simp add: n_def) + also have "normalize z = of_nat p" + by fact + also have "gauss_int_norm \ = p ^ 2" + by simp + finally show ?thesis using cong_3_mod_4 by simp + next + case cong_1_mod_4 + thus ?thesis by (simp add: n_def) + qed +qed + + +subsubsection \Multiplicities of primes\ + +text \ + In this section, we will show some results connecting the multiplicity of a Gaussian prime \p\ + in a Gaussian integer \z\ to the \\\-multiplicity of the norm of \p\ in the norm of \z\. 
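+  For example, $1 + i$ divides $2$ with multiplicity $2$ (since $2 = -i(1 + i)^2$), and its norm $2$ likewise divides the norm of $2$, namely $4$, with multiplicity $2$.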
+\ + +text \ + The multiplicity of the Gaussian prime \<^term>\1 + \\<^sub>\\ in an integer \c\ is simply + twice the \\\-multiplicity of 2 in \c\: +\ +lemma multiplicity_prime_1_plus_i_aux: "multiplicity (1 + \\<^sub>\) (of_nat c) = 2 * multiplicity 2 c" +proof (cases "c = 0") + case [simp]: False + have "2 * multiplicity 2 c = multiplicity 2 (c ^ 2)" + by (simp add: prime_elem_multiplicity_power_distrib) + also have "multiplicity 2 (c ^ 2) = multiplicity (of_nat 2) (of_nat c ^ 2 :: gauss_int)" + by (simp flip: multiplicity_gauss_int_of_nat) + also have "of_nat 2 = (-\\<^sub>\) * (1 + \\<^sub>\) ^ 2" + by (simp add: algebra_simps power2_eq_square) + also have "multiplicity \ (of_nat c ^ 2) = multiplicity ((1 + \\<^sub>\) ^ 2) (of_nat c ^ 2)" + by (subst multiplicity_times_unit_left) auto + also have "\ = multiplicity (1 + \\<^sub>\) (of_nat c)" + by (subst multiplicity_power_power) auto + finally show ?thesis .. +qed auto + +text \ + The multiplicity of an inert Gaussian prime $q\in\mathbb{Z}$ in a Gaussian integer \z\ is + precisely half the \\\-multiplicity of \q\ in the norm of \z\. +\ +lemma multiplicity_prime_cong_3_mod_4: + assumes "prime (of_nat q :: gauss_int)" + shows "multiplicity q (gauss_int_norm z) = 2 * multiplicity (of_nat q) z" +proof (cases "z = 0") + case [simp]: False + have "multiplicity q (gauss_int_norm z) = + multiplicity (of_nat q) (of_nat (gauss_int_norm z) :: gauss_int)" + by (simp add: multiplicity_gauss_int_of_nat) + also have "\ = multiplicity (of_nat q) (z * gauss_cnj z)" + by (simp add: self_mult_gauss_cnj) + also have "\ = multiplicity (of_nat q) z + multiplicity (gauss_cnj (of_nat q)) (gauss_cnj z)" + using assms by (subst prime_elem_multiplicity_mult_distrib) auto + also have "multiplicity (gauss_cnj (of_nat q)) (gauss_cnj z) = multiplicity (of_nat q) z" + by (subst multiplicity_gauss_cnj) auto + also have "\ + \ = 2 * \" + by simp + finally show ?thesis . +qed auto + +text \ + For Gaussian primes \p\ whose norm is congruent 1 modulo 4, the $\mathbb{Z}[i]$-multiplicity + of \p\ in an integer \c\ is just the \\\-multiplicity of their norm in \c\. 
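+  For instance, $2 + i$ has norm $5$, and its multiplicity in $50 = 2 \cdot 5^2$ is $2$, which is exactly the multiplicity of $5$ in $50$.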
+\ +lemma multiplicity_prime_cong_1_mod_4_aux: + fixes p :: gauss_int + assumes "prime_elem p" "ReZ p > 0" "ImZ p > 0" "ImZ p \ ReZ p" + shows "multiplicity p (of_nat c) = multiplicity (gauss_int_norm p) c" +proof (cases "c = 0") + case [simp]: False + show ?thesis + proof (intro antisym multiplicity_geI) + define k where "k = multiplicity p (of_nat c)" + have "p ^ k dvd of_nat c" + by (simp add: multiplicity_dvd k_def) + moreover have "gauss_cnj p ^ k dvd of_nat c" + using multiplicity_dvd[of "gauss_cnj p" "of_nat c"] + multiplicity_gauss_cnj[of p "of_nat c"] by (simp add: k_def) + moreover have "\p dvd gauss_cnj p" + using assms by (subst self_dvd_gauss_cnj_iff) auto + hence "\p dvd gauss_cnj p ^ k" + using assms prime_elem_dvd_power by blast + ultimately have "p ^ k * gauss_cnj p ^ k dvd of_nat c" + using assms by (intro prime_elem_power_mult_dvdI) auto + also have "p ^ k * gauss_cnj p ^ k = of_nat (gauss_int_norm p ^ k)" + by (simp flip: self_mult_gauss_cnj add: power_mult_distrib) + finally show "gauss_int_norm p ^ k dvd c" + by (subst (asm) of_nat_dvd_of_nat_gauss_int_iff) + next + define k where "k = multiplicity (gauss_int_norm p) c" + have "p ^ k dvd (p * gauss_cnj p) ^ k" + by (intro dvd_power_same) auto + also have "\ = of_nat (gauss_int_norm p ^ k)" + by (simp add: self_mult_gauss_cnj) + also have "\ dvd of_nat c" + unfolding of_nat_dvd_of_nat_gauss_int_iff by (auto simp: k_def multiplicity_dvd) + finally show "p ^ k dvd of_nat c" . + qed (use assms in \auto simp: gauss_int_norm_eq_Suc_0_iff\) +qed auto + +text \ + The multiplicity of a Gaussian prime with norm congruent 1 modulo 4 in some Gaussian integer \z\ + and the multiplicity of its conjugate in \z\ sum to the \\\-multiplicity of their norm in + the norm of \z\: +\ +lemma multiplicity_prime_cong_1_mod_4: + fixes p :: gauss_int + assumes "prime_elem p" "ReZ p > 0" "ImZ p > 0" "ImZ p \ ReZ p" + shows "multiplicity (gauss_int_norm p) (gauss_int_norm z) = + multiplicity p z + multiplicity (gauss_cnj p) z" +proof (cases "z = 0") + case [simp]: False + have "multiplicity (gauss_int_norm p) (gauss_int_norm z) = + multiplicity p (of_nat (gauss_int_norm z))" + using assms by (subst multiplicity_prime_cong_1_mod_4_aux) auto + also have "\ = multiplicity p (z * gauss_cnj z)" + by (simp add: self_mult_gauss_cnj) + also have "\ = multiplicity p z + multiplicity p (gauss_cnj z)" + using assms by (subst prime_elem_multiplicity_mult_distrib) auto + also have "multiplicity p (gauss_cnj z) = multiplicity (gauss_cnj p) z" + by (subst multiplicity_gauss_cnj [symmetric]) auto + finally show ?thesis . 
+qed auto + +text \ + The multiplicity of the Gaussian prime \<^term>\1 + \\<^sub>\\ in a Gaussian integer \z\ is precisely + the \\\-multiplicity of 2 in the norm of \z\: +\ +lemma multiplicity_prime_1_plus_i: "multiplicity (1 + \\<^sub>\) z = multiplicity 2 (gauss_int_norm z)" +proof (cases "z = 0") + case [simp]: False + note [simp] = prime_elem_one_plus_i_gauss_int + have "2 * multiplicity 2 (gauss_int_norm z) = multiplicity (1 + \\<^sub>\) (of_nat (gauss_int_norm z))" + by (rule multiplicity_prime_1_plus_i_aux [symmetric]) + also have "\ = multiplicity (1 + \\<^sub>\) (z * gauss_cnj z)" + by (simp add: self_mult_gauss_cnj) + also have "\ = multiplicity (1 + \\<^sub>\) z + multiplicity (gauss_cnj (1 - \\<^sub>\)) (gauss_cnj z)" + by (subst prime_elem_multiplicity_mult_distrib) auto + also have "multiplicity (gauss_cnj (1 - \\<^sub>\)) (gauss_cnj z) = multiplicity (1 - \\<^sub>\) z" + by (subst multiplicity_gauss_cnj) auto + also have "1 - \\<^sub>\ = (-\\<^sub>\) * (1 + \\<^sub>\)" + by (simp add: algebra_simps) + also have "multiplicity \ z = multiplicity (1 + \\<^sub>\) z" + by (subst multiplicity_times_unit_left) auto + also have "\ + \ = 2 * \" + by simp + finally show ?thesis by simp +qed auto + + +subsection \Coprimality of an element and its conjugate\ + +text \ + Using the classification of the primes, we now show that if the real and imaginary parts of a + Gaussian integer are coprime and its norm is odd, then it is coprime to its own conjugate. +\ +lemma coprime_self_gauss_cnj: + assumes "coprime (ReZ z) (ImZ z)" and "odd (gauss_int_norm z)" + shows "coprime z (gauss_cnj z)" +proof (rule coprimeI) + fix d assume "d dvd z" "d dvd gauss_cnj z" + have *: False if "p \ prime_factors z" "p \ prime_factors (gauss_cnj z)" for p + proof - + from that have p: "prime p" "p dvd z" "p dvd gauss_cnj z" + by auto + + define p' where "p' = gauss_cnj p" + define d where "d = gauss_int_norm p" + have of_nat_d_eq: "of_nat d = p * p'" + by (simp add: p'_def self_mult_gauss_cnj d_def) + have "prime_elem p" "prime_elem p'" "p dvd z" "p' dvd z" "p dvd gauss_cnj z" "p' dvd gauss_cnj z" + using that by (auto simp: in_prime_factors_iff p'_def gauss_cnj_dvd_left_iff) + + have "prime p" + using that by auto + then obtain q where q: "prime q" "of_nat q dvd z" + proof (cases rule: gauss_int_prime_classification) + case one_plus_i + hence "2 = gauss_int_norm p" + by (auto simp: gauss_int_norm_def) + also have "gauss_int_norm p dvd gauss_int_norm z" + using p by (intro gauss_int_norm_dvd_mono) auto + finally have "even (gauss_int_norm z)" . + with \odd (gauss_int_norm z)\ show ?thesis + by contradiction + next + case (cong_3_mod_4 q) + thus ?thesis using that[of q] p by simp + next + case cong_1_mod_4 + hence "\p dvd p'" + unfolding p'_def by (subst self_dvd_gauss_cnj_iff) auto + hence "p * p' dvd z" using p + by (intro prime_elem_mult_dvdI) (auto simp: p'_def gauss_cnj_dvd_left_iff) + also have "p * p' = of_nat (gauss_int_norm p)" + by (simp add: p'_def self_mult_gauss_cnj) + finally show ?thesis using that[of "gauss_int_norm p"] cong_1_mod_4 + by simp + qed + + have "of_nat q dvd gcd (2 * of_int (ReZ z)) (2 * \\<^sub>\ * of_int (ImZ z))" + proof (rule gcd_greatest) + have "of_nat q dvd (z + gauss_cnj z)" + using q by (auto simp: gauss_cnj_dvd_right_iff) + also have "\ = 2 * of_int (ReZ z)" + by (simp add: self_plus_gauss_cnj) + finally show "of_nat q dvd (2 * of_int (ReZ z) :: gauss_int)" . 
+ next + have "of_nat q dvd (z - gauss_cnj z)" + using q by (auto simp: gauss_cnj_dvd_right_iff) + also have "\ = 2 * \\<^sub>\ * of_int (ImZ z)" + by (simp add: self_minus_gauss_cnj) + finally show "of_nat q dvd (2 * \\<^sub>\ * of_int (ImZ z))" . + qed + also have "\ = 2" + proof - + have "odd (ReZ z) \ odd (ImZ z)" + using assms by (auto simp: gauss_int_norm_def even_nat_iff) + thus ?thesis + proof + assume "odd (ReZ z)" + hence "coprime (of_int (ReZ z)) (of_int 2 :: gauss_int)" + unfolding coprime_of_int_gauss_int coprime_right_2_iff_odd . + thus ?thesis + using assms + by (subst gcd_mult_left_right_cancel) + (auto simp: coprime_of_int_gauss_int coprime_commute is_unit_left_imp_coprime + is_unit_right_imp_coprime gcd_proj1_if_dvd gcd_proj2_if_dvd) + next + assume "odd (ImZ z)" + hence "coprime (of_int (ImZ z)) (of_int 2 :: gauss_int)" + unfolding coprime_of_int_gauss_int coprime_right_2_iff_odd . + hence "gcd (2 * of_int (ReZ z)) (2 * \\<^sub>\ * of_int (ImZ z)) = gcd (2 * of_int (ReZ z)) (2 * \\<^sub>\)" + using assms + by (subst gcd_mult_right_right_cancel) + (auto simp: coprime_of_int_gauss_int coprime_commute is_unit_left_imp_coprime + is_unit_right_imp_coprime) + also have "\ = normalize (2 * gcd (of_int (ReZ z)) \\<^sub>\)" + by (subst gcd_mult_left) auto + also have "gcd (of_int (ReZ z)) \\<^sub>\ = 1" + by (subst coprime_iff_gcd_eq_1 [symmetric], rule is_unit_right_imp_coprime) auto + finally show ?thesis by simp + qed + qed + finally have "of_nat q dvd (of_nat 2 :: gauss_int)" + by simp + hence "q dvd 2" + by (simp only: of_nat_dvd_of_nat_gauss_int_iff) + with \prime q\ have "q = 2" + using primes_dvd_imp_eq two_is_prime_nat by blast + with q have "2 dvd z" + by auto + + have "2 dvd gauss_int_norm 2" + by simp + also have "\ dvd gauss_int_norm z" + using \2 dvd z\ by (intro gauss_int_norm_dvd_mono) + finally show False using \odd (gauss_int_norm z)\ by contradiction + qed + + fix d :: gauss_int + assume d: "d dvd z" "d dvd gauss_cnj z" + show "is_unit d" + proof (rule ccontr) + assume "\is_unit d" + moreover from d assms have "d \ 0" + by auto + ultimately obtain p where p: "prime p" "p dvd d" + using prime_divisorE by blast + with d have "p \ prime_factors z" "p \ prime_factors (gauss_cnj z)" + using assms by (auto simp: in_prime_factors_iff) + with *[of p] show False by blast + qed +qed + + +subsection \Square decompositions of prime numbers congruent 1 mod 4\ + +lemma prime_1_mod_4_sum_of_squares_unique_aux: + fixes p x y :: nat + assumes "prime p" "[p = 1] (mod 4)" "x ^ 2 + y ^ 2 = p" + shows "x > 0 \ y > 0 \ x \ y" +proof safe + from assms show "x > 0" "y > 0" + by (auto intro!: Nat.gr0I simp: prime_power_iff) +next + assume "x = y" + with assms have "p = 2 * x ^ 2" + by simp + with \prime p\ have "p = 2" + by (auto dest: prime_product) + with \[p = 1] (mod 4)\ show False + by (simp add: cong_def) +qed + +text \ + Any prime number congruent 1 modulo 4 can be written \<^emph>\uniquely\ as a sum of two squares + $x^2 + y^2$ (up to commutativity of the addition). Additionally, we have shown above that + \x\ and \y\ are both positive and \x \ y\. +\ +lemma prime_1_mod_4_sum_of_squares_unique: + fixes p :: nat + assumes "prime p" "[p = 1] (mod 4)" + shows "\!(x,y). x \ y \ x ^ 2 + y ^ 2 = p" +proof (rule ex_ex1I) + obtain z where z: "gauss_int_norm z = p" + using prime_cong_1_mod_4_gauss_int_norm_exists[OF assms] by blast + show "\z. 
case z of (x,y) \ x \ y \ x ^ 2 + y ^ 2 = p" + proof (cases "\ReZ z\ \ \ImZ z\") + case True + with z show ?thesis by + (intro exI[of _ "(nat \ReZ z\, nat \ImZ z\)"]) + (auto simp: gauss_int_norm_def nat_add_distrib simp flip: nat_power_eq) + next + case False + with z show ?thesis by + (intro exI[of _ "(nat \ImZ z\, nat \ReZ z\)"]) + (auto simp: gauss_int_norm_def nat_add_distrib simp flip: nat_power_eq) + qed +next + fix z1 z2 + assume z1: "case z1 of (x, y) \ x \ y \ x\<^sup>2 + y\<^sup>2 = p" + assume z2: "case z2 of (x, y) \ x \ y \ x\<^sup>2 + y\<^sup>2 = p" + define z1' :: gauss_int where "z1' = of_nat (fst z1) + \\<^sub>\ * of_nat (snd z1)" + define z2' :: gauss_int where "z2' = of_nat (fst z2) + \\<^sub>\ * of_nat (snd z2)" + from assms interpret noninert_gauss_int_prime p + by unfold_locales auto + have norm_z1': "gauss_int_norm z1' = p" + using z1 by (simp add: z1'_def gauss_int_norm_def case_prod_unfold nat_add_distrib nat_power_eq) + have norm_z2': "gauss_int_norm z2' = p" + using z2 by (simp add: z2'_def gauss_int_norm_def case_prod_unfold nat_add_distrib nat_power_eq) + + have sgns: "fst z1 > 0" "snd z1 > 0" "fst z2 > 0" "snd z2 > 0" "fst z1 \ snd z1" "fst z2 \ snd z2" + using prime_1_mod_4_sum_of_squares_unique_aux[OF assms, of "fst z1" "snd z1"] z1 + prime_1_mod_4_sum_of_squares_unique_aux[OF assms, of "fst z2" "snd z2"] z2 by auto + have [simp]: "normalize z1' = z1'" "normalize z2' = z2'" + using sgns by (subst normalized_gauss_int_iff; simp add: z1'_def z2'_def)+ + have "prime z1'" "prime z2'" + using norm_z1' norm_z2' assms unfolding prime_def + by (auto simp: prime_gauss_int_norm_imp_prime_elem) + + have "of_nat p = z1' * gauss_cnj z1'" + by (simp add: self_mult_gauss_cnj norm_z1') + hence "z1' dvd of_nat p" + by simp + also have "of_nat p = z2' * gauss_cnj z2'" + by (simp add: self_mult_gauss_cnj norm_z2') + finally have "z1' dvd z2' \ z1' dvd gauss_cnj z2'" using assms + by (subst (asm) prime_elem_dvd_mult_iff) + (simp add: norm_z1' prime_gauss_int_norm_imp_prime_elem) + thus "z1 = z2" + proof + assume "z1' dvd z2'" + with \prime z1'\ \prime z2'\ have "z1' = z2'" + by (simp add: primes_dvd_imp_eq) + thus ?thesis + by (simp add: z1'_def z2'_def gauss_int_eq_iff prod_eq_iff) + next + assume dvd: "z1' dvd gauss_cnj z2'" + have "normalize (\\<^sub>\ * gauss_cnj z2') = \\<^sub>\ * gauss_cnj z2'" + using sgns by (subst normalized_gauss_int_iff) (auto simp: z2'_def) + moreover have "prime_elem (\\<^sub>\ * gauss_cnj z2')" + by (rule prime_gauss_int_norm_imp_prime_elem) + (simp add: gauss_int_norm_mult norm_z2' \prime p\) + ultimately have "prime (\\<^sub>\ * gauss_cnj z2')" + by (simp add: prime_def) + moreover from dvd have "z1' dvd \\<^sub>\ * gauss_cnj z2'" + by simp + ultimately have "z1' = \\<^sub>\ * gauss_cnj z2'" + using \prime z1'\ by (simp add: primes_dvd_imp_eq) + hence False using z1 z2 sgns + by (auto simp: gauss_int_eq_iff z1'_def z2'_def) + thus ?thesis .. 
+ qed +qed + +lemma two_sum_of_squares_nat_iff: "(x :: nat) ^ 2 + y ^ 2 = 2 \ x = 1 \ y = 1" +proof + assume eq: "x ^ 2 + y ^ 2 = 2" + have square_neq_2: "n ^ 2 \ 2" for n :: nat + proof + assume *: "n ^ 2 = 2" + have "prime (2 :: nat)" + by simp + thus False by (subst (asm) * [symmetric]) (auto simp: prime_power_iff) + qed + + from eq have "x ^ 2 < 2 ^ 2" "y ^ 2 < 2 ^ 2" + by simp_all + hence "x < 2" "y < 2" + using power2_less_imp_less[of x 2] power2_less_imp_less[of y 2] by auto + moreover have "x > 0" "y > 0" + using eq square_neq_2[of x] square_neq_2[of y] by (auto intro!: Nat.gr0I) + ultimately show "x = 1 \ y = 1" + by auto +qed auto + +lemma prime_sum_of_squares_unique: + fixes p :: nat + assumes "prime p" "p = 2 \ [p = 1] (mod 4)" + shows "\!(x,y). x \ y \ x ^ 2 + y ^ 2 = p" + using assms(2) +proof + assume [simp]: "p = 2" + have **: "(\(x,y). x \ y \ x ^ 2 + y ^ 2 = p) = (\z. z = (1,1 :: nat))" + using two_sum_of_squares_nat_iff by (auto simp: fun_eq_iff) + thus ?thesis + by (subst **) auto +qed (use prime_1_mod_4_sum_of_squares_unique[of p] assms in auto) + +text \ + We now give a simple and inefficient algorithm to compute the canonical decomposition + $x ^ 2 + y ^ 2$ with $x\leq y$. +\ +definition prime_square_sum_nat_decomp :: "nat \ nat \ nat" where + "prime_square_sum_nat_decomp p = + (if prime p \ (p = 2 \ [p = 1] (mod 4)) + then THE (x,y). x \ y \ x ^ 2 + y ^ 2 = p else (0, 0))" + +lemma prime_square_sum_nat_decomp_eqI: + assumes "prime p" "x ^ 2 + y ^ 2 = p" "x \ y" + shows "prime_square_sum_nat_decomp p = (x, y)" +proof - + have "[gauss_int_norm (of_nat x + \\<^sub>\ * of_nat y) \ 3] (mod 4)" + by (rule gauss_int_norm_not_3_mod_4) + also have "gauss_int_norm (of_nat x + \\<^sub>\ * of_nat y) = p" + using assms by (auto simp: gauss_int_norm_def nat_add_distrib nat_power_eq) + finally have "[p \ 3] (mod 4)" . + with prime_mod_4_cases[of p] assms have *: "p = 2 \ [p = 1] (mod 4)" + by auto + + have "prime_square_sum_nat_decomp p = (THE (x,y). x \ y \ x ^ 2 + y ^ 2 = p)" + using * \prime p\ by (simp add: prime_square_sum_nat_decomp_def) + also have "\ = (x, y)" + proof (rule the1_equality) + show "\!(x,y). x \ y \ x ^ 2 + y ^ 2 = p" + using \prime p\ * by (rule prime_sum_of_squares_unique) + qed (use assms in auto) + finally show ?thesis . +qed + +lemma prime_square_sum_nat_decomp_correct: + assumes "prime p" "p = 2 \ [p = 1] (mod 4)" + defines "z \ prime_square_sum_nat_decomp p" + shows "fst z ^ 2 + snd z ^ 2 = p" "fst z \ snd z" +proof - + define z' where "z' = (THE (x,y). x \ y \ x ^ 2 + y ^ 2 = p)" + have "z = z'" + unfolding z_def z'_def using assms by (simp add: prime_square_sum_nat_decomp_def) + also have"\!(x,y). x \ y \ x ^ 2 + y ^ 2 = p" + using assms by (intro prime_sum_of_squares_unique) + hence "case z' of (x, y) \ x \ y \ x ^ 2 + y ^ 2 = p" + unfolding z'_def by (rule theI') + finally show "fst z ^ 2 + snd z ^ 2 = p" "fst z \ snd z" + by auto +qed + +lemma sum_of_squares_nat_bound: + fixes x y n :: nat + assumes "x ^ 2 + y ^ 2 = n" + shows "x \ n" +proof (cases "x = 0") + case False + hence "x * 1 \ x ^ 2" + unfolding power2_eq_square by (intro mult_mono) auto + also have "\ \ x ^ 2 + y ^ 2" + by simp + also have "\ = n" + by fact + finally show ?thesis by simp +qed auto + +lemma sum_of_squares_nat_bound': + fixes x y n :: nat + assumes "x ^ 2 + y ^ 2 = n" + shows "y \ n" + using sum_of_squares_nat_bound[of y x] assms by (simp add: add.commute) + +lemma is_singleton_conv_Ex1: + "is_singleton A \ (\!x. x \ A)" +proof + assume "is_singleton A" + thus "\!x. 
x \ A" + by (auto elim!: is_singletonE) +next + assume "\!x. x \ A" + thus "is_singleton A" + by (metis equals0D is_singletonI') +qed + +lemma the_elemI: + assumes "is_singleton A" + shows "the_elem A \ A" + using assms by (elim is_singletonE) auto + +lemma prime_square_sum_nat_decomp_code_aux: + assumes "prime p" "p = 2 \ [p = 1] (mod 4)" + defines "z \ the_elem (Set.filter (\(x,y). x ^ 2 + y ^ 2 = p) (SIGMA x:{0..p}. {x..p}))" + shows "prime_square_sum_nat_decomp p = z" +proof - + let ?A = "Set.filter (\(x,y). x ^ 2 + y ^ 2 = p) (SIGMA x:{0..p}. {x..p})" + have eq: "?A = {(x,y). x \ y \ x ^ 2 + y ^ 2 = p}" + using sum_of_squares_nat_bound sum_of_squares_nat_bound' by auto + have z: "z \ Set.filter (\(x,y). x ^ 2 + y ^ 2 = p) (SIGMA x:{0..p}. {x..p})" + unfolding z_def eq using prime_sum_of_squares_unique[OF assms(1,2)] + by (intro the_elemI) (simp add: is_singleton_conv_Ex1) + have "prime_square_sum_nat_decomp p = (fst z, snd z)" + using z by (intro prime_square_sum_nat_decomp_eqI[OF assms(1)]) auto + also have "\ = z" + by simp + finally show ?thesis . +qed + +lemma prime_square_sum_nat_decomp_code [code]: + "prime_square_sum_nat_decomp p = + (if prime p \ (p = 2 \ [p = 1] (mod 4)) + then the_elem (Set.filter (\(x,y). x ^ 2 + y ^ 2 = p) (SIGMA x:{0..p}. {x..p})) + else (0, 0))" + using prime_square_sum_nat_decomp_code_aux[of p] + by (auto simp: prime_square_sum_nat_decomp_def) + + +subsection \Executable factorisation of Gaussian integers\ + +text \ + Lastly, we use all of the above to give an executable (albeit not very efficient) factorisation + algorithm for Gaussian integers based on factorisation of regular integers. Note that we will + only compute the set of prime factors without multiplicity, but given that, it would be fairly + easy to determine the multiplicity as well. 
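+  For instance, factorising the \\\-prime $5$ this way yields the two Gaussian primes $1 + 2i$ and $2 + i$.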
+ + First, we need the following function that computes the Gaussian integer factors of a + \\\-prime \p\: +\ +definition factor_gauss_int_prime_nat :: "nat \ gauss_int list" where + "factor_gauss_int_prime_nat p = + (if p = 2 then [1 + \\<^sub>\] + else if [p = 3] (mod 4) then [of_nat p] + else case prime_square_sum_nat_decomp p of + (x, y) \ [of_nat x + \\<^sub>\ * of_nat y, of_nat y + \\<^sub>\ * of_nat x])" + +lemma factor_gauss_int_prime_nat_correct: + assumes "prime p" + shows "set (factor_gauss_int_prime_nat p) = prime_factors (of_nat p)" + using prime_mod_4_cases[OF assms] +proof (elim disjE) + assume "p = 2" + thus ?thesis + by (auto simp: prime_factorization_2_gauss_int factor_gauss_int_prime_nat_def) +next + assume *: "[p = 3] (mod 4)" + with assms have "prime (of_nat p :: gauss_int)" + by (intro prime_gauss_int_of_nat) + thus ?thesis using assms * + by (auto simp: prime_factorization_prime factor_gauss_int_prime_nat_def cong_def) +next + assume *: "[p = 1] (mod 4)" + then interpret noninert_gauss_int_prime p + using \prime p\ by unfold_locales + define z where "z = prime_square_sum_nat_decomp p" + define x y where "x = fst z" and "y = snd z" + have xy: "x ^ 2 + y ^ 2 = p" "x \ y" + using prime_square_sum_nat_decomp_correct[of p] * assms + by (auto simp: x_def y_def z_def) + from xy have xy_signs: "x > 0" "y > 0" + using prime_1_mod_4_sum_of_squares_unique_aux[of p x y] assms * by auto + have norms: "gauss_int_norm (of_nat x + \\<^sub>\ * of_nat y) = p" + "gauss_int_norm (of_nat y + \\<^sub>\ * of_nat x) = p" + using xy by (auto simp: gauss_int_norm_def nat_add_distrib nat_power_eq) + have prime: "prime (of_nat x + \\<^sub>\ * of_nat y)" "prime (of_nat y + \\<^sub>\ * of_nat x)" + using norms xy_signs \prime p\ unfolding prime_def normalized_gauss_int_iff + by (auto intro!: prime_gauss_int_norm_imp_prime_elem) + + have "normalize ((of_nat x + \\<^sub>\ * of_nat y) * (of_nat y + \\<^sub>\ * of_nat x)) = of_nat p" + proof - + have "(of_nat x + \\<^sub>\ * of_nat y) * (of_nat y + \\<^sub>\ * of_nat x) = (\\<^sub>\ * of_nat p :: gauss_int)" + by (subst xy(1) [symmetric]) (auto simp: gauss_int_eq_iff power2_eq_square) + also have "normalize \ = of_nat p" + by simp + finally show ?thesis . + qed + hence "prime_factorization (of_nat p) = + prime_factorization (prod_mset {#of_nat x + \\<^sub>\ * of_nat y, of_nat y + \\<^sub>\ * of_nat x#})" + using assms xy by (subst prime_factorization_unique) (auto simp: gauss_int_eq_iff) + also have "\ = {#of_nat x + \\<^sub>\ * of_nat y, of_nat y + \\<^sub>\ * of_nat x#}" + using prime by (subst prime_factorization_prod_mset_primes) auto + finally have "prime_factors (of_nat p) = {of_nat x + \\<^sub>\ * of_nat y, of_nat y + \\<^sub>\ * of_nat x}" + by simp + also have "\ = set (factor_gauss_int_prime_nat p)" + using * unfolding factor_gauss_int_prime_nat_def case_prod_unfold + by (auto simp: cong_def x_def y_def z_def) + finally show ?thesis .. +qed + +text \ + Next, we lift this to compute the prime factorisation of any integer in the Gaussian integers: +\ +definition prime_factors_gauss_int_of_nat :: "nat \ gauss_int set" where + "prime_factors_gauss_int_of_nat n = (if n = 0 then {} else + (\p\prime_factors n. 
set (factor_gauss_int_prime_nat p)))" + +lemma prime_factors_gauss_int_of_nat_correct: + "prime_factors_gauss_int_of_nat n = prime_factors (of_nat n)" +proof (cases "n = 0") + case False + from False have [simp]: "n > 0" by auto + have "prime_factors (of_nat n :: gauss_int) = + prime_factors (of_nat (prod_mset (prime_factorization n)))" + by (subst prod_mset_prime_factorization_nat [symmetric]) auto + also have "\ = prime_factors (prod_mset (image_mset of_nat (prime_factorization n)))" + by (subst of_nat_prod_mset) auto + also have "\ = (\p\prime_factors n. prime_factors (of_nat p))" + by (subst prime_factorization_prod_mset) auto + also have "\ = (\p\prime_factors n. set (factor_gauss_int_prime_nat p))" + by (intro SUP_cong refl factor_gauss_int_prime_nat_correct [symmetric]) auto + finally show ?thesis by (simp add: prime_factors_gauss_int_of_nat_def) +qed (auto simp: prime_factors_gauss_int_of_nat_def) + +text \ + We can now use this to factor any Gaussian integer by computing a factorisation of its + norm and removing all the prime divisors that do not actually divide it. +\ +definition prime_factors_gauss_int :: "gauss_int \ gauss_int set" where + "prime_factors_gauss_int z = (if z = 0 then {} + else Set.filter (\p. p dvd z) (prime_factors_gauss_int_of_nat (gauss_int_norm z)))" + +lemma prime_factors_gauss_int_correct [code_unfold]: "prime_factors z = prime_factors_gauss_int z" +proof (cases "z = 0") + case [simp]: False + define n where "n = gauss_int_norm z" + from False have [simp]: "n > 0" by (auto simp: n_def) + + have "prime_factors_gauss_int z = Set.filter (\p. p dvd z) (prime_factors (of_nat n))" + by (simp add: prime_factors_gauss_int_of_nat_correct prime_factors_gauss_int_def n_def) + also have "of_nat n = z * gauss_cnj z" + by (simp add: n_def self_mult_gauss_cnj) + also have "prime_factors \ = prime_factors z \ prime_factors (gauss_cnj z)" + by (subst prime_factors_product) auto + also have "Set.filter (\p. p dvd z) \ = prime_factors z" + by (auto simp: in_prime_factors_iff) + finally show ?thesis by simp +qed (auto simp: prime_factors_gauss_int_def) + +(*<*) +unbundle no_gauss_int_notation +(*>*) + +end \ No newline at end of file diff --git a/thys/Gaussian_Integers/Gaussian_Integers_Everything.thy b/thys/Gaussian_Integers/Gaussian_Integers_Everything.thy new file mode 100644 --- /dev/null +++ b/thys/Gaussian_Integers/Gaussian_Integers_Everything.thy @@ -0,0 +1,16 @@ +(* + File: Gaussian_Integers_Everything.thy + Author: Manuel Eberl, TU München + + Dummy theory to import everything in the session to ensure the theories are loaded in the + right order for the document. 
+*) +theory Gaussian_Integers_Everything +imports + Gaussian_Integers + Gaussian_Integers_Test + Gaussian_Integers_Sums_Of_Two_Squares + Gaussian_Integers_Pythagorean_Triples +begin + +end \ No newline at end of file diff --git a/thys/Gaussian_Integers/Gaussian_Integers_Pythagorean_Triples.thy b/thys/Gaussian_Integers/Gaussian_Integers_Pythagorean_Triples.thy new file mode 100644 --- /dev/null +++ b/thys/Gaussian_Integers/Gaussian_Integers_Pythagorean_Triples.thy @@ -0,0 +1,266 @@ +(* + File: Gaussian_Integers_Pythagorean_Triples.thy + Author: Manuel Eberl, TU München + + Application of Gaussian integers to deriving Euclid's formula for primitive Pythagorean triples +*) +subsection \Primitive Pythagorean triples\ +theory Gaussian_Integers_Pythagorean_Triples + imports Gaussian_Integers +begin + +text \ + In this section, we derive Euclid's formula for primitive Pythagorean triples using + Gaussian integers, following Stillwell~\cite{stillwell}. +\ +definition prim_pyth_triple :: "nat \ nat \ nat \ bool" where + "prim_pyth_triple x y z \ x > 0 \ y > 0 \ coprime x y \ x\<^sup>2 + y\<^sup>2 = z\<^sup>2" + +lemma prim_pyth_triple_commute: "prim_pyth_triple x y z \ prim_pyth_triple y x z" + by (simp add: prim_pyth_triple_def coprime_commute add_ac conj_ac) + +lemma prim_pyth_triple_aux: + fixes u v :: nat + assumes "v \ u" + shows "(2 * u * v) ^ 2 + (u ^ 2 - v ^ 2) ^ 2 = (u ^ 2 + v ^ 2) ^ 2" +proof - + have "int ((2 * u * v) ^ 2 + (u ^ 2 - v ^ 2) ^ 2) = + (2 * int u * int v) ^ 2 + (int u ^ 2 - int v ^ 2) ^ 2" + using assms by (simp add: of_nat_diff) + also have "\ = (int u ^ 2 + int v ^ 2) ^ 2" + by (simp add: power2_eq_square algebra_simps) + also have "\ = int ((u ^ 2 + v ^ 2) ^ 2)" + by simp + finally show ?thesis + by (simp only: of_nat_eq_iff) +qed + +lemma prim_pyth_tripleI1: + assumes "0 < v" "v < u" "coprime u v" "\(odd u \ odd v)" + shows "prim_pyth_triple (2 * u * v) (u\<^sup>2 - v\<^sup>2) (u\<^sup>2 + v\<^sup>2)" +proof - + have "v ^ 2 < u ^ 2" + using assms by (intro power_strict_mono) auto + hence "\u ^ 2 < v ^ 2" by linarith + + from assms have "coprime (int u) (int v ^ 2)" + by auto + hence "coprime (int u) (int u * int u + (-(int v ^ 2)))" + unfolding coprime_iff_gcd_eq_1 by (subst gcd_add_mult) auto + also have "int u * int u + (-(int v ^ 2)) = int (u ^ 2 - v ^ 2)" + using \v < u\ by (simp add: of_nat_diff flip: power2_eq_square) + finally have coprime1: "coprime u (u ^ 2 - v ^ 2)" + by auto + + from assms have "coprime (int v) (int u ^ 2)" + by (auto simp: coprime_commute) + hence "coprime (int v) ((-int v) * int v + int u ^ 2)" + unfolding coprime_iff_gcd_eq_1 by (subst gcd_add_mult) auto + also have "(-int v) * int v + int u ^ 2 = int (u ^ 2 - v ^ 2)" + using \v < u\ by (simp add: of_nat_diff flip: power2_eq_square) + finally have coprime2: "coprime v (u ^ 2 - v ^ 2)" + by auto + + have "(2 * u * v) ^ 2 + (u ^ 2 - v ^ 2) ^ 2 = (u ^ 2 + v ^ 2) ^ 2" + using \v < u\ by (intro prim_pyth_triple_aux) auto + moreover have "coprime (2 * u * v) (u ^ 2 - v ^ 2)" + using assms \\u ^ 2 < v ^ 2\ coprime1 coprime2 by auto + ultimately show ?thesis using assms \v ^ 2 < u ^ 2\ + by (simp add: prim_pyth_triple_def) +qed + +lemma prim_pyth_tripleI2: + assumes "0 < v" "v < u" "coprime u v" "\(odd u \ odd v)" + shows "prim_pyth_triple (u\<^sup>2 - v\<^sup>2) (2 * u * v) (u\<^sup>2 + v\<^sup>2)" + using prim_pyth_tripleI1[OF assms] by (simp add: prim_pyth_triple_commute) + +lemma primitive_pythagorean_tripleE_int: + assumes "z ^ 2 = x ^ 2 + y ^ 2" + assumes "coprime x y" + obtains u v :: int + 
where "coprime u v" and "\(odd u \ odd v)" + and "x = 2 * u * v \ y = u\<^sup>2 - v\<^sup>2 \ x = u\<^sup>2 - v\<^sup>2 \ y = 2 * u * v" +proof - + have "\(even x \ even y)" + using not_coprimeI[of 2 x y] \coprime x y\ by auto + moreover have "\(odd x \ odd y)" + proof safe + assume "odd x" "odd y" + hence "[x ^ 2 + y ^ 2 = 1 + 1] (mod 4)" + by (intro cong_add odd_square_cong_4_int) + hence "[z ^ 2 = 2] (mod 4)" + by (simp add: assms) + moreover have "[z ^ 2 = 0] (mod 4) \ [z ^ 2 = 1] (mod 4)" + using even_square_cong_4_int[of z] odd_square_cong_4_int[of z] + by (cases "even z") auto + ultimately show False + by (auto simp: cong_def) + qed + ultimately have "even y \ odd x" + by blast + + have "even z \ even (z ^ 2)" + by auto + also have "even (z ^ 2) \ even (x ^ 2 + y ^ 2)" + by (subst assms(1)) auto + finally have "odd z" + by (cases "even x") (auto simp: \even y \ \even x\) + + define t where "t = of_int x + \\<^sub>\ * of_int y" + from assms have t_mult_cnj: "t * gauss_cnj t = of_int z ^ 2" + by (simp add: t_def power2_eq_square algebra_simps flip: of_int_mult of_int_add) + + have "gauss_int_norm t = z ^ 2" + by (simp add: gauss_int_norm_def t_def assms) + with \coprime x y\ and \odd z\ have "coprime t (gauss_cnj t)" + by (intro coprime_self_gauss_cnj) + (auto simp: t_def gauss_int_norm_def assms(1) [symmetric] even_nat_iff) + moreover have "is_square (t * gauss_cnj t)" + by (subst t_mult_cnj) auto + hence "is_nth_power_upto_unit 2 (t * gauss_cnj t)" + by (auto intro: is_nth_power_upto_unit_base) + ultimately have "is_nth_power_upto_unit 2 t" + by (rule is_nth_power_upto_unit_mult_coprimeD1) + then obtain a b where ab: "is_unit a" "a * t = b ^ 2" + by (auto simp: is_nth_power_upto_unit_def is_nth_power_def) + from ab(1) have "a \ {1, -1, \\<^sub>\, -\\<^sub>\}" + by (auto simp: is_unit_gauss_int_iff) + then obtain u v :: int where "ReZ t = 2 * u * v \ ImZ t = u ^ 2 - v ^ 2 \ + ImZ t = 2 * u * v \ ReZ t = u ^ 2 - v ^ 2" + proof safe + assume [simp]: "a = 1" + have "ReZ t = ReZ b ^ 2 - ImZ b ^ 2" "ImZ t = 2 * ReZ b * ImZ b" using ab(2) + by (auto simp: gauss_int_eq_iff power2_eq_square) + thus ?thesis using that by blast + next + assume [simp]: "a = -1" + have "ReZ t = ImZ b ^ 2 - (-ReZ b) ^ 2" "ImZ t = 2 * ImZ b * (-ReZ b)" using ab(2) + by (auto simp: gauss_int_eq_iff power2_eq_square algebra_simps) + thus ?thesis using that by blast + next + assume [simp]: "a = \\<^sub>\" + hence "ImZ t = ImZ b ^ 2 - ReZ b ^ 2" "ReZ t = 2 * ImZ b * ReZ b" using ab(2) + by (auto simp: gauss_int_eq_iff power2_eq_square algebra_simps) + thus ?thesis using that by blast + next + assume [simp]: "a = -\\<^sub>\" + hence "ImZ t = (-ReZ b) ^ 2 - ImZ b ^ 2" "ReZ t = 2 * (-ReZ b) * ImZ b" using ab(2) + by (auto simp: gauss_int_eq_iff power2_eq_square algebra_simps) + thus ?thesis using that by blast + qed + also have "ReZ t = x" + by (simp add: t_def) + also have "ImZ t = y" + by (simp add: t_def) + finally have xy: "x = 2 * u * v \ y = u\<^sup>2 - v\<^sup>2 \ x = u\<^sup>2 - v\<^sup>2 \ y = 2 * u * v" + by blast + + have not_both_odd: "\(odd u \ odd v)" + proof safe + assume "odd u" "odd v" + hence "even x" "even y" + using xy by auto + with \coprime x y\ show False + by auto + qed + + have "coprime u v" + proof (rule coprimeI) + fix d assume "d dvd u" "d dvd v" + hence "d dvd (u\<^sup>2 - v\<^sup>2)" "d dvd 2 * u * v" + by (auto simp: power2_eq_square) + with xy have "d dvd x" "d dvd y" + by auto + with \coprime x y\ show "is_unit d" + using not_coprimeI by blast + qed + with xy not_both_odd show ?thesis + 
using that[of u v] by blast +qed + +lemma prim_pyth_tripleE: + assumes "prim_pyth_triple x y z" + obtains u v :: nat + where "0 < v" and "v < u" and "coprime u v" and "\(odd u \ odd v)" and "z = u\<^sup>2 + v\<^sup>2" + and "x = 2 * u * v \ y = u\<^sup>2 - v\<^sup>2 \ x = u\<^sup>2 - v\<^sup>2 \ y = 2 * u * v" +proof - + have *: "(int z) ^ 2 = (int x) ^ 2 + (int y) ^ 2" "coprime (int x) (int y)" + using assms by (auto simp flip: of_nat_power of_nat_add simp: prim_pyth_triple_def) + obtain u v + where uv: "coprime u v" "\(odd u \ odd v)" + "int x = 2 * u * v \ int y = u\<^sup>2 - v\<^sup>2 \ int x = u\<^sup>2 - v\<^sup>2 \ int y = 2 * u * v" + using primitive_pythagorean_tripleE_int[OF *] by metis + define u' v' where "u' = nat \u\" and "v' = nat \v\" + + have **: "a = 2 * u' * v'" if "int a = 2 * u * v" for a + proof - + from that have "nat \int a\ = nat \2 * u * v\" + by (simp only: ) + thus "a = 2 * u' * v'" + by (simp add: u'_def v'_def abs_mult nat_mult_distrib) + qed + have ***: "a = u' ^ 2 - v' ^ 2" "v' \ u'" if "int a = u ^ 2 - v ^ 2" for a + proof - + have "v ^ 2 \ v ^ 2 + int a" + by simp + also have "\ = u ^ 2" + using that by simp + finally have "\v\ \ \u\" + using abs_le_square_iff by blast + thus "v' \ u'" + by (simp add: v'_def u'_def) + + from that have "u ^ 2 = v ^ 2 + int a" + by simp + hence "nat \u ^ 2\ = nat \v ^ 2 + int a\" + by (simp only: ) + also have "nat \u ^ 2\ = u' ^ 2" + by (simp add: u'_def flip: nat_power_eq) + also have "nat \v ^ 2 + int a\ = v' ^ 2 + a" + by (simp add: nat_add_distrib v'_def flip: nat_power_eq) + finally show "a = u' ^ 2 - v' ^ 2" + by simp + qed + + have eq: "x = 2 * u' * v' \ y = u'\<^sup>2 - v'\<^sup>2 \ x = u'\<^sup>2 - v'\<^sup>2 \ y = 2 * u' * v'" and "v' \ u'" + using uv(3) **[of x] **[of y] ***[of x] ***[of y] by blast+ + moreover have "coprime u' v'" + using \coprime u v\ + by (auto simp: u'_def v'_def) + moreover have "\(odd u' \ odd v')" + using uv(2) by (auto simp: u'_def v'_def) + moreover have "v' \ u'" "v' > 0" + using \coprime u' v'\ eq assms by (auto simp: prim_pyth_triple_def) + moreover from this have "v' < u'" + using \v' \ u'\ by auto + moreover have "z = u'\<^sup>2 + v'\<^sup>2" + proof - + from assms have "z ^ 2 = x ^ 2 + y ^ 2" + by (simp add: prim_pyth_triple_def) + also have "\ = (2 * u' * v') ^ 2 + (u' ^ 2 - v' ^ 2) ^ 2" + using eq by (auto simp: add_ac) + also have "\ = (u' ^ 2 + v' ^ 2) ^ 2" + by (intro prim_pyth_triple_aux) fact + finally show ?thesis by simp + qed + ultimately show ?thesis using that[of v' u'] by metis +qed + +theorem prim_pyth_triple_iff: + "prim_pyth_triple x y z \ + (\u v. 
0 < v \ v < u \ coprime u v \ \(odd u \ odd v) \ + (x = 2 * u * v \ y = u\<^sup>2 - v\<^sup>2 \ x = u\<^sup>2 - v\<^sup>2 \ y = 2 * u * v) \ z = u\<^sup>2 + v\<^sup>2)" + (is "_ \ ?rhs") +proof + assume "prim_pyth_triple x y z" + from prim_pyth_tripleE[OF this] show ?rhs by metis +next + assume ?rhs + then obtain u v where uv: "0 < v" "v < u" "coprime u v" "\(odd u \ odd v)" "z = u\<^sup>2 + v\<^sup>2" and + eq: "x = 2 * u * v \ y = u\<^sup>2 - v\<^sup>2 \ x = u\<^sup>2 - v\<^sup>2 \ y = 2 * u * v" + by metis + thus "prim_pyth_triple x y z" + using uv prim_pyth_tripleI1[OF uv(1-4)] prim_pyth_tripleI2[OF uv(1-4)] uv(5) eq by auto +qed + +end \ No newline at end of file diff --git a/thys/Gaussian_Integers/Gaussian_Integers_Sums_Of_Two_Squares.thy b/thys/Gaussian_Integers/Gaussian_Integers_Sums_Of_Two_Squares.thy new file mode 100644 --- /dev/null +++ b/thys/Gaussian_Integers/Gaussian_Integers_Sums_Of_Two_Squares.thy @@ -0,0 +1,150 @@ +(* + File: Gaussian_Integers_Sums_Of_Two_Squares.thy + Author: Manuel Eberl, TU München + + Application of Gaussian integers to determining which natural numbers can be written as a sum + of two squares +*) +subsection \Sums of two squares\ +theory Gaussian_Integers_Sums_Of_Two_Squares + imports Gaussian_Integers +begin + +text \ + As an application, we can now easily prove that a positive natural number is the + sum of two squares if and only if all prime factors congruent 3 modulo 4 have even + multiplicity. +\ + +inductive sum_of_2_squares_nat :: "nat \ bool" where + "sum_of_2_squares_nat (a ^ 2 + b ^ 2)" + +lemma sum_of_2_squares_nat_altdef: "sum_of_2_squares_nat n \ n \ range gauss_int_norm" +proof (safe elim!: sum_of_2_squares_nat.cases) + fix a b :: nat + have "a ^ 2 + b ^ 2 = gauss_int_norm (of_nat a + \\<^sub>\ * of_nat b)" + by (auto simp: gauss_int_norm_def nat_add_distrib nat_power_eq) + thus "a ^ 2 + b ^ 2 \ range gauss_int_norm" by blast +next + fix z :: gauss_int + have "gauss_int_norm z = nat \ReZ z\ ^ 2 + nat \ImZ z\ ^ 2" + by (auto simp: gauss_int_norm_def nat_add_distrib simp flip: nat_power_eq) + thus "sum_of_2_squares_nat (gauss_int_norm z)" + by (auto intro: sum_of_2_squares_nat.intros) +qed + +lemma sum_of_2_squares_nat_gauss_int_norm [intro]: "sum_of_2_squares_nat (gauss_int_norm z)" + by (auto simp: sum_of_2_squares_nat_altdef) + +lemma sum_of_2_squares_nat_0 [simp, intro]: "sum_of_2_squares_nat 0" + and sum_of_2_squares_nat_1 [simp, intro]: "sum_of_2_squares_nat 1" + and sum_of_2_squares_nat_Suc_0 [simp, intro]: "sum_of_2_squares_nat (Suc 0)" + and sum_of_2_squares_nat_2 [simp, intro]: "sum_of_2_squares_nat 2" + using sum_of_2_squares_nat.intros[of 0 0] sum_of_2_squares_nat.intros[of 0 1] + sum_of_2_squares_nat.intros[of 1 1] by (simp_all add: numeral_2_eq_2) + +lemma sum_of_2_squares_nat_mult [intro]: + assumes "sum_of_2_squares_nat x" "sum_of_2_squares_nat y" + shows "sum_of_2_squares_nat (x * y)" +proof - + from assms obtain z1 z2 where "x = gauss_int_norm z1" "y = gauss_int_norm z2" + by (auto simp: sum_of_2_squares_nat_altdef) + hence "x * y = gauss_int_norm (z1 * z2)" + by (simp add: gauss_int_norm_mult) + thus ?thesis by auto +qed + +lemma sum_of_2_squares_nat_power [intro]: + assumes "sum_of_2_squares_nat m" + shows "sum_of_2_squares_nat (m ^ n)" + using assms by (induction n) auto + +lemma sum_of_2_squares_nat_prod [intro]: + assumes "\x. x \ A \ sum_of_2_squares_nat (f x)" + shows "sum_of_2_squares_nat (\x\A. 
f x)" + using assms by (induction A rule: infinite_finite_induct) auto + +lemma sum_of_2_squares_nat_prod_mset [intro]: + assumes "\x. x \# A \ sum_of_2_squares_nat x" + shows "sum_of_2_squares_nat (prod_mset A)" + using assms by (induction A) auto + +lemma sum_of_2_squares_nat_necessary: + assumes "sum_of_2_squares_nat n" "n > 0" + assumes "prime p" "[p = 3] (mod 4)" + shows "even (multiplicity p n)" +proof - + define k where "k = multiplicity p n" + from assms obtain z where z: "gauss_int_norm z = n" + by (auto simp: sum_of_2_squares_nat_altdef) + from assms and z have [simp]: "z \ 0" + by auto + have prime': "prime (of_nat p :: gauss_int)" + using assms prime_gauss_int_of_nat by blast + have [simp]: "multiplicity (of_nat p) (gauss_cnj z) = multiplicity (of_nat p) z" + using multiplicity_gauss_cnj[of "of_nat p" z] by simp + have "multiplicity (of_nat p) (of_nat n :: gauss_int) = + multiplicity (of_nat p) (z * gauss_cnj z)" + using z by (simp add: self_mult_gauss_cnj) + also have "\ = 2 * multiplicity (of_nat p) z" + using prime' by (subst prime_elem_multiplicity_mult_distrib) auto + finally have "multiplicity p n = 2 * multiplicity (of_nat p) z" + by (subst (asm) multiplicity_gauss_int_of_nat) + thus ?thesis by auto +qed + +lemma sum_of_2_squares_nat_sufficient: + fixes n :: nat + assumes "n > 0" + assumes "\p. p \ prime_factors n \ [p = 3] (mod 4) \ even (multiplicity p n)" + shows "sum_of_2_squares_nat n" +proof - + define P2 where "P2 = {p\prime_factors n. [p = 1] (mod 4)}" + define P3 where "P3 = {p\prime_factors n. [p = 3] (mod 4)}" + from \n > 0\ have "n = (\p\prime_factors n. p ^ multiplicity p n)" + by (subst prime_factorization_nat) auto + also have "\ = (\p\{2}\P2\P3. p ^ multiplicity p n)" + using prime_mod_4_cases + by (intro prod.mono_neutral_left) + (auto simp: P2_def P3_def in_prime_factors_iff not_dvd_imp_multiplicity_0) + also have "\ = (\p\{2}\P2. p ^ multiplicity p n) * (\p\P3. p ^ multiplicity p n)" + by (intro prod.union_disjoint) (auto simp: P2_def P3_def cong_def) + also have "(\p\{2}\P2. p ^ multiplicity p n) = + 2 ^ multiplicity 2 n * (\p\P2. p ^ multiplicity p n)" + by (subst prod.union_disjoint) (auto simp: P2_def cong_def) + also have "(\p\P3. p ^ multiplicity p n) = (\p\P3. (p ^ 2) ^ (multiplicity p n div 2))" + proof (intro prod.cong refl) + fix p :: nat assume p: "p \ P3" + have "(p ^ 2) ^ (multiplicity p n div 2) = p ^ (2 * (multiplicity p n div 2))" + by (simp add: power_mult) + also have "even (multiplicity p n)" + using assms p by (auto simp: P3_def) + hence "2 * (multiplicity p n div 2) = multiplicity p n" + by simp + finally show "p ^ multiplicity p n = (p ^ 2) ^ (multiplicity p n div 2)" + by simp + qed + finally have "n = 2 ^ multiplicity 2 n * (\p\P2. p ^ multiplicity p n) * + (\p\P3. p\<^sup>2 ^ (multiplicity p n div 2))" . + + also have "sum_of_2_squares_nat \" + proof (intro sum_of_2_squares_nat_mult sum_of_2_squares_nat_prod; rule sum_of_2_squares_nat_power) + fix p :: nat assume p: "p \ P2" + with prime_cong_1_mod_4_gauss_int_norm_exists[of p] show "sum_of_2_squares_nat p" + by (auto simp: P2_def) + next + fix p :: nat assume p: "p \ P3" + have "sum_of_2_squares_nat (gauss_int_norm (of_nat p))" .. + also have "gauss_int_norm (of_nat p) = p ^ 2" + by simp + finally show "sum_of_2_squares_nat (p ^ 2)" . + qed auto + finally show ?thesis . +qed + +theorem sum_of_2_squares_nat_iff: + "sum_of_2_squares_nat n \ + n = 0 \ (\p\prime_factors n. 
[p = 3] (mod 4) \ even (multiplicity p n))" + using sum_of_2_squares_nat_necessary[of n] sum_of_2_squares_nat_sufficient[of n] by auto + +end \ No newline at end of file diff --git a/thys/Gaussian_Integers/Gaussian_Integers_Test.thy b/thys/Gaussian_Integers/Gaussian_Integers_Test.thy new file mode 100644 --- /dev/null +++ b/thys/Gaussian_Integers/Gaussian_Integers_Test.thy @@ -0,0 +1,32 @@ +(* + File: Gaussian_Integers_Test.thy + Author: Manuel Eberl, TU München + + Test of code generation for executable factorisation algorithm for Gaussian integers +*) +theory Gaussian_Integers_Test +imports + Gaussian_Integers + "Polynomial_Factorization.Prime_Factorization" + "HOL-Library.Code_Target_Numeral" +begin + +text \ + Lastly, we apply our factorisation algorithm to some simple examples: +\ + +(*<*) +context + includes gauss_int_notation +begin +(*>*) + +value "(1234 + 5678 * \\<^sub>\) mod (321 + 654 * \\<^sub>\)" +value "prime_factors (1 + 3 * \\<^sub>\)" +value "prime_factors (4830 + 1610 * \\<^sub>\)" + +(*<*) +end +(*>*) + +end \ No newline at end of file diff --git a/thys/Gaussian_Integers/ROOT b/thys/Gaussian_Integers/ROOT new file mode 100644 --- /dev/null +++ b/thys/Gaussian_Integers/ROOT @@ -0,0 +1,14 @@ +chapter AFP + +session Gaussian_Integers (AFP) = "HOL-Number_Theory" + + options [timeout = 1200] + sessions + "HOL-Computational_Algebra" + "HOL-Library" + Polynomial_Factorization + theories + Gaussian_Integers_Everything + document_files + "root.tex" + "root.bib" + diff --git a/thys/Gaussian_Integers/document/root.bib b/thys/Gaussian_Integers/document/root.bib new file mode 100644 --- /dev/null +++ b/thys/Gaussian_Integers/document/root.bib @@ -0,0 +1,13 @@ +@Inbook{stillwell, +author="Stillwell, John", +title="The Gaussian integers", +bookTitle="Elements of Number Theory", +year="2003", +publisher="Springer New York", +address="New York, NY", +pages="101--116", +isbn="978-0-387-21735-2", +doi="10.1007/978-0-387-21735-2_6", +url="https://doi.org/10.1007/978-0-387-21735-2_6" +} + diff --git a/thys/Gaussian_Integers/document/root.tex b/thys/Gaussian_Integers/document/root.tex new file mode 100644 --- /dev/null +++ b/thys/Gaussian_Integers/document/root.tex @@ -0,0 +1,44 @@ +\documentclass[11pt,a4paper]{article} +\usepackage{isabelle,isabellesym} +\usepackage{amsfonts, amsmath, amssymb} + +% this should be the last package used +\usepackage{pdfsetup} + +% urls in roman style, theory text in math-similar italics +\urlstyle{rm} +\isabellestyle{it} + +\begin{document} + +\title{Gaussian Integers} +\author{Manuel Eberl} +\maketitle + +\begin{abstract} +The Gaussian integers are the subring $\mathbb{Z}[i]$ of the complex numbers, i.\,e.\ the ring of all complex numbers with integral real and imaginary part. This article provides a definition of this ring as well as proofs of various basic properties, such as that they form a Euclidean ring and a full classification of their primes. An executable (albeit not very efficient) factorisation algorithm is also provided. + +Lastly, this Gaussian integer formalisation is used in two short applications: +\begin{enumerate} +\item The characterisation of all positive integers that can be written as sums of two squares +\item Euclid's formula for primitive Pythagorean triples +\end{enumerate} +While elementary proofs for both of these are already available in the AFP, the theory of Gaussian integers provides more concise proofs and a more high-level view. 
+\end{abstract} + +\newpage +\tableofcontents +\newpage +\parindent 0pt\parskip 0.5ex + +\input{session} + +\bibliographystyle{abbrv} +\bibliography{root} + +\end{document} + +%%% Local Variables: +%%% mode: latex +%%% TeX-master: t +%%% End: diff --git a/thys/Irrational_Series_Erdos_Straus/Irrational_Series_Erdos_Straus.thy b/thys/Irrational_Series_Erdos_Straus/Irrational_Series_Erdos_Straus.thy new file mode 100644 --- /dev/null +++ b/thys/Irrational_Series_Erdos_Straus/Irrational_Series_Erdos_Straus.thy @@ -0,0 +1,2020 @@ +(* Title: Irrational_Series_Erdos_Straus.thy + Author: Angeliki Koutsoukou-Argyraki and Wenda Li, University of Cambridge, UK. + +We formalise certain irrationality criteria for infinite series by P. Erdos and E.G. Straus. +In particular, we formalise Theorem 2.1, Corollary 2.10 and Theorem 3.1 in [1]. The latter is an +application of Theorem 2.1 involving the prime numbers. + +References: +[1] P. Erdos and E.G. Straus, On the irrationality of certain series, Pacific Journal of +Mathematics, Vol. 55, No 1, 1974 https://projecteuclid.org/euclid.pjm/1102911140 +*) + +theory "Irrational_Series_Erdos_Straus" imports + Prime_Number_Theorem.Prime_Number_Theorem + Prime_Distribution_Elementary.PNT_Consequences +begin + +section \Miscellaneous\ + +lemma suminf_comparison: + assumes "summable f" and "\n. norm (g n) \ f n" + shows "suminf g \ suminf f" +proof (rule suminf_le) + show "\n. g n \ f n" + apply rule + subgoal for n using assms(2)[rule_format,of n] by auto + done + show "summable g" + apply (rule summable_comparison_test'[OF \summable f\, of 0]) + using assms(2) by auto + show "summable f" using assms(1) . +qed + +lemma tendsto_of_int_diff_0: + assumes "(\n. f n - of_int(g n)) \ (0::real)" "\\<^sub>F n in sequentially. f n > 0" + shows "\\<^sub>F n in sequentially. 0 \ g n" +proof - + have "\\<^sub>F n in sequentially. \f n - of_int(g n)\ < 1 / 2" + using assms(1)[unfolded tendsto_iff,rule_format,of "1/2"] by auto + then show ?thesis using assms(2) + apply eventually_elim + by linarith +qed + +lemma eventually_mono_sequentially: + assumes "eventually P sequentially" + assumes "\x. P (x+k) \ Q (x+k)" + shows "eventually Q sequentially" + using sequentially_offset[OF assms(1),of k] + apply (subst eventually_sequentially_seg[symmetric,of _ k]) + apply (elim eventually_mono) + by fact + +lemma frequently_eventually_at_top: + fixes P Q::"'a::linorder \ bool" + assumes "frequently P at_top" "eventually Q at_top" + shows "frequently (\x. P x \ (\y\x. Q y) ) at_top" + using assms + unfolding frequently_def eventually_at_top_linorder + by (metis (mono_tags, hide_lams) le_cases order_trans) + +lemma eventually_at_top_mono: + fixes P Q::"'a::linorder \ bool" + assumes event_P:"eventually P at_top" + assumes PQ_imp:"\x. x\z \ \y\x. P y \ Q x" + shows "eventually Q at_top" +proof - + obtain N where N_P:"\n\N. P n" + using event_P[unfolded eventually_at_top_linorder] by auto + define N' where "N' = max N z" + have "Q x" when "x\N'" for x + apply (rule PQ_imp) + using N_P that unfolding N'_def by auto + then show ?thesis unfolding eventually_at_top_linorder + by auto +qed + +lemma frequently_at_top_elim: + fixes P Q::"'a::linorder \ bool" + assumes "\\<^sub>Fx in at_top. P x" + assumes "\i. P i \ \j>i. Q j" + shows "\\<^sub>Fx in at_top. Q x" + using assms unfolding frequently_def eventually_at_top_linorder + by (meson leD le_cases less_le_trans) + +lemma less_Liminf_iff: + fixes X :: "_ \ _ :: complete_linorder" + shows "Liminf F X < C \ (\yx. 
y \ X x) F)" + apply (subst Not_eq_iff[symmetric]) + apply (simp add:not_less not_frequently not_le le_Liminf_iff) + by force + +lemma sequentially_even_odd_imp: + assumes "\\<^sub>F N in sequentially. P (2*N)" "\\<^sub>F N in sequentially. P (2*N+1)" + shows "\\<^sub>F n in sequentially. P n" +proof - + obtain N where N_P:"\x\N. P (2 * x) \ P (2 * x + 1)" + using eventually_conj[OF assms] + unfolding eventually_at_top_linorder by auto + define N' where "N'=2*N " + have "P n" when "n\2*N" for n + proof - + define n' where "n'= n div 2" + then have "n'\N" using that by auto + then have "P (2 * n') \ P (2 * n' + 1)" + using N_P by auto + then show ?thesis unfolding n'_def + apply (cases "even n") + by auto + qed + then show ?thesis unfolding eventually_at_top_linorder by auto +qed + + +section \Theorem 2.1 and Corollary 2.10\ + +context + fixes a b ::"nat\int " + assumes a_pos:"\ n. a n >0 " and a_large:"\\<^sub>F n in sequentially. a n > 1" + and ab_tendsto: "(\n. \b n\ / (a (n-1)*a n)) \ 0" +begin + +private lemma aux_series_summable: "summable (\n. b n / (\k\n. a k))" +proof - + have "\e>0. \\<^sub>F x in sequentially. \b x\ / (a (x-1) * a x) < e" + using ab_tendsto[unfolded tendsto_iff] + apply (simp add:of_int_abs[symmetric] abs_mult del:of_int_abs) + by (subst (asm) (2) abs_of_pos,use \\ n. a n > 0\ in auto)+ + from this[rule_format,of 1] + have "\\<^sub>F x in sequentially. \real_of_int(b x)\ < (a (x-1) * a x)" + using \\ n. a n >0\ by auto + moreover have "\n. (\k\n. real_of_int (a k)) > 0" + using a_pos by (auto intro!:linordered_semidom_class.prod_pos) + ultimately have "\\<^sub>F n in sequentially. \b n\ / (\k\n. a k) + < (a (n-1) * a n) / (\k\n. a k)" + apply (elim eventually_mono) + by (auto simp add:field_simps) + moreover have "\b n\ / (\k\n. a k) = norm (b n / (\k\n. a k))" for n + using \\n. (\k\n. real_of_int (a k)) > 0\[rule_format,of n] by auto + ultimately have "\\<^sub>F n in sequentially. norm (b n / (\k\n. a k)) + < (a (n-1) * a n) / (\k\n. a k)" + by algebra + moreover have "summable (\n. (a (n-1) * a n) / (\k\n. a k))" + proof - + obtain s where a_gt_1:"\ n\s. a n >1" + using a_large[unfolded eventually_at_top_linorder] by auto + define cc where "cc= (\k0" + unfolding cc_def + apply (rule linordered_semidom_class.prod_pos) + using a_pos by auto + have "(\k\n+s. a k) \ cc * 2^n" for n + proof - + have "prod a {s.. 2^n" + proof (induct n) + case 0 + then show ?case using a_gt_1 by auto + next + case (Suc n) + moreover have "a (s + Suc n) \ 2" + using a_gt_1 by (smt le_add1) + ultimately show ?case + apply (subst prod.atLeastLessThan_Suc,simp) + using mult_mono'[of 2 "a (Suc (s + n))" " 2 ^ n" "prod a {s..cc>0\ unfolding cc_def by (simp add: atLeast0AtMost) + qed + then have "1/(\k\n+s. a k) \ 1/(cc * 2^n)" for n + proof - + assume asm:"\n. cc * 2 ^ n \ prod a {..n + s}" + then have "real_of_int (cc * 2 ^ n) \ prod a {..n + s}" using of_int_le_iff by blast + moreover have "prod a {..n + s} >0" using \cc>0\ by (simp add: a_pos prod_pos) + ultimately show ?thesis using \cc>0\ + by (auto simp:field_simps simp del:of_int_prod) + qed + moreover have "summable (\n. 1/(cc * 2^n))" + proof - + have "summable (\n. 1/(2::int)^n)" + using summable_geometric[of "1/(2::int)"] by (simp add:power_one_over) + from summable_mult[OF this,of "1/cc"] show ?thesis by auto + qed + ultimately have "summable (\n. 1 / (\k\n+s. a k))" + apply (elim summable_comparison_test'[where N=0]) + apply (unfold real_norm_def, subst abs_of_pos) + by (auto simp add: \\n. 0 < (\k\n. 
real_of_int (a k))\) + then have "summable (\n. 1 / (\k\n. a k))" + apply (subst summable_iff_shift[where k=s,symmetric]) + by simp + then have "summable (\n. (a (n+1) * a (n+2)) / (\k\n+2. a k))" + proof - + assume asm:"summable (\n. 1 / real_of_int (prod a {..n}))" + have "1 / real_of_int (prod a {..n}) = (a (n+1) * a (n+2)) / (\k\n+2. a k)" for n + proof - + have "a (Suc (Suc n)) \ 0" "a (Suc n) \0" + using a_pos by (metis less_irrefl)+ + then show ?thesis + by (simp add: atLeast0_atMost_Suc atMost_atLeast0) + qed + then show ?thesis using asm by auto + qed + then show "summable (\n. (a (n-1) * a n) / (\k\n. a k))" + apply (subst summable_iff_shift[symmetric,of _ 2]) + by auto + qed + ultimately show ?thesis + apply (elim summable_comparison_test_ev[rotated]) + by (simp add: eventually_mono) +qed + +private fun get_c::"(nat \ int) \ (nat \ int) \ int \ nat \ (nat \ int)" where + "get_c a' b' B N 0 = round (B * b' N / a' N)"| + "get_c a' b' B N (Suc n) = get_c a' b' B N n * a' (n+N) - B * b' (n+N)" + +lemma ab_rationality_imp: + assumes ab_rational:"(\n. (b n / (\i \ n. a i))) \ \" + shows "\ (B::int)>0. \ c::nat\ int. + (\\<^sub>F n in sequentially. B*b n = c n * a n - c(n+1) \ \c(n+1)\ (\n. c (Suc n) / a n) \ 0" +proof - + have [simp]:"a n \ 0" for n using a_pos by (metis less_numeral_extra(3)) + obtain A::int and B::int where + AB_eq:"(\n. real_of_int (b n) / real_of_int (prod a {..n})) = A / B" and "B>0" + proof - + obtain q::rat where "(\n. real_of_int (b n) / real_of_int (prod a {..n})) = real_of_rat q" + using ab_rational by (rule Rats_cases) simp + moreover obtain A::int and B::int where "q = Rat.Fract A B" "B > 0" "coprime A B" + by (rule Rat_cases) auto + ultimately show ?thesis by (auto intro!: that[of A B] simp:of_rat_rat) + qed + define f where "f = (\n. b n / real_of_int (prod a {..n}))" + define R where "R = (\N. (\n. B*b (n+N+1) / prod a {N..n+N+1}))" + have all_e_ubound:"\e>0. \\<^sub>F M in sequentially. \n. \B*b (n+M+1) / prod a {M..n+M+1}\ < e/4 * 1/2^n" + proof safe + fix e::real assume "e>0" + obtain N where N_a2: "\n \ N. a n \ 2" + and N_ba: "\n \ N. \b n\ / (a (n-1) * a n) < e/(4*B)" + proof - + have "\\<^sub>F n in sequentially. \b n\ / (a (n - 1) * a n) < e/(4*B)" + using order_topology_class.order_tendstoD[OF ab_tendsto,of "e/(4*B)"] \B>0\ \e>0\ + by auto + moreover have "\\<^sub>F n in sequentially. a n \ 2" + using a_large by (auto elim: eventually_mono) + ultimately have "\\<^sub>F n in sequentially. \b n\ / (a (n - 1) * a n) < e/(4*B) \ a n \ 2" + by eventually_elim auto + then show ?thesis unfolding eventually_at_top_linorder using that + by auto + qed + have geq_N_bound:"\B*b (n+M+1) / prod a {M..n+M+1}\ < e/4 * 1/2^n" when "M\N" for n M + proof - + define D where "D = B*b (n+M+1)/ (a (n+M) * a (n+M+1))" + have "\B*b (n+M+1) / prod a {M..n+M+1}\ = \D / prod a {M.." + proof - + have "{M..n+M+1} = {M.. {n+M,n+M+1}" by auto + then have "prod a {M..n+M+1} = a (n+M) * a (n+M+1)* prod a {M..e/4 * (1/prod a {M.." + proof - + have "\D\ < e/ 4" + unfolding D_def using N_ba[rule_format, of "n+M+1"] \B>0\ \M \ N\ \e>0\ a_pos + by (auto simp:field_simps abs_mult abs_of_pos) + from mult_strict_right_mono[OF this,of "1/prod a {M..e>0\ + show ?thesis + apply (auto simp:abs_prod abs_mult prod_pos) + by (subst (2) abs_of_pos,auto)+ + qed + also have "... \ e/4 * 1/2^n" + proof - + have "prod a {M.. 
2^n" + proof (induct n) + case 0 + then show ?case by simp + next + case (Suc n) + then show ?case + using \M\N\ by (simp add: N_a2 mult.commute mult_mono' prod.atLeastLessThan_Suc) + qed + then have "real_of_int (prod a {M.. 2^n" + using numeral_power_le_of_int_cancel_iff by blast + then show ?thesis using \e>0\ by (auto simp add:divide_simps) + qed + finally show ?thesis . + qed + show "\\<^sub>F M in sequentially. \n. \real_of_int (B * b (n + M + 1)) + / real_of_int (prod a {M..n + M + 1})\ < e / 4 * 1 / 2 ^ n" + apply (rule eventually_sequentiallyI[of N]) + using geq_N_bound by blast + qed + have R_tendsto_0:"R \ 0" + proof (rule tendstoI) + fix e::real assume "e>0" + show "\\<^sub>F x in sequentially. dist (R x) 0 < e" using all_e_ubound[rule_format,OF \e>0\] + proof eventually_elim + case (elim M) + define g where "g = (\n. B*b (n+M+1) / prod a {M..n+M+1})" + have g_lt:"\g n\ < e/4 * 1/2^n" for n + using elim unfolding g_def by auto + have g_abs_summable:"summable (\n. \g n\)" + proof - + have "summable (\n. e/4 * 1/2^n)" + using summable_geometric[of "1/2",THEN summable_mult,of "e/4",simplified] + by (auto simp add:algebra_simps power_divide) + then show ?thesis + apply (elim summable_comparison_test') + using g_lt less_eq_real_def by auto + qed + have "\\n. g n\ \ (\n. \g n\)" by (rule summable_rabs[OF g_abs_summable]) + also have "... \(\n. e/4 * 1/2^n)" + proof (rule suminf_comparison) + show "summable (\n. e/4 * 1/2^n)" + using summable_geometric[of "1/2",THEN summable_mult,of "e/4",simplified] + by (auto simp add:algebra_simps power_divide) + show "\n. norm \g n\ \ e / 4 * 1 / 2 ^ n" using g_lt less_eq_real_def by auto + qed + also have "... = (e/4) * (\n. (1/2)^n)" + apply (subst suminf_mult[symmetric]) + subgoal + apply (rule complete_algebra_summable_geometric) + by simp + subgoal by (auto simp:algebra_simps power_divide) + done + also have "... = e/2" by (simp add:suminf_geometric[of "1/2"]) + finally have "\\n. g n\ \ e / 2" . + then show "dist (R M) 0 < e" unfolding R_def g_def using \e>0\ by auto + qed + qed + + obtain N where R_N_bound:"\M \ N. \R M\ \ 1 / 4" + and N_geometric:"\M\N. \n. \real_of_int (B * b (n + M + 1)) / (prod a {M..n + M + 1})\ < 1 / 2 ^ n" + proof - + obtain N1 where N1:"\M \ N1. \R M\ \ 1 / 4" + using metric_LIMSEQ_D[OF R_tendsto_0,of "1/4"] all_e_ubound[rule_format,of 4,unfolded eventually_sequentially] + by (auto simp:less_eq_real_def) + obtain N2 where N2:"\M\N2. \n. \real_of_int (B * b (n + M + 1)) + / (prod a {M..n + M + 1})\ < 1 / 2 ^ n" + using all_e_ubound[rule_format,of 4,unfolded eventually_sequentially] + by (auto simp:less_eq_real_def) + define N where "N=max N1 N2" + show ?thesis using that[of N] N1 N2 unfolding N_def by simp + qed + + define C where "C = B * prod a {..nn. f n)" + unfolding AB_eq f_def using \B>0\ by auto + also have "... = B * prod a {..nn. f (n+N+1)))" + using suminf_split_initial_segment[OF \summable f\, of "N+1"] by auto + also have "... = B * prod a {..nn. f (n+N+1)))" + using sum.atLeast0_lessThan_Suc by simp + also have "... = C + B * b N / a N + (\n. B*b (n+N+1) / prod a {N..n+N+1})" + proof - + have "B * prod a {.. {N}" using ivl_disj_un_singleton(2) by blast + then show ?thesis unfolding f_def by auto + qed + moreover have "B * prod a {..n. f (n+N+1)) = (\n. B*b (n+N+1) / prod a {N..n+N+1})" + proof - + have "summable (\n. f (n + N + 1))" + using \summable f\ summable_iff_shift[of f "N+1"] by auto + moreover have "prod a {.. 
{N..n + N + 1}" by auto + then show ?thesis + unfolding f_def + apply simp + apply (subst prod.union_disjoint) + by auto + qed + ultimately show ?thesis + apply (subst suminf_mult[symmetric]) + by (auto simp add: mult.commute mult.left_commute) + qed + ultimately show ?thesis unfolding C_def by (auto simp:algebra_simps) + qed + also have "... = C +B * b N / a N + R N" + unfolding R_def by simp + finally show ?thesis . + qed + have R_bound:"\R M\ \ 1 / 4" and R_Suc:"R (Suc M) = a M * R M - B * b (Suc M) / a (Suc M)" + when "M \ N" for M + proof - + define g where "g = (\n. B*b (n+M+1) / prod a {M..n+M+1})" + have g_abs_summable:"summable (\n. \g n\)" + proof - + have "summable (\n.(1::real)/2^n)" + using summable_geometric[of "(1::real)/2",simplified] + by (auto elim!: back_subst[of "summable"] simp:field_simps) + moreover have "\g n\ < 1/2^n" for n + using N_geometric[rule_format,OF that] unfolding g_def by simp + ultimately show ?thesis + apply (elim summable_comparison_test') + using less_eq_real_def by auto + qed + show "\R M\ \ 1 / 4" using R_N_bound[rule_format,OF that] . + have "R M = (\n. g n)" unfolding R_def g_def by simp + also have "... = g 0 + (\n. g (Suc n))" + apply (subst suminf_split_head) + using summable_rabs_cancel[OF g_abs_summable] by auto + also have "... = g 0 + 1/a M * (\n. a M * g (Suc n))" + apply (subst suminf_mult) + by (auto simp add: g_abs_summable summable_Suc_iff summable_rabs_cancel) + also have "... = g 0 + 1/a M * R (Suc M)" + proof - + have "a M * g (Suc n) = B * b (n + M + 2) / prod a {Suc M..n + M + 2}" for n + proof - + have "{M..Suc (Suc (M + n))} = {M} \ {Suc M..Suc (Suc (M + n))}" by auto + then show ?thesis + unfolding g_def using \B>0\ by (auto simp add:algebra_simps) + qed + then have "(\n. a M * g (Suc n)) = R (Suc M)" + unfolding R_def by auto + then show ?thesis by auto + qed + finally have "R M = g 0 + 1 / a M * R (Suc M)" . + then have "R (Suc M) = a M * R M - g 0 * a M" + by (auto simp add:algebra_simps) + moreover have "{M..Suc M} = {M,Suc M}" by auto + ultimately show "R (Suc M) = a M * R M - B * b (Suc M) / a (Suc M)" + unfolding g_def by auto + qed + + define c where "c = (\n. if n\N then get_c a b B N (n-N) else undefined)" + have c_rec:"c (n+1) = c n * a n - B * b n" when "n \ N" for n + unfolding c_def using that by (auto simp:Suc_diff_le) + have c_R:"c (Suc n) / a n = R n" when "n \ N" for n + using that + proof (induct rule:nat_induct_at_least) + case base + have "\ c (N+1) / a N \ \ 1/2" + proof - + have "c N = round (B * b N / a N)" unfolding c_def by simp + moreover have "c (N+1) / a N = c N - B * b N / a N" + using a_pos[rule_format,of N] + by (auto simp add:c_rec[of N,simplified] divide_simps) + ultimately show ?thesis using of_int_round_abs_le by auto + qed + moreover have "\R N\ \ 1 / 4" using R_bound[of N] by simp + ultimately have "\c (N+1) / a N - R N \ < 1" by linarith + moreover have "c (N+1) / a N - R N \ \" + proof - + have "c (N+1) / a N = c N - B * b N / a N" + using a_pos[rule_format,of N] + by (auto simp add:c_rec[of N,simplified] divide_simps) + moreover have " B * b N / a N + R N \ \" + proof - + have "C = B * (\nn {..n}" if "nB>0\ + apply simp + apply (subst prod.union_disjoint) + by auto + qed + finally have "C = real_of_int (B * (\n \" using Ints_of_int by blast + moreover note \A * prod a {.. 
+ ultimately show ?thesis + by (metis Ints_diff Ints_of_int add.assoc add_diff_cancel_left') + qed + ultimately show ?thesis by (simp add: diff_diff_add) + qed + ultimately have "c (N+1) / a N - R N = 0" + by (metis Ints_cases less_irrefl of_int_0 of_int_lessD) + then show ?case by simp + next + case (Suc n) + have "c (Suc (Suc n)) / a (Suc n) = c (Suc n) - B * b (Suc n) / a (Suc n)" + apply (subst c_rec[of "Suc n",simplified]) + using \N \ n\ by (auto simp add: divide_simps) + also have "... = a n * R n - B * b (Suc n) / a (Suc n)" + using Suc by (auto simp: divide_simps) + also have "... = R (Suc n)" + using R_Suc[OF \N \ n\] by simp + finally show ?case . + qed + have ca_tendsto_zero:"(\n. c (Suc n) / a n) \ 0" + using R_tendsto_0 + apply (elim filterlim_mono_eventually) + using c_R by (auto intro!:eventually_sequentiallyI[of N]) + have ca_bound:"\c (n + 1)\ < a n / 2" when "n \ N" for n + proof - + have "\c (Suc n)\ / a n = \c (Suc n) / a n\" using a_pos[rule_format,of n] by auto + also have "... = \R n\" using c_R[OF that] by auto + also have "... < 1/2" using R_bound[OF that] by auto + finally have "\c (Suc n)\ / a n < 1/2" . + then show ?thesis using a_pos[rule_format,of n] by auto + qed + +(* (* the following part corresponds to (2.7) (2.8) in the original paper, but turns out to be + not necessary. *) + have c_round:"c n = round (B * b n / a n)" when "n \ N" for n + proof (cases "n=N") + case True + then show ?thesis unfolding c_def by simp + next + case False + with \n\N\ obtain n' where n_Suc:"n=Suc n'" and "n' \ N" + by (metis le_eq_less_or_eq lessE less_imp_le_nat) + have "B * b n / a n = c n - R n" + proof - + have "R n = c n - B * b n / a n" + using c_R[OF \n'\N\,symmetric,folded n_Suc] R_Suc[OF \n'\N\,folded n_Suc] + by (auto simp:field_simps) + then show ?thesis by (auto simp:field_simps) + qed + then have "\B * b n / a n - c n\ = \R n\" by auto + then have "\B * b n / a n - c n\ < 1/2" using R_bound[OF \n \ N\] by auto + from round_unique'[OF this] show ?thesis by auto + qed + *) + show "\B>0. \c. (\\<^sub>F n in sequentially. B * b n = c n * a n - c (n + 1) + \ real_of_int \c (n + 1)\ < a n / 2) \ (\n. c (Suc n) / a n) \ 0" + unfolding eventually_at_top_linorder + apply (rule exI[of _ B],use \B>0\ in simp) + apply (intro exI[of _c] exI[of _ N]) + using c_rec ca_bound ca_tendsto_zero + by fastforce +qed + +private lemma imp_ab_rational: + assumes "\ (B::int)>0. \ c::nat\ int. + (\\<^sub>F n in sequentially. B*b n = c n * a n - c(n+1) \ \c(n+1)\n. (b n / (\i \ n. a i))) \ \" +proof - + obtain B::int and c::"nat\int" and N::nat where "B>0" and + large_n:"\n\N. B * b n = c n * a n - c (n + 1) \ real_of_int \c (n + 1)\ < a n / 2 + \ a n\2" + proof - + obtain B c where "B>0" and event1:"\\<^sub>F n in sequentially. B * b n = c n * a n - c (n + 1) + \ real_of_int \c (n + 1)\ < real_of_int (a n) / 2" + using assms by auto + from eventually_conj[OF event1 a_large,unfolded eventually_at_top_linorder] + obtain N where "\n\N. (B * b n = c n * a n - c (n + 1) + \ real_of_int \c (n + 1)\ < real_of_int (a n) / 2) \ 2 \ a n" + by fastforce + then show ?thesis using that[of B N c] \B>0\ by auto + qed + define f where "f=(\n. real_of_int (b n) / real_of_int (prod a {..n}))" + define S where "S = (\n. f n)" + have "summable f" + unfolding f_def by (rule aux_series_summable) + define C where "C=B*prod a {..nn. (c (n+N) * a (n+N)) / prod a {N..n+N})" + define h2 where "h2 = (\n. c (n+N+1) / prod a {N..n+N})" + have f_h12:"B*prod a {..n. B * b (n+N))" + define g2 where "g2 = (\n. prod a {.. 
{N..n + N}) = prod a {.. {N..n + N}) = prod a {..n+N}" + by (simp add: ivl_disj_un_one(4)) + ultimately show ?thesis + unfolding g2_def + apply simp + using a_pos by (metis less_irrefl) + qed + ultimately have "B*prod a {..nn. f (n+N)))" + using suminf_split_initial_segment[OF \summable f\,of N] + unfolding S_def by (auto simp:algebra_simps) + also have "... = C + B*prod a {..n. f (n+N))" + unfolding C_def by (auto simp:algebra_simps) + also have "... = C + (\n. h1 n - h2 n)" + apply (subst suminf_mult[symmetric]) + subgoal using \summable f\ by (simp add: summable_iff_shift) + subgoal using f_h12 by auto + done + also have "... = C + h1 0" + proof - + have "(\n. \i\n. h1 i - h2 i) \ (\i. h1 i - h2 i)" + proof (rule summable_LIMSEQ') + have "(\i. h1 i - h2 i) = (\i. real_of_int (B * prod a {..i. h1 i - h2 i)" + using \summable f\ by (simp add: summable_iff_shift summable_mult) + qed + moreover have "(\i\n. h1 i - h2 i) = h1 0 - h2 n" for n + proof (induct n) + case 0 + then show ?case by simp + next + case (Suc n) + have "(\i\Suc n. h1 i - h2 i) = (\i\n. h1 i - h2 i) + h1 (n+1) - h2 (n+1)" + by auto + also have "... = h1 0 - h2 n + h1 (n+1) - h2 (n+1)" using Suc by auto + also have "... = h1 0 - h2 (n+1)" + proof - + have "h2 n = h1 (n+1)" + unfolding h2_def h1_def + apply (auto simp:prod.nat_ivl_Suc') + using a_pos by (metis less_numeral_extra(3)) + then show ?thesis by auto + qed + finally show ?case by simp + qed + ultimately have "(\n. h1 0 - h2 n) \ (\i. h1 i - h2 i)" by simp + then have "h2 \ (h1 0 - (\i. h1 i - h2 i))" + apply (elim metric_tendsto_imp_tendsto) + by (auto intro!:eventuallyI simp add:dist_real_def) + moreover have "h2 \ 0" + proof - + have h2_n:"\h2 n\ < (1 / 2)^(n+1)" for n + proof - + have "\h2 n\ = \c (n + N + 1)\ / prod a {N..n + N}" + unfolding h2_def abs_divide + using a_pos by (simp add: abs_of_pos prod_pos) + also have "... < (a (N+n) / 2) / prod a {N..n + N}" + unfolding h2_def + apply (rule divide_strict_right_mono) + subgoal using large_n[rule_format,of "N+n"] by (auto simp add:algebra_simps) + subgoal using a_pos by (simp add: prod_pos) + done + also have "... = 1 / (2*prod a {N.. (1/2)^(n+1)" + proof (induct n) + case 0 + then show ?case by auto + next + case (Suc n) + define P where "P=1 / real_of_int (2 * prod a {N.. ( (1 / 2) ^ (n + 1) ) / a (n+N) " + apply (rule divide_right_mono) + subgoal unfolding P_def using Suc by auto + subgoal by (simp add: a_pos less_imp_le) + done + also have "... \ ( (1 / 2) ^ (n + 1) ) / 2 " + apply (rule divide_left_mono) + using large_n[rule_format,of "n+N",simplified] by auto + also have "... = (1 / 2) ^ (n + 2)" by auto + finally show ?case by simp + qed + finally show ?thesis . + qed + have "(\n. (1 / 2)^(n+1)) \ (0::real)" + using tendsto_mult_right_zero[OF LIMSEQ_abs_realpow_zero2[of "1/2",simplified],of "1/2"] + by auto + then show ?thesis + apply (elim Lim_null_comparison[rotated]) + using h2_n less_eq_real_def by (auto intro!:eventuallyI) + qed + ultimately have "(\i. h1 i - h2 i) = h1 0" + using LIMSEQ_unique by fastforce + then show ?thesis by simp + qed + also have "... = C + c N" + unfolding h1_def using a_pos + by auto (metis less_irrefl) + finally show ?thesis . + qed + then have "S = (C + real_of_int (c N)) / (B*prod a {..0 < B\ a_pos less_irrefl mult.commute mult_pos_pos + nonzero_mult_div_cancel_right of_int_eq_0_iff prod_pos) + moreover have "... 
\ \" + unfolding C_def f_def by (intro Rats_divide Rats_add Rats_mult Rats_of_int Rats_sum) + ultimately show "S \ \" by auto +qed + +theorem theorem_2_1_Erdos_Straus : + "(\n. (b n / (\i \ n. a i))) \ \ \ (\ (B::int)>0. \ c::nat\ int. + (\\<^sub>F n in sequentially. B*b n = c n * a n - c(n+1) \ \c(n+1)\The following is a Corollary to Theorem 2.1. \ + +corollary corollary_2_10_Erdos_Straus: + assumes ab_event:"\\<^sub>F n in sequentially. b n > 0 \ a (n+1) \ a n" + and ba_lim_leq:"lim (\n. (b(n+1) - b n )/a n) \ 0" + and ba_lim_exist:"convergent (\n. (b(n+1) - b n )/a n)" + and "liminf (\n. a n / b n) = 0 " + shows "(\n. (b n / (\i \ n. a i))) \ \" +proof + assume "(\n. (b n / (\i \ n. a i))) \ \" + then obtain B c where "B>0" and abc_event:"\\<^sub>F n in sequentially. B * b n = c n * a n - c (n + 1) + \ \c (n + 1)\ < a n / 2" and ca_vanish: "(\n. c (Suc n) / a n) \ 0" + using ab_rationality_imp by auto + + have bac_close:"(\n. B * b n / a n - c n) \ 0" + proof - + have "\\<^sub>F n in sequentially. B * b n - c n * a n + c (n + 1) = 0" + using abc_event by (auto elim!:eventually_mono) + then have "\\<^sub>F n in sequentially. (B * b n - c n * a n + c (n+1)) / a n = 0 " + apply eventually_elim + by auto + then have "\\<^sub>F n in sequentially. B * b n / a n - c n + c (n + 1) / a n = 0" + apply eventually_elim + using a_pos by (auto simp:divide_simps) (metis less_irrefl) + then have "(\n. B * b n / a n - c n + c (n + 1) / a n) \ 0" + by (simp add: eventually_mono tendsto_iff) + from tendsto_diff[OF this ca_vanish] + show ?thesis by auto + qed + + have c_pos:"\\<^sub>F n in sequentially. c n > 0" + proof - + from bac_close have *:"\\<^sub>F n in sequentially. c n \ 0" + apply (elim tendsto_of_int_diff_0) + using ab_event a_large apply (eventually_elim) + using \B>0\ by auto + show ?thesis + proof (rule ccontr) + assume "\ (\\<^sub>F n in sequentially. c n > 0)" + moreover have "\\<^sub>F n in sequentially. c (Suc n) \ 0 \ c n\0" + using * eventually_sequentially_Suc[of "\n. c n\0"] + by (metis (mono_tags, lifting) eventually_at_top_linorder le_Suc_eq) + ultimately have "\\<^sub>F n in sequentially. c n = 0 \ c (Suc n) \ 0" + using eventually_elim2 frequently_def by fastforce + moreover have "\\<^sub>F n in sequentially. b n > 0 \ B * b n = c n * a n - c (n + 1)" + using ab_event abc_event by eventually_elim auto + ultimately have "\\<^sub>F n in sequentially. c n = 0 \ c (Suc n) \ 0 \ b n > 0 + \ B * b n = c n * a n - c (n + 1)" + using frequently_eventually_frequently by fastforce + from frequently_ex[OF this] + obtain n where "c n = 0" "c (Suc n) \ 0" "b n > 0" + "B * b n = c n * a n - c (n + 1)" + by auto + then have "B * b n \ 0" by auto + then show False using \b n>0\ \B > 0\ using mult_pos_pos not_le by blast + qed + qed + + have bc_epsilon:"\\<^sub>F n in sequentially. b (n+1) / b n > (c (n+1) - \) / c n" when "\>0" "\<1" for \::real + proof - + have "\\<^sub>F x in sequentially. \c (Suc x) / a x\ < \ / 2" + using ca_vanish[unfolded tendsto_iff,rule_format, of "\/2"] \\>0\ by auto + moreover then have "\\<^sub>F x in sequentially. \c (x+2) / a (x+1)\ < \ / 2" + apply (subst (asm) eventually_sequentially_Suc[symmetric]) + by simp + moreover have "\\<^sub>F n in sequentially. B * b (n+1) = c (n+1) * a (n+1) - c (n + 2)" + using abc_event + apply (subst (asm) eventually_sequentially_Suc[symmetric]) + by (auto elim:eventually_mono) + moreover have "\\<^sub>F n in sequentially. c n > 0 \ c (n+1) > 0 \ c (n+2) > 0" + proof - + have "\\<^sub>F n in sequentially. 
0 < c (Suc n)" + using c_pos by (subst eventually_sequentially_Suc) simp + moreover then have "\\<^sub>F n in sequentially. 0 < c (Suc (Suc n))" + using c_pos by (subst eventually_sequentially_Suc) simp + ultimately show ?thesis using c_pos by eventually_elim auto + qed + ultimately show ?thesis using ab_event abc_event + proof eventually_elim + case (elim n) + define \\<^sub>0 \\<^sub>1 where "\\<^sub>0 = c (n+1) / a n" and "\\<^sub>1 = c (n+2) / a (n+1)" + have "\\<^sub>0 > 0" "\\<^sub>1 > 0" "\\<^sub>0 < \/2" "\\<^sub>1 < \/2" using a_pos elim by (auto simp add: \\<^sub>0_def \\<^sub>1_def) + have "(\ - \\<^sub>1) * c n > 0" + apply (rule mult_pos_pos) + using \\\<^sub>1 > 0\ \\\<^sub>1 < \/2\ \\>0\ elim by auto + moreover have "\\<^sub>0 * (c (n+1) - \) > 0" + apply (rule mult_pos_pos[OF \\\<^sub>0 > 0\]) + using elim(4) that(2) by linarith + ultimately have "(\ - \\<^sub>1) * c n + \\<^sub>0 * (c (n+1) - \) > 0" by auto + moreover have "c n - \\<^sub>0 > 0" using \\\<^sub>0 < \ / 2\ elim(4) that(2) by linarith + moreover have "c n > 0" by (simp add: elim(4)) + ultimately have "(c (n+1) - \) / c n < (c (n+1) - \\<^sub>1) / (c n - \\<^sub>0)" + by (auto simp add: field_simps) + also have "... \ (c (n+1) - \\<^sub>1) / (c n - \\<^sub>0) * (a (n+1) / a n)" + proof - + have "(c (n+1) - \\<^sub>1) / (c n - \\<^sub>0) > 0" + by (smt \0 < (\ - \\<^sub>1) * real_of_int (c n)\ \0 < real_of_int (c n) - \\<^sub>0\ + divide_pos_pos elim(4) mult_le_0_iff of_int_less_1_iff that(2)) + moreover have "(a (n+1) / a n) \ 1" + using a_pos elim(5) by auto + ultimately show ?thesis by (metis mult_cancel_left1 real_mult_le_cancel_iff2) + qed + also have "... = (B * b (n+1)) / (B * b n)" + proof - + have "B * b n = c n * a n - c (n + 1)" + using elim by auto + also have "... = a n * (c n - \\<^sub>0)" + using a_pos[rule_format,of n] unfolding \\<^sub>0_def by (auto simp:field_simps) + finally have "B * b n = a n * (c n - \\<^sub>0)" . + moreover have "B * b (n+1) = a (n+1) * (c (n+1) - \\<^sub>1)" + unfolding \\<^sub>1_def + using a_pos[rule_format,of "n+1"] + apply (subst \B * b (n + 1) = c (n + 1) * a (n + 1) - c (n + 2)\) + by (auto simp:field_simps) + ultimately show ?thesis by (simp add: mult.commute) + qed + also have "... = b (n+1) / b n" + using \B>0\ by auto + finally show ?case . + qed + qed + + have eq_2_11:"\\<^sub>F n in sequentially. b (n+1) > b n + (1 - \)^2 * a n / B" + when "\>0" "\<1" "\ (\\<^sub>F n in sequentially. c (n+1) \ c n)" for \::real + proof - + have "\\<^sub>F x in sequentially. c x < c (Suc x) " using that(3) + by (simp add:not_eventually frequently_elim1) + moreover have "\\<^sub>F x in sequentially. \c (Suc x) / a x\ < \" + using ca_vanish[unfolded tendsto_iff,rule_format, of \] \\>0\ by auto + moreover have "\\<^sub>F n in sequentially. c n > 0 \ c (n+1) > 0" + proof - + have "\\<^sub>F n in sequentially. 0 < c (Suc n)" + using c_pos by (subst eventually_sequentially_Suc) simp + then show ?thesis using c_pos by eventually_elim auto + qed + ultimately show ?thesis using ab_event abc_event bc_epsilon[OF \\>0\ \\<1\] + proof (elim frequently_rev_mp,eventually_elim) + case (elim n) + then have "c (n+1) / a n < \" + using a_pos[rule_format,of n] by auto + also have "... \ \ * c n" using elim(7) that(1) by auto + finally have "c (n+1) / a n < \ * c n" . 
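+      (* Reading of the step below: from the bound just obtained, c (n+1) / a n < ε * c n,
+         together with the positivity facts a n > 0 and c n > 0 available from elim,
+         multiplying both sides by a n / c n gives c (n+1) / c n < ε * a n, which is
+         exactly the inequality the following "then have" derives via field_simps. *)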
+ then have "c (n+1) / c n < \ * a n" + using a_pos[rule_format,of n] elim by (auto simp:field_simps) + then have "(1 - \) * a n < a n - c (n+1) / c n" + by (auto simp:algebra_simps) + then have "(1 - \)^2 * a n / B < (1 - \) * (a n - c (n+1) / c n) / B" + apply (subst (asm) real_mult_less_iff1[symmetric, of "(1-\)/B"]) + using \\<1\ \B>0\ by (auto simp:divide_simps power2_eq_square) + then have "b n + (1 - \)^2 * a n / B < b n + (1 - \) * (a n - c (n+1) / c n) / B" + using \B>0\ by auto + also have "... = b n + (1 - \) * ((c n *a n - c (n+1)) / c n) / B" + using elim by (auto simp:field_simps) + also have "... = b n + (1 - \) * (b n / c n)" + proof - + have "B * b n = c n * a n - c (n + 1)" using elim by auto + from this[symmetric] show ?thesis + using \B>0\ by simp + qed + also have "... = (1+(1-\)/c n) * b n" + by (auto simp:algebra_simps) + also have "... = ((c n+1-\)/c n) * b n" + using elim by (auto simp:divide_simps) + also have "... \ ((c (n+1) -\)/c n) * b n" + proof - + define cp where "cp = c n+1" + have "c (n+1) \ cp" unfolding cp_def using \c n < c (Suc n)\ by auto + moreover have "c n>0" "b n>0" using elim by auto + ultimately show ?thesis + apply (fold cp_def) + by (auto simp:divide_simps) + qed + also have "... < b (n+1)" + using elim by (auto simp:divide_simps) + finally show ?case . + qed + qed + + have "\\<^sub>F n in sequentially. c (n+1) \ c n" + proof (rule ccontr) + assume "\ (\\<^sub>F n in sequentially. c (n + 1) \ c n)" + from eq_2_11[OF _ _ this,of "1/2"] + have "\\<^sub>F n in sequentially. b (n+1) > b n + 1/4 * a n / B" + by (auto simp:algebra_simps power2_eq_square) + then have *:"\\<^sub>F n in sequentially. (b (n+1) - b n) / a n > 1 / (B * 4)" + apply (elim frequently_elim1) + subgoal for n + using a_pos[rule_format,of n] by (auto simp:field_simps) + done + define f where "f = (\n. (b (n+1) - b n) / a n)" + have "f \ lim f" + using convergent_LIMSEQ_iff ba_lim_exist unfolding f_def by auto + from this[unfolded tendsto_iff,rule_format, of "1 / (B*4)"] + have "\\<^sub>F x in sequentially. \f x - lim f\ < 1 / (B * 4)" + using \B>0\ by (auto simp:dist_real_def) + moreover have "\\<^sub>F n in sequentially. f n > 1 / (B * 4)" + using * unfolding f_def by auto + ultimately have "\\<^sub>F n in sequentially. f n > 1 / (B * 4) \ \f n - lim f\ < 1 / (B * 4)" + by (auto elim:frequently_eventually_frequently[rotated]) + from frequently_ex[OF this] + obtain n where "f n > 1 / (B * 4)" "\f n - lim f\ < 1 / (B * 4)" + by auto + moreover have "lim f \ 0" using ba_lim_leq unfolding f_def by auto + ultimately show False by linarith + qed + then obtain N where N_dec:"\n\N. c (n+1) \ c n" by (meson eventually_at_top_linorder) + define max_c where "max_c = (MAX n \ {..N}. c n)" + have max_c:"c n \ max_c" for n + proof (cases "n\N") + case True + then show ?thesis unfolding max_c_def by simp + next + case False + then have "n\N" by auto + then have "c n\c N" + proof (induct rule:nat_induct_at_least) + case base + then show ?case by simp + next + case (Suc n) + then have "c (n+1) \ c n" using N_dec by auto + then show ?case using \c n \ c N\ by auto + qed + moreover have "c N \ max_c" unfolding max_c_def by auto + ultimately show ?thesis by auto + qed + have "max_c > 0 " + proof - + obtain N where "\n\N. 0 < c n" + using c_pos[unfolded eventually_at_top_linorder] by auto + then have "c N > 0" by auto + then show ?thesis using max_c[of N] by simp + qed + have ba_limsup_bound:"1/(B*(B+1)) \ limsup (\n. b n/a n)" + "limsup (\n. 
b n/a n) \ max_c / B + 1 / (B+1)" + proof - + define f where "f = (\n. b n/a n)" + from tendsto_mult_right_zero[OF bac_close,of "1/B"] + have "(\n. f n - c n / B) \ 0" + unfolding f_def using \B>0\ by (auto simp:algebra_simps) + from this[unfolded tendsto_iff,rule_format,of "1/(B+1)"] + have "\\<^sub>F x in sequentially. \f x - c x / B\ < 1 / (B+1)" + using \B>0\ by auto + then have *:"\\<^sub>F n in sequentially. 1/(B*(B+1)) \ ereal (f n) \ ereal (f n) \ max_c / B + 1 / (B+1)" + using c_pos + proof eventually_elim + case (elim n) + then have "f n - c n / B < 1 / (B+1)" by auto + then have "f n < c n / B + 1 / (B+1)" by simp + also have "... \ max_c / B + 1 / (B+1)" + using max_c[of n] using \B>0\ by (auto simp:divide_simps) + finally have *:"f n < max_c / B + 1 / (B+1)" . + + have "1/(B*(B+1)) = 1/B - 1 / (B+1)" + using \B>0\ by (auto simp:divide_simps) + also have "... \ c n/B - 1 / (B+1)" + using \0 < c n\ \B>0\ by (auto,auto simp:divide_simps) + also have "... < f n" using elim by auto + finally have "1/(B*(B+1)) < f n" . + with * show ?case by simp + qed + show "limsup f \ max_c / B + 1 / (B+1)" + apply (rule Limsup_bounded) + using * by (auto elim:eventually_mono) + have "1/(B*(B+1)) \ liminf f" + apply (rule Liminf_bounded) + using * by (auto elim:eventually_mono) + also have "liminf f \ limsup f" by (simp add: Liminf_le_Limsup) + finally show "1/(B*(B+1)) \ limsup f" . + qed + + have "0 < inverse (ereal (max_c / B + 1 / (B+1)))" + using \max_c > 0\ \B>0\ + by (simp add: pos_add_strict) + also have "... \ inverse (limsup (\n. b n/a n))" + proof (rule ereal_inverse_antimono[OF _ ba_limsup_bound(2)]) + have "0<1/(B*(B+1))" using \B>0\ by auto + also have "... \ limsup (\n. b n/a n)" using ba_limsup_bound(1) . + finally show "0\limsup (\n. b n/a n)" using zero_ereal_def by auto + qed + also have "... = liminf (\n. inverse (ereal ( b n/a n)))" + apply (subst Liminf_inverse_ereal[symmetric]) + using a_pos ab_event by (auto elim!:eventually_mono simp:divide_simps) + also have "... = liminf (\n. ( a n/b n))" + apply (rule Liminf_eq) + using a_pos ab_event + apply (auto elim!:eventually_mono) + by (metis less_int_code(1)) + finally have "liminf (\n. ( a n/b n)) > 0" . + then show False using \liminf (\n. a n / b n) = 0\ by simp +qed + +end + +section\Some auxiliary results on the prime numbers. \ + +lemma nth_prime_nonzero[simp]:"nth_prime n \ 0" + by (simp add: prime_gt_0_nat prime_nth_prime) + +lemma nth_prime_gt_zero[simp]:"nth_prime n >0" + by (simp add: prime_gt_0_nat prime_nth_prime) + +lemma ratio_of_consecutive_primes: + "(\n. nth_prime (n+1)/nth_prime n) \1" +proof - + define f where "f=(\x. real (nth_prime (Suc x)) /real (nth_prime x))" + define g where "g=(\x. (real x * ln (real x)) + / (real (Suc x) * ln (real (Suc x))))" + have p_n:"(\x. real (nth_prime x) / (real x * ln (real x))) \ 1" + using nth_prime_asymptotics[unfolded asymp_equiv_def,simplified] . + moreover have p_sn:"(\n. real (nth_prime (Suc n)) + / (real (Suc n) * ln (real (Suc n)))) \ 1" + using nth_prime_asymptotics[unfolded asymp_equiv_def,simplified + ,THEN LIMSEQ_Suc] . + ultimately have "(\x. f x * g x) \ 1" + using tendsto_divide[OF p_sn p_n] + unfolding f_def g_def by (auto simp:algebra_simps) + moreover have "g \ 1" unfolding g_def + by real_asymp + ultimately have "(\x. if g x = 0 then 0 else f x) \ 1" + apply (drule_tac tendsto_divide[OF _ \g \ 1\]) + by auto + then have "f \ 1" + proof (elim filterlim_mono_eventually) + have "\\<^sub>F x in sequentially. 
(if g (x+3) = 0 then 0 + else f (x+3)) = f (x+3)" + unfolding g_def by auto + then show "\\<^sub>F x in sequentially. (if g x = 0 then 0 else f x) = f x" + apply (subst (asm) eventually_sequentially_seg) + by simp + qed auto + then show ?thesis unfolding f_def by auto +qed + +lemma nth_prime_double_sqrt_less: + assumes "\ > 0" + shows "\\<^sub>F n in sequentially. (nth_prime (2*n) - nth_prime n) + / sqrt (nth_prime n) < n powr (1/2+\)" +proof - + define pp ll where + "pp=(\n. (nth_prime (2*n) - nth_prime n) / sqrt (nth_prime n))" and + "ll=(\x::nat. x * ln x)" + have pp_pos:"pp (n+1) > 0" for n + unfolding pp_def by simp + + have "(\x. nth_prime (2 * x)) \[sequentially] (\x. (2 * x) * ln (2 * x))" + using nth_prime_asymptotics[THEN asymp_equiv_compose + ,of "(*) 2" sequentially,unfolded comp_def] + using mult_nat_left_at_top pos2 by blast + also have "... \[sequentially] (\x. 2 *x * ln x)" + by real_asymp + finally have "(\x. nth_prime (2 * x)) \[sequentially] (\x. 2 *x * ln x)" . + from this[unfolded asymp_equiv_def, THEN tendsto_mult_left,of 2] + have "(\x. nth_prime (2 * x) / (x * ln x)) \ 2" + unfolding asymp_equiv_def by auto + moreover have *:"(\x. nth_prime x / (x * ln x)) \ 1" + using nth_prime_asymptotics unfolding asymp_equiv_def by auto + ultimately + have "(\x. (nth_prime (2 * x) - nth_prime x) / ll x) \ 1" + unfolding ll_def + apply - + apply (drule (1) tendsto_diff) + apply (subst of_nat_diff,simp) + by (subst diff_divide_distrib,simp) + moreover have "(\x. sqrt (nth_prime x) / sqrt (ll x)) \ 1" + unfolding ll_def + using tendsto_real_sqrt[OF *] + by (auto simp: real_sqrt_divide) + ultimately have "(\x. pp x * (sqrt (ll x) / (ll x))) \ 1" + apply - + apply (drule (1) tendsto_divide,simp) + by (auto simp:field_simps of_nat_diff pp_def) + moreover have "\\<^sub>F x in sequentially. sqrt (ll x) / ll x = 1/sqrt (ll x)" + apply (subst eventually_sequentially_Suc[symmetric]) + by (auto intro!:eventuallyI simp:ll_def divide_simps) + ultimately have "(\x. pp x / sqrt (ll x)) \ 1" + apply (elim filterlim_mono_eventually) + by (auto elim!:eventually_mono) (metis mult.right_neutral times_divide_eq_right) + moreover have "(\x. sqrt (ll x) / x powr (1/2+\)) \ 0" + unfolding ll_def using \\>0\ by real_asymp + ultimately have "(\x. pp x / x powr (1/2+\) * + (sqrt (ll x) / sqrt (ll x))) \ 0" + apply - + apply (drule (1) tendsto_mult) + by (auto elim:filterlim_mono_eventually) + moreover have "\\<^sub>F x in sequentially. sqrt (ll x) / sqrt (ll x) = 1" + apply (subst eventually_sequentially_Suc[symmetric]) + by (auto intro!:eventuallyI simp:ll_def ) + ultimately have "(\x. pp x / x powr (1/2+\)) \ 0" + apply (elim filterlim_mono_eventually) + by (auto elim:eventually_mono) + from tendstoD[OF this, of 1,simplified] + show "\\<^sub>F x in sequentially. pp x < x powr (1 / 2 + \)" + apply (elim eventually_mono_sequentially[of _ 1]) + using pp_pos by auto +qed + + +section \Theorem 3.1\ + +text\Theorem 3.1 is an application of Theorem 2.1 with the sequences considered involving +the prime numbers.\ + +theorem theorem_3_10_Erdos_Straus: + fixes a::"nat \ int" + assumes a_pos:"\ n. a n >0" and "mono a" + and nth_1:"(\n. nth_prime n / (a n)^2) \ 0" + and nth_2:"liminf (\n. a n / nth_prime n) = 0" + shows "(\n. (nth_prime n / (\i \ n. a i))) \ \" +proof + assume asm:"(\n. (nth_prime n / (\i \ n. a i))) \ \" + + have a2_omega:"(\n. (a n)^2) \ \(\x. x * ln x)" + proof - + have "(\n. real (nth_prime n)) \ o(\n. 
real_of_int ((a n)\<^sup>2))" + apply (rule smalloI_tendsto[OF nth_1]) + using a_pos by (metis (mono_tags, lifting) less_int_code(1) + not_eventuallyD of_int_0_eq_iff zero_eq_power2) + moreover have "(\x. real (nth_prime x)) \ \(\x. real x * ln (real x))" + using nth_prime_bigtheta + by blast + ultimately show ?thesis + using landau_omega.small_big_trans smallo_imp_smallomega by blast + qed + + have a_gt_1:"\\<^sub>F n in sequentially. 1 < a n" + proof - + have "\\<^sub>F x in sequentially. \x * ln x\ \ (a x)\<^sup>2" + using a2_omega[unfolded smallomega_def,simplified,rule_format,of 1] + by auto + then have "\\<^sub>F x in sequentially. \(x+3) * ln (x+3)\ \ (a (x+3))\<^sup>2" + apply (subst (asm) eventually_sequentially_seg[symmetric, of _ 3]) + by simp + then have "\\<^sub>F n in sequentially. 1 < a ( n+3)" + proof (elim eventually_mono) + fix x + assume "\real (x + 3) * ln (real (x + 3))\ \ real_of_int ((a (x + 3))\<^sup>2)" + moreover have "\real (x + 3) * ln (real (x + 3))\ > 3" + proof - + have "ln (real (x + 3)) > 1" + apply simp using ln3_gt_1 ln_gt_1 by force + moreover have "real(x+3) \ 3" by simp + ultimately have "(x+3)*ln (real (x + 3)) > 3*1 " + apply (rule_tac mult_le_less_imp_less) + by auto + then show ?thesis by auto + qed + ultimately have "real_of_int ((a (x + 3))\<^sup>2) > 3" + by auto + then show "1 < a (x + 3)" + by (smt Suc3_eq_add_3 a_pos add.commute of_int_1 one_power2) + qed + then show ?thesis + apply (subst eventually_sequentially_seg[symmetric, of _ 3]) + by auto + qed + + obtain B::int and c where + "B>0" and Bc_large:"\\<^sub>F n in sequentially. B * nth_prime n + = c n * a n - c (n + 1) \ \c (n + 1)\ < a n / 2" + and ca_vanish: "(\n. c (Suc n) / real_of_int (a n)) \ 0" + proof - + note a_gt_1 + moreover have "(\n. real_of_int \int (nth_prime n)\ + / real_of_int (a (n - 1) * a n)) \ 0" + proof - + define f where "f=(\n. nth_prime (n+1) / (a n * a (n+1)))" + define g where "g=(\n. 2*nth_prime n / (a n)^2)" + have "\\<^sub>F x in sequentially. norm (f x) \ g x" + proof - + have "\\<^sub>F n in sequentially. nth_prime (n+1) < 2*nth_prime n" + using ratio_of_consecutive_primes[unfolded tendsto_iff + ,rule_format,of 1,simplified] + apply (elim eventually_mono) + by (auto simp :divide_simps dist_norm) + moreover have "\\<^sub>F n in sequentially. real_of_int (a n * a (n+1)) + \ (a n)^2" + apply (rule eventuallyI) + using \mono a\ by (auto simp:power2_eq_square a_pos incseq_SucD) + ultimately show ?thesis unfolding f_def g_def + apply eventually_elim + apply (subst norm_divide) + apply (rule_tac linordered_field_class.frac_le) + using a_pos[rule_format, THEN order.strict_implies_not_eq ] + by auto + qed + moreover have "g \ 0 " + using nth_1[THEN tendsto_mult_right_zero,of 2] unfolding g_def + by auto + ultimately have "f \ 0" + using Lim_null_comparison[of f g sequentially] + by auto + then show ?thesis + unfolding f_def + by (rule_tac LIMSEQ_imp_Suc) auto + qed + moreover have "(\n. real_of_int (int (nth_prime n)) + / real_of_int (prod a {..n})) \ \" + using asm by simp + ultimately have "\B>0. \c. (\\<^sub>F n in sequentially. + B * int (nth_prime n) = c n * a n - c (n + 1) \ + real_of_int \c (n + 1)\ < real_of_int (a n) / 2) \ + (\n. real_of_int (c (Suc n)) / real_of_int (a n)) \ 0" + using ab_rationality_imp[OF a_pos,of nth_prime] by fast + then show thesis + apply clarify + apply (rule_tac c=c and B=B in that) + by auto + qed + + have bac_close:"(\n. B * nth_prime n / a n - c n) \ 0" + proof - + have "\\<^sub>F n in sequentially. 
B * nth_prime n - c n * a n + c (n + 1) = 0" + using Bc_large by (auto elim!:eventually_mono) + then have "\\<^sub>F n in sequentially. (B * nth_prime n - c n * a n + c (n+1)) / a n = 0 " + apply eventually_elim + by auto + then have "\\<^sub>F n in sequentially. B * nth_prime n / a n - c n + c (n + 1) / a n = 0" + apply eventually_elim + using a_pos by (auto simp:divide_simps) (metis less_irrefl) + then have "(\n. B * nth_prime n / a n - c n + c (n + 1) / a n) \ 0" + by (simp add: eventually_mono tendsto_iff) + from tendsto_diff[OF this ca_vanish] + show ?thesis by auto + qed + + have c_pos:"\\<^sub>F n in sequentially. c n > 0" + proof - + from bac_close have *:"\\<^sub>F n in sequentially. c n \ 0" + apply (elim tendsto_of_int_diff_0) + using a_gt_1 apply (eventually_elim) + using \B>0\ by auto + show ?thesis + proof (rule ccontr) + assume "\ (\\<^sub>F n in sequentially. c n > 0)" + moreover have "\\<^sub>F n in sequentially. c (Suc n) \ 0 \ c n\0" + using * eventually_sequentially_Suc[of "\n. c n\0"] + by (metis (mono_tags, lifting) eventually_at_top_linorder le_Suc_eq) + ultimately have "\\<^sub>F n in sequentially. c n = 0 \ c (Suc n) \ 0" + using eventually_elim2 frequently_def by fastforce + moreover have "\\<^sub>F n in sequentially. nth_prime n > 0 + \ B * nth_prime n = c n * a n - c (n + 1)" + using Bc_large by eventually_elim auto + ultimately have "\\<^sub>F n in sequentially. c n = 0 \ c (Suc n) \ 0 + \ B * nth_prime n = c n * a n - c (n + 1)" + using frequently_eventually_frequently by fastforce + from frequently_ex[OF this] + obtain n where "c n = 0" "c (Suc n) \ 0" + "B * nth_prime n = c n * a n - c (n + 1)" + by auto + then have "B * nth_prime n \ 0" by auto + then show False using \B > 0\ + by (simp add: mult_le_0_iff) + qed + qed + + have B_nth_prime:"\\<^sub>F n in sequentially. nth_prime n > B" + proof - + have "\\<^sub>F x in sequentially. B+1 \ nth_prime x" + using nth_prime_at_top[unfolded filterlim_at_top_ge[where c="nat B+1"] + ,rule_format,of "nat B + 1",simplified] + + apply (elim eventually_mono) + using \B>0\ by auto + then show ?thesis + apply (elim eventually_mono) + by auto + qed + + have bc_epsilon:"\\<^sub>F n in sequentially. nth_prime (n+1) + / nth_prime n > (c (n+1) - \) / c n" when "\>0" "\<1" for \::real + proof - + have "\\<^sub>F x in sequentially. \c (Suc x) / a x\ < \ / 2" + using ca_vanish[unfolded tendsto_iff,rule_format, of "\/2"] \\>0\ by auto + moreover then have "\\<^sub>F x in sequentially. \c (x+2) / a (x+1)\ < \ / 2" + apply (subst (asm) eventually_sequentially_Suc[symmetric]) + by simp + moreover have "\\<^sub>F n in sequentially. B * nth_prime (n+1) = c (n+1) * a (n+1) - c (n + 2)" + using Bc_large + apply (subst (asm) eventually_sequentially_Suc[symmetric]) + by (auto elim:eventually_mono) + moreover have "\\<^sub>F n in sequentially. c n > 0 \ c (n+1) > 0 \ c (n+2) > 0" + proof - + have "\\<^sub>F n in sequentially. 0 < c (Suc n)" + using c_pos by (subst eventually_sequentially_Suc) simp + moreover then have "\\<^sub>F n in sequentially. 
0 < c (Suc (Suc n))" + using c_pos by (subst eventually_sequentially_Suc) simp + ultimately show ?thesis using c_pos by eventually_elim auto + qed + ultimately show ?thesis using Bc_large + proof eventually_elim + case (elim n) + define \\<^sub>0 \\<^sub>1 where "\\<^sub>0 = c (n+1) / a n" and "\\<^sub>1 = c (n+2) / a (n+1)" + have "\\<^sub>0 > 0" "\\<^sub>1 > 0" "\\<^sub>0 < \/2" "\\<^sub>1 < \/2" + using a_pos elim \mono a\ + by (auto simp add: \\<^sub>0_def \\<^sub>1_def abs_of_pos) + have "(\ - \\<^sub>1) * c n > 0" + apply (rule mult_pos_pos) + using \\\<^sub>1 > 0\ \\\<^sub>1 < \/2\ \\>0\ elim by auto + moreover have "\\<^sub>0 * (c (n+1) - \) > 0" + apply (rule mult_pos_pos[OF \\\<^sub>0 > 0\]) + using elim(4) that(2) by linarith + ultimately have "(\ - \\<^sub>1) * c n + \\<^sub>0 * (c (n+1) - \) > 0" by auto + moreover have "c n - \\<^sub>0 > 0" using \\\<^sub>0 < \ / 2\ elim(4) that(2) by linarith + moreover have "c n > 0" by (simp add: elim(4)) + ultimately have "(c (n+1) - \) / c n < (c (n+1) - \\<^sub>1) / (c n - \\<^sub>0)" + by (auto simp add:field_simps) + also have "... \ (c (n+1) - \\<^sub>1) / (c n - \\<^sub>0) * (a (n+1) / a n)" + proof - + have "(c (n+1) - \\<^sub>1) / (c n - \\<^sub>0) > 0" + by (smt \0 < (\ - \\<^sub>1) * real_of_int (c n)\ \0 < real_of_int (c n) - \\<^sub>0\ + divide_pos_pos elim(4) mult_le_0_iff of_int_less_1_iff that(2)) + moreover have "(a (n+1) / a n) \ 1" + using a_pos \mono a\ by (simp add: mono_def) + ultimately show ?thesis by (metis mult_cancel_left1 real_mult_le_cancel_iff2) + qed + also have "... = (B * nth_prime (n+1)) / (B * nth_prime n)" + proof - + have "B * nth_prime n = c n * a n - c (n + 1)" + using elim by auto + also have "... = a n * (c n - \\<^sub>0)" + using a_pos[rule_format,of n] unfolding \\<^sub>0_def by (auto simp:field_simps) + finally have "B * nth_prime n = a n * (c n - \\<^sub>0)" . + moreover have "B * nth_prime (n+1) = a (n+1) * (c (n+1) - \\<^sub>1)" + unfolding \\<^sub>1_def + using a_pos[rule_format,of "n+1"] + apply (subst \B * nth_prime (n + 1) = c (n + 1) * a (n + 1) - c (n + 2)\) + by (auto simp:field_simps) + ultimately show ?thesis by (simp add: mult.commute) + qed + also have "... = nth_prime (n+1) / nth_prime n" + using \B>0\ by auto + finally show ?case . + qed + qed + + + have c_ubound:"\x. \n. c n > x" + proof (rule ccontr) + assume " \ (\x. \n. x < c n)" + then obtain ub where "\n. c n \ ub" "ub > 0" + by (meson dual_order.trans int_one_le_iff_zero_less le_cases not_le) + define pa where "pa = (\n. nth_prime n / a n)" + have pa_pos:"\n. pa n > 0" unfolding pa_def by (simp add: a_pos) + have "liminf (\n. 1 / pa n) = 0" + using nth_2 unfolding pa_def by auto + then have "(\y\<^sub>F x in sequentially. ereal (1 / pa x) \ y)" + apply (subst less_Liminf_iff[symmetric]) + using \0 < B\ \0 < ub\ by auto + then have "\\<^sub>F x in sequentially. 1 / pa x < B/(ub+1)" + by (meson frequently_mono le_less_trans less_ereal.simps(1)) + then have "\\<^sub>F x in sequentially. B*pa x > (ub+1)" + apply (elim frequently_elim1) + by (metis \0 < ub\ mult.left_neutral of_int_0_less_iff pa_pos pos_divide_less_eq + pos_less_divide_eq times_divide_eq_left zless_add1_eq) + moreover have "\\<^sub>F x in sequentially. c x \ ub" + using \\n. c n \ ub\ by simp + ultimately have "\\<^sub>F x in sequentially. B*pa x - c x > 1" + apply (elim frequently_rev_mp eventually_mono) + by linarith + moreover have "(\n. 
B * pa n - c n) \0" + unfolding pa_def using bac_close by auto + from tendstoD[OF this,of 1] + have "\\<^sub>F n in sequentially. \B * pa n - c n\ < 1" + by auto + ultimately have "\\<^sub>F x in sequentially. B*pa x - c x > 1 \ \B * pa x - c x\ < 1" + using frequently_eventually_frequently by blast + then show False + by (simp add: frequently_def) + qed + + have eq_2_11:"\\<^sub>F n in sequentially. c (n+1)>c n \ + nth_prime (n+1) > nth_prime n + (1 - \)^2 * a n / B" + when "\>0" "\<1" for \::real + proof - + have "\\<^sub>F x in sequentially. \c (Suc x) / a x\ < \" + using ca_vanish[unfolded tendsto_iff,rule_format, of \] \\>0\ by auto + moreover have "\\<^sub>F n in sequentially. c n > 0 \ c (n+1) > 0" + proof - + have "\\<^sub>F n in sequentially. 0 < c (Suc n)" + using c_pos by (subst eventually_sequentially_Suc) simp + then show ?thesis using c_pos by eventually_elim auto + qed + ultimately show ?thesis using Bc_large bc_epsilon[OF \\>0\ \\<1\] + proof (eventually_elim, rule_tac impI) + case (elim n) + assume "c n < c (n + 1)" + have "c (n+1) / a n < \" + using a_pos[rule_format,of n] using elim(1,2) by auto + also have "... \ \ * c n" using elim(2) that(1) by auto + finally have "c (n+1) / a n < \ * c n" . + then have "c (n+1) / c n < \ * a n" + using a_pos[rule_format,of n] elim by (auto simp:field_simps) + then have "(1 - \) * a n < a n - c (n+1) / c n" + by (auto simp:algebra_simps) + then have "(1 - \)^2 * a n / B < (1 - \) * (a n - c (n+1) / c n) / B" + apply (subst (asm) real_mult_less_iff1[symmetric, of "(1-\)/B"]) + using \\<1\ \B>0\ by (auto simp:divide_simps power2_eq_square) + then have "nth_prime n + (1 - \)^2 * a n / B < nth_prime n + (1 - \) * (a n - c (n+1) / c n) / B" + using \B>0\ by auto + also have "... = nth_prime n + (1 - \) * ((c n *a n - c (n+1)) / c n) / B" + using elim by (auto simp:field_simps) + also have "... = nth_prime n + (1 - \) * (nth_prime n / c n)" + proof - + have "B * nth_prime n = c n * a n - c (n + 1)" using elim by auto + from this[symmetric] show ?thesis + using \B>0\ by simp + qed + also have "... = (1+(1-\)/c n) * nth_prime n" + by (auto simp:algebra_simps) + also have "... = ((c n+1-\)/c n) * nth_prime n" + using elim by (auto simp:divide_simps) + also have "... \ ((c (n+1) -\)/c n) * nth_prime n" + proof - + define cp where "cp = c n+1" + have "c (n+1) \ cp" unfolding cp_def using \c n < c (n + 1)\ by auto + moreover have "c n>0" "nth_prime n>0" using elim by auto + ultimately show ?thesis + apply (fold cp_def) + by (auto simp:divide_simps) + qed + also have "... < nth_prime (n+1)" + using elim by (auto simp:divide_simps) + finally show "real (nth_prime n) + (1 - \)\<^sup>2 * real_of_int (a n) + / real_of_int B < real (nth_prime (n + 1))" . + qed + qed + + have c_neq_large:"\\<^sub>F n in sequentially. c (n+1) \ c n" + proof (rule ccontr) + assume "\ (\\<^sub>F n in sequentially. c (n + 1) \ c n)" + then have that:"\\<^sub>F n in sequentially. c (n + 1) = c n" + unfolding frequently_def . + have "\\<^sub>F x in sequentially. (B * int (nth_prime x) = c x * a x - c (x + 1) + \ \real_of_int (c (x + 1))\ < real_of_int (a x) / 2) \ 0 < c x \ B < int (nth_prime x) + \ (c (x+1)>c x \ nth_prime (x+1) > nth_prime x + a x / (2* B))" + using Bc_large c_pos B_nth_prime eq_2_11[of "1-1/ sqrt 2",simplified] + by eventually_elim (auto simp:divide_simps) + then have "\\<^sub>F m in sequentially. 
nth_prime (m+1) > (1+1/(2*B))*nth_prime m" + proof (elim frequently_eventually_at_top[OF that, THEN frequently_at_top_elim]) + fix n + assume "c (n + 1) = c n \ + (\y\n. (B * int (nth_prime y) = c y * a y - c (y + 1) \ + \real_of_int (c (y + 1))\ < real_of_int (a y) / 2) \ + 0 < c y \ B < int (nth_prime y) \ (c y < c (y + 1) \ + real (nth_prime y) + real_of_int (a y) / real_of_int (2 * B) + < real (nth_prime (y + 1))))" + then have "c (n + 1) = c n" + and Bc_eq:"\y\n. B * int (nth_prime y) = c y * a y - c (y + 1) \ 0 < c y + \ \real_of_int (c (y + 1))\ < real_of_int (a y) / 2 + \ B < int (nth_prime y) + \ (c y < c (y + 1) \ + real (nth_prime y) + real_of_int (a y) / real_of_int (2 * B) + < real (nth_prime (y + 1)))" + by auto + obtain m where "n c n" "c nN. N > n \ c N > c n" + using c_ubound[rule_format, of "MAX x\{..n}. c x"] + by (metis Max_ge atMost_iff dual_order.trans finite_atMost finite_imageI image_eqI + linorder_not_le order_refl) + then obtain N where "N>n" "c N>c n" by auto + define A m where "A={m. n (m+1)\N \ c (m+1) > c n}" and "m = Min A" + have "finite A" unfolding A_def + by (metis (no_types, lifting) A_def add_leE finite_nat_set_iff_bounded_le mem_Collect_eq) + moreover have "N-1\A" unfolding A_def + using \c n < c N\ \n < N\ \c (n + 1) = c n\ + by (smt Suc_diff_Suc Suc_eq_plus1 Suc_leI Suc_pred add.commute + add_diff_inverse_nat add_leD1 diff_is_0_eq' mem_Collect_eq nat_add_left_cancel_less + zero_less_one) + ultimately have "m\A" + using Min_in unfolding m_def by auto + then have "n0" + unfolding m_def A_def by auto + moreover have "c m \ c n" + proof (rule ccontr) + assume " \ c m \ c n" + then have "m-1\A" using \m\A\ \c (n + 1) = c n\ + unfolding A_def + by auto (smt One_nat_def Suc_eq_plus1 Suc_lessI less_diff_conv) + from Min_le[OF \finite A\ this,folded m_def] \m>0\ show False by auto + qed + ultimately show ?thesis using that[of m] by auto + qed + have "(1 + 1 / (2 * B)) * nth_prime m < nth_prime m + a m / (2*B)" + proof - + have "nth_prime m < a m" + proof - + have "B * int (nth_prime m) < c m * (a m - 1)" + using Bc_eq[rule_format,of m] \c m \ c n\ \c n < c (m + 1)\ \n < m\ + by (auto simp:algebra_simps) + also have "... \ c n * (a m - 1)" + by (simp add: \c m \ c n\ a_pos mult_right_mono) + finally have "B * int (nth_prime m) < c n * (a m - 1)" . 
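+      (* Sketch of the remaining steps: the next block shows c n ≤ B, since
+         B * nth_prime n = c n * (a n - 1) makes c n a divisor of B * nth_prime n,
+         while c n is coprime to nth_prime n (the subproof shows c n < nth_prime n),
+         so c n dvd B and, as B > 0, c n ≤ B.  Combined with the strict bound just
+         obtained, B * nth_prime m < c n * (a m - 1) ≤ B * (a m - 1), which forces
+         nth_prime m < a m. *)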
+ moreover have "c n\B" + proof - + have " B * int (nth_prime n) = c n * (a n - 1)" "B < int (nth_prime n)" + and c_a:"\real_of_int (c (n + 1))\ < real_of_int (a n) / 2" + using Bc_eq[rule_format,of n] \c (n + 1) = c n\ by (auto simp:algebra_simps) + from this(1) have " c n dvd (B * int (nth_prime n))" + by simp + moreover have "coprime (c n) (int (nth_prime n))" + proof - + have "c n < int (nth_prime n)" + proof (rule ccontr) + assume "\ c n < int (nth_prime n)" + then have asm:"c n \ int (nth_prime n)" by auto + then have "a n > 2 * nth_prime n" + using c_a \c (n + 1) = c n\ by auto + then have "a n -1 \ 2 * nth_prime n" + by simp + then have "a n - 1 > 2 * B" + using \B < int (nth_prime n)\ by auto + from mult_le_less_imp_less[OF asm this] \B>0\ + have "int (nth_prime n) * (2 * B) < c n * (a n - 1)" + by auto + then show False using \B * int (nth_prime n) = c n * (a n - 1)\ + by (smt \0 < B\ \B < int (nth_prime n)\ combine_common_factor + mult.commute mult_pos_pos) + qed + then have "\ nth_prime n dvd c n" + by (simp add: Bc_eq zdvd_not_zless) + then have "coprime (int (nth_prime n)) (c n)" + by (auto intro!:prime_imp_coprime_int) + then show ?thesis using coprime_commute by blast + qed + ultimately have "c n dvd B" + using coprime_dvd_mult_left_iff by auto + then show ?thesis using \0 < B\ zdvd_imp_le by blast + qed + moreover have "c n > 0 " using Bc_eq by blast + ultimately show ?thesis + using \B>0\ by (smt a_pos mult_mono) + qed + then show ?thesis using \B>0\ by (auto simp:field_simps) + qed + also have "... < nth_prime (m+1)" + using Bc_eq[rule_format, of m] \n \c m \ c n\ \c n < c (m+1)\ + by linarith + finally show "\j>n. (1 + 1 / real_of_int (2 * B)) * real (nth_prime j) + < real (nth_prime (j + 1))" using \m>n\ by auto + qed + then have "\\<^sub>F m in sequentially. nth_prime (m+1)/nth_prime m > (1+1/(2*B))" + by (auto elim:frequently_elim1 simp:field_simps) + moreover have "\\<^sub>F m in sequentially. nth_prime (m+1)/nth_prime m < (1+1/(2*B))" + using ratio_of_consecutive_primes[unfolded tendsto_iff,rule_format,of "1/(2*B)"] + \B>0\ + unfolding dist_real_def + by (auto elim!:eventually_mono simp:algebra_simps) + ultimately show False by (simp add: eventually_mono frequently_def) + qed + + have c_gt_half:"\\<^sub>F N in sequentially. card {n\{N..<2*N}. c n > c (n+1)} > N / 2" + proof - + define h where "h=(\n. (nth_prime (2*n) - nth_prime n) + / sqrt (nth_prime n))" + have "\\<^sub>F n in sequentially. h n < n / 2" + proof - + have "\\<^sub>F n in sequentially. h n < n powr (5/6)" + using nth_prime_double_sqrt_less[of "1/3"] + unfolding h_def by auto + moreover have "\\<^sub>F n in sequentially. n powr (5/6) < (n /2)" + by real_asymp + ultimately show ?thesis + by eventually_elim auto + qed + moreover have "\\<^sub>F n in sequentially. sqrt (nth_prime n) / a n < 1 / (2*B)" + using nth_1[THEN tendsto_real_sqrt,unfolded tendsto_iff + ,rule_format,of "1/(2*B)"] \B>0\ a_pos + by (auto simp:real_sqrt_divide abs_of_pos) + ultimately have "\\<^sub>F x in sequentially. c (x+1) \ c x + \ sqrt (nth_prime x) / a x < 1 / (2*B) + \ h x < x / 2 + \ (c (x+1)>c x \ nth_prime (x+1) > nth_prime x + a x / (2* B))" + using c_neq_large B_nth_prime eq_2_11[of "1-1/ sqrt 2",simplified] + by eventually_elim (auto simp:divide_simps) + then show ?thesis + proof (elim eventually_at_top_mono) + fix N assume "N\1" and N_asm:"\y\N. 
c (y + 1) \ c y \ + sqrt (real (nth_prime y)) / real_of_int (a y) + < 1 / real_of_int (2 * B) \ h y < y / 2 \ + (c y < c (y + 1) \ + real (nth_prime y) + real_of_int (a y) / real_of_int (2 * B) + < real (nth_prime (y + 1)))" + + define S where "S={n \ {N..<2 * N}. c n < c (n + 1)}" + define g where "g=(\n. (nth_prime (n+1) - nth_prime n) + / sqrt (nth_prime n))" + define f where "f=(\n. nth_prime (n+1) - nth_prime n)" + have g_gt_1:"g n>1" when "n\N" "c n < c (n + 1)" for n + proof - + have "nth_prime n + sqrt (nth_prime n) < nth_prime (n+1)" + proof - + have "nth_prime n + sqrt (nth_prime n) < nth_prime n + a n / (2*B)" + using N_asm[rule_format,OF \n\N\] a_pos + by (auto simp:field_simps) + also have "... < nth_prime (n+1)" + using N_asm[rule_format,OF \n\N\] \c n < c (n + 1)\ by auto + finally show ?thesis . + qed + then show ?thesis unfolding g_def + using \c n < c (n + 1)\ by auto + qed + have g_geq_0:"g n \ 0" for n + unfolding g_def by auto + + have "finite S" "\x\S. x\N \ c x sum g S" + proof (induct S) + case empty + then show ?case by auto + next + case (insert x F) + moreover have "g x>1" + proof - + have "c x < c (x+1)" "x\N" using insert(4) by auto + then show ?thesis using g_gt_1 by auto + qed + ultimately show ?case by simp + qed + also have "... \ sum g {N..<2*N}" + apply (rule sum_mono2) + unfolding S_def using g_geq_0 by auto + also have "... \ sum (\n. f n/sqrt (nth_prime N)) {N..<2*N}" + apply (rule sum_mono) + unfolding f_def g_def by (auto intro!:divide_left_mono) + also have "... = sum f {N..<2*N} / sqrt (nth_prime N)" + unfolding sum_divide_distrib[symmetric] by auto + also have "... = (nth_prime (2*N) - nth_prime N) / sqrt (nth_prime N)" + proof - + have "sum f {N..<2 * N} = nth_prime (2 * N) - nth_prime N" + proof (induct N) + case 0 + then show ?case by simp + next + case (Suc N) + have ?case if "N=0" + proof - + have "sum f {Suc N..<2 * Suc N} = sum f {1}" + using that by (simp add: numeral_2_eq_2) + also have "... = nth_prime 2 - nth_prime 1" + unfolding f_def by (simp add:numeral_2_eq_2) + also have "... = nth_prime (2 * Suc N) - nth_prime (Suc N)" + using that by auto + finally show ?thesis . + qed + moreover have ?case if "N\0" + proof - + have "sum f {Suc N..<2 * Suc N} = sum f {N..<2 * Suc N} - f N" + apply (subst (2) sum.atLeast_Suc_lessThan) + using that by auto + also have "... = sum f {N..<2 * N}+ f (2*N) + f(2*N+1) - f N" + by auto + also have "... = nth_prime (2 * Suc N) - nth_prime (Suc N)" + using Suc unfolding f_def by auto + finally show ?thesis . + qed + ultimately show ?case by blast + qed + then show ?thesis by auto + qed + also have "... = h N" + unfolding h_def by auto + also have "... < N/2" + using N_asm by auto + finally have "card S < N/2" . + + define T where "T={n \ {N..<2 * N}. c n > c (n + 1)}" + have "T \ S = {N..<2 * N}" "T \ S = {}" "finite T" + unfolding T_def S_def using N_asm by fastforce+ + + then have "card T + card S = card {N..<2 * N}" + using card_Un_disjoint \finite S\ by metis + also have "... = N" + by simp + finally have "card T + card S = N" . + with \card S < N/2\ + show "card T > N/2" by linarith + qed + qed + + text\Inequality (3.5) in the original paper required a slight modification: \ + + have a_gt_plus:"\\<^sub>F n in sequentially. c n > c (n+1) \a (n+1) > a n + (a n - c(n+1) - 1) / c (n+1)" + proof - + note a_gt_1[THEN eventually_all_ge_at_top] c_pos[THEN eventually_all_ge_at_top] + moreover have "\\<^sub>F n in sequentially. 
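+  text\Reading aid only, adding no new fact (writing $a_n$, $c_n$ for the values \texttt{a n}, \texttt{c n}):
+    the modified inequality (3.5), stated as a_gt_plus above, says that for all sufficiently large $n$
+    with $c_{n+1} < c_n$ we have $a_{n+1} > a_n + (a_n - c_{n+1} - 1)/c_{n+1}$.\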
+ B * int (nth_prime (n+1)) = c (n+1) * a (n+1) - c (n + 2)" + using Bc_large + apply (subst (asm) eventually_sequentially_Suc[symmetric]) + by (auto elim:eventually_mono) + moreover have "\\<^sub>F n in sequentially. + B * int (nth_prime n) = c n * a n - c (n + 1) \ \c (n + 1)\ < a n / 2" + using Bc_large by (auto elim:eventually_mono) + ultimately show ?thesis + apply (eventually_elim) + proof (rule impI) + fix n + assume "\y\n. 1 < a y" "\y\n. 0 < c y" + and + Suc_n_eq:"B * int (nth_prime (n + 1)) = c (n + 1) * a (n + 1) - c (n + 2)" and + "B * int (nth_prime n) = c n * a n - c (n + 1) \ + real_of_int \c (n + 1)\ < real_of_int (a n) / 2" + and "c (n + 1) < c n" + then have n_eq:"B * int (nth_prime n) = c n * a n - c (n + 1)" and + c_less_a: "real_of_int \c (n + 1)\ < real_of_int (a n) / 2" + by auto + from \\y\n. 1 < a y\ \\y\n. 0 < c y\ + have *:"a n>1" "a (n+1) > 1" "c n > 0" + "c (n+1) > 0" "c (n+2) > 0" + by auto + then have "(1+1/c (n+1))* (a n - 1)/a (n+1) = (c (n+1)+1) * ((a n - 1) / (c (n+1) * a (n+1)))" + by (auto simp:field_simps) + also have "... \ c n * ((a n - 1) / (c (n+1) * a (n+1)))" + apply (rule mult_right_mono) + subgoal using \c (n + 1) < c n\ by auto + subgoal by (smt \0 < c (n + 1)\ a_pos divide_nonneg_pos mult_pos_pos of_int_0_le_iff + of_int_0_less_iff) + done + also have "... = (c n * (a n - 1)) / (c (n+1) * a (n+1))" by auto + also have "... < (c n * (a n - 1)) / (c (n+1) * a (n+1) - c (n+2))" + apply (rule divide_strict_left_mono) + subgoal using \c (n+2) > 0\ by auto + unfolding Suc_n_eq[symmetric] using * \B>0\ by auto + also have "... < (c n * a n - c (n+1)) / (c (n+1) * a (n+1) - c (n+2))" + apply (rule frac_less) + unfolding Suc_n_eq[symmetric] using * \B>0\ \c (n + 1) < c n\ + by (auto simp:algebra_simps) + also have "... = nth_prime n / nth_prime (n+1)" + unfolding Suc_n_eq[symmetric] n_eq[symmetric] using \B>0\ by auto + also have "... < 1" by auto + finally have "(1 + 1 / real_of_int (c (n + 1))) * real_of_int (a n - 1) + / real_of_int (a (n + 1)) < 1 " . + then show "a n + (a n - c (n + 1) - 1) / (c (n + 1)) < (a (n + 1))" + using * by (auto simp:field_simps) + qed + qed + have a_gt_1:"\\<^sub>F n in sequentially. c n > c (n+1) \ a (n+1) > a n + 1" + using Bc_large a_gt_plus c_pos[THEN eventually_all_ge_at_top] + apply eventually_elim + proof (rule impI) + fix n assume + "c (n + 1) < c n \ a n + (a n - c (n + 1) - 1) / c (n + 1) < a (n + 1)" + "c (n + 1) < c n" and B_eq:"B * int (nth_prime n) = c n * a n - c (n + 1) \ + \real_of_int (c (n + 1))\ < real_of_int (a n) / 2" and c_pos:"\y\n. 0 < c y" + from this(1,2) + have "a n + (a n - c (n + 1) - 1) / c (n + 1) < a (n + 1)" by auto + moreover have "a n - 2 * c (n+1) > 0" + using B_eq c_pos[rule_format,of "n+1"] by auto + then have "a n - 2 * c (n+1) \ 1" by simp + then have "(a n - c (n + 1) - 1) / c (n + 1) \ 1" + using c_pos[rule_format,of "n+1"] by (auto simp:field_simps) + ultimately show "a n + 1 < a (n + 1)" by auto + qed + + text\The following corresponds to inequality (3.6) in the paper, which had to be + slightly corrected: \ + + have a_gt_sqrt:"\\<^sub>F n in sequentially. c n > c (n+1) \ a (n+1) > a n + (sqrt n - 2)" + proof - + have a_2N:"\\<^sub>F N in sequentially. a (2*N) \ N /2 +1" + using c_gt_half a_gt_1[THEN eventually_all_ge_at_top] + proof eventually_elim + case (elim N) + define S where "S={n \ {N..<2 * N}. c (n + 1) < c n}" + define f where "f = (\n. a (Suc n) - a n)" + + have f_1:"\x\S. f x\1" and f_0:"\x. 
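+  text\Again only an informal gloss of a_gt_sqrt above, no new claim: for all sufficiently large $n$
+    with $c_{n+1} < c_n$, the corrected form of inequality (3.6) gives $a_{n+1} > a_n + (\sqrt{n} - 2)$.\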
f x\0" + subgoal using elim unfolding S_def f_def by auto + subgoal using \mono a\[THEN incseq_SucD] unfolding f_def by auto + done + have "N / 2 < card S" + using elim unfolding S_def by auto + also have "... \ sum f S" + unfolding of_int_sum + apply (rule sum_bounded_below[of _ 1,simplified]) + using f_1 by auto + also have "... \ sum f {N..<2 * N}" + unfolding of_int_sum + apply (rule sum_mono2) + unfolding S_def using f_0 by auto + also have "... = a (2*N) - a N" + unfolding of_int_sum f_def of_int_diff + apply (rule sum_Suc_diff') + by auto + finally have "N / 2 < a (2*N) - a N" . + then show ?case using a_pos[rule_format,of N] by linarith + qed + + have a_n4:"\\<^sub>F n in sequentially. a n > n/4" + proof - + obtain N where a_N:"\n\N. a (2*n) \ n /2+1" + using a_2N unfolding eventually_at_top_linorder by auto + have "a n>n/4" when "n\2*N" for n + proof - + define n' where "n'=n div 2" + have "n'\N" unfolding n'_def using that by auto + have "n/4 < n' /2+1" + unfolding n'_def by auto + also have "... \ a (2*n')" + using a_N \n'\N\ by auto + also have "... \a n" unfolding n'_def + apply (cases "even n") + subgoal by simp + subgoal by (simp add: assms(2) incseqD) + done + finally show ?thesis . + qed + then show ?thesis + unfolding eventually_at_top_linorder by auto + qed + + have c_sqrt:"\\<^sub>F n in sequentially. c n < sqrt n / 4" + proof - + have "\\<^sub>F x in sequentially. x>1" by simp + moreover have "\\<^sub>F x in sequentially. real (nth_prime x) / (real x * ln (real x)) < 2" + using nth_prime_asymptotics[unfolded asymp_equiv_def,THEN order_tendstoD(2),of 2] + by simp + ultimately have "\\<^sub>F n in sequentially. c n < B*8 *ln n + 1" using a_n4 Bc_large + proof eventually_elim + case (elim n) + from this(4) have "c n=(B*nth_prime n+c (n+1))/a n" + using a_pos[rule_format,of n] + by (auto simp:divide_simps) + also have "... = (B*nth_prime n)/a n+c (n+1)/a n" + by (auto simp:divide_simps) + also have "... < (B*nth_prime n)/a n + 1" + proof - + have "c (n+1)/a n < 1" using elim(4) by auto + then show ?thesis by auto + qed + also have "... < B*8 * ln n + 1" + proof - + have "B*nth_prime n < 2*B*n*ln n" + using \real (nth_prime n) / (real n * ln (real n)) < 2\ \B>0\ \ 1 < n\ + by (auto simp:divide_simps) + moreover have "real n / 4 < real_of_int (a n)" by fact + ultimately have "(B*nth_prime n) / a n < (2*B*n*ln n) / (n/4)" + apply (rule_tac frac_less) + using \B>0\ \ 1 < n\ by auto + also have "... = B*8 * ln n" + using \ 1 < n\ by auto + finally show ?thesis by auto + qed + finally show ?case . + qed + moreover have "\\<^sub>F n in sequentially. B*8 *ln n + 1 < sqrt n / 4" + by real_asymp + ultimately show ?thesis + by eventually_elim auto + qed + + have + "\\<^sub>F n in sequentially. 0 < c (n+1)" + "\\<^sub>F n in sequentially. c (n+1) < sqrt (n+1) / 4" + "\\<^sub>F n in sequentially. n > 4" + "\\<^sub>F n in sequentially. 
(n - 4) / sqrt (n + 1) + 1 > sqrt n" + subgoal using c_pos[THEN eventually_all_ge_at_top] + by eventually_elim auto + subgoal using c_sqrt[THEN eventually_all_ge_at_top] + by eventually_elim (use le_add1 in blast) + subgoal by simp + subgoal + by real_asymp + done + then show ?thesis using a_gt_plus a_n4 + apply eventually_elim + proof (rule impI) + fix n assume asm:"0 < c (n + 1)" "c (n + 1) < sqrt (real (n + 1)) / 4" and + a_ineq:"c (n + 1) < c n \ a n + (a n - c (n + 1) - 1) / c (n + 1) < a (n + 1)" + "c (n + 1) < c n" and "n / 4 < a n" "n > 4" + and n_neq:" sqrt (real n) < real (n - 4) / sqrt (real (n + 1)) + 1" + + have "(n-4) / sqrt(n+1) = (n/4 - 1)/ (sqrt (real (n + 1)) / 4)" + using \n>4\ by (auto simp:divide_simps) + also have "... < (a n - 1) / c (n + 1)" + apply (rule frac_less) + using \n > 4\ \n / 4 < a n\ \0 < c (n + 1)\ \c (n + 1) < sqrt (real (n + 1)) / 4\ + by auto + also have "... - 1 = (a n - c (n + 1) - 1) / c (n + 1)" + using \0 < c (n + 1)\ by (auto simp:field_simps) + also have "a n + ... < a (n+1)" + using a_ineq by auto + finally have "a n + ((n - 4) / sqrt (n + 1) - 1) < a (n + 1)" by simp + moreover have "(n - 4) / sqrt (n + 1) - 1 > sqrt n - 2" + using n_neq[THEN diff_strict_right_mono,of 2] \n>4\ + by (auto simp:algebra_simps of_nat_diff) + ultimately show "real_of_int (a n) + (sqrt (real n) - 2) < real_of_int (a (n + 1))" + by argo + qed + qed + + text\The following corresponds to inequality $ a_{2N} > N^{3/2}/2$ in the paper, + which had to be slightly corrected: \ + + have a_2N_sqrt:"\\<^sub>F N in sequentially. a (2*N) > real N * (sqrt (real N)/2 - 1)" + using c_gt_half a_gt_sqrt[THEN eventually_all_ge_at_top] eventually_gt_at_top[of 4] + proof eventually_elim + case (elim N) + define S where "S={n \ {N..<2 * N}. c (n + 1) < c n}" + define f where "f = (\n. a (Suc n) - a n)" + + have f_N:"\x\S. f x\sqrt N - 2" + proof + fix x assume "x\S" + then have "sqrt (real x) - 2 < f x" "x\N" + using elim unfolding S_def f_def by auto + moreover have "sqrt x - 2 \ sqrt N - 2" + using \x\N\ by simp + ultimately show "sqrt (real N) - 2 \ real_of_int (f x)" by argo + qed + have f_0:"\x. f x\0" + using \mono a\[THEN incseq_SucD] unfolding f_def by auto + + have "(N / 2) * (sqrt N - 2) < card S * (sqrt N - 2)" + apply (rule mult_strict_right_mono) + subgoal using elim unfolding S_def by auto + subgoal using \N>4\ + by (metis diff_gt_0_iff_gt numeral_less_real_of_nat_iff real_sqrt_four real_sqrt_less_iff) + done + also have "... \ sum f S" + unfolding of_int_sum + apply (rule sum_bounded_below) + using f_N by auto + also have "... \ sum f {N..<2 * N}" + unfolding of_int_sum + apply (rule sum_mono2) + unfolding S_def using f_0 by auto + also have "... = a (2*N) - a N" + unfolding of_int_sum f_def of_int_diff + apply (rule sum_Suc_diff') + by auto + finally have "real N / 2 * (sqrt (real N) - 2) < real_of_int (a (2 * N) - a N)" + . + then have "real N / 2 * (sqrt (real N) - 2) < a (2 * N)" + using a_pos[rule_format,of N] by linarith + then show ?case by (auto simp:field_simps) + qed + + text\The following part is required to derive the final contradiction of the proof.\ + + have a_n_sqrt:"\\<^sub>F n in sequentially. a n > (((n-1)/2) powr (3/2) - (n-1)) /2" + proof (rule sequentially_even_odd_imp) + define f where "f=(\N. ((real (2 * N - 1) / 2) powr (3 / 2) - real (2 * N - 1)) / 2)" + define g where "g=(\N. real N * (sqrt (real N) / 2 - 1))" + have "\\<^sub>F N in sequentially. g N > f N" + unfolding f_def g_def + by real_asymp + moreover have "\\<^sub>F N in sequentially. 
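+  text\For orientation (an informal calculation only): the corrected bound a_2N_sqrt above reads
+    $a_{2N} > N(\sqrt{N}/2 - 1) = N^{3/2}/2 - N$, i.e.\ the paper's bound $N^{3/2}/2$ weakened by an
+    additive $N$; the remainder of the proof shows that this weaker bound still suffices for the
+    final contradiction.\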
a (2 * N) > g N" + unfolding g_def using a_2N_sqrt . + ultimately show "\\<^sub>F N in sequentially. f N < a (2 * N)" + by eventually_elim auto + next + define f where "f=(\N. ((real (2 * N + 1 - 1) / 2) powr (3 / 2) + - real (2 * N + 1 - 1)) / 2)" + define g where "g=(\N. real N * (sqrt (real N) / 2 - 1))" + have "\\<^sub>F N in sequentially. g N = f N" + using eventually_gt_at_top[of 0] + apply eventually_elim + unfolding f_def g_def + by (auto simp:algebra_simps powr_half_sqrt[symmetric] powr_mult_base) + moreover have "\\<^sub>F N in sequentially. a (2 * N) > g N" + unfolding g_def using a_2N_sqrt . + moreover have "\\<^sub>F N in sequentially. a (2 * N + 1) \ a (2*N)" + apply (rule eventuallyI) + using \mono a\ by (simp add: incseqD) + ultimately show "\\<^sub>F N in sequentially. f N < (a (2 * N + 1))" + apply eventually_elim + by auto + qed + + have a_nth_prime_gt:"\\<^sub>F n in sequentially. a n / nth_prime n > 1" + proof - + define f where "f=(\n::nat. (((n-1)/2) powr (3/2) - (n-1)) /2)" + have "\\<^sub>F x in sequentially. real (nth_prime x) / (real x * ln (real x)) < 2" + using nth_prime_asymptotics[unfolded asymp_equiv_def,THEN order_tendstoD(2),of 2] + by simp + from this[] eventually_gt_at_top[of 1] + have "\\<^sub>F n in sequentially. real (nth_prime n) < 2*(real n * ln n)" + apply eventually_elim + by (auto simp:field_simps) + moreover have *:"\\<^sub>F N in sequentially. f N >0 " + unfolding f_def + by real_asymp + moreover have " \\<^sub>F n in sequentially. f n < a n" + using a_n_sqrt unfolding f_def . + ultimately have "\\<^sub>F n in sequentially. a n / nth_prime n + > f n / (2*(real n * ln n))" + apply eventually_elim + apply (rule frac_less2) + by auto + moreover have "\\<^sub>F n in sequentially. + (f n)/ (2*(real n * ln n)) > 1" + unfolding f_def + by real_asymp + ultimately show ?thesis + by eventually_elim argo + qed + + have a_nth_prime_lt:"\\<^sub>F n in sequentially. a n / nth_prime n < 1" + proof - + have "liminf (\x. a x / nth_prime x) < 1" + using nth_2 by auto + from this[unfolded less_Liminf_iff] + show ?thesis + apply (auto elim!:frequently_elim1) + by (meson divide_less_eq_1 ereal_less_eq(7) leD leI + nth_prime_nonzero of_nat_eq_0_iff of_nat_less_0_iff order.trans) + qed + + from a_nth_prime_gt a_nth_prime_lt show False + by (simp add: eventually_mono frequently_def) +qed + +section\Acknowledgements\ + +text\A.K.-A. and W.L. 
were supported by the ERC Advanced Grant ALEXANDRIA (Project 742178) + funded by the European Research Council and led by Professor Lawrence Paulson + at the University of Cambridge, UK.\ + +end \ No newline at end of file diff --git a/thys/Irrational_Series_Erdos_Straus/ROOT b/thys/Irrational_Series_Erdos_Straus/ROOT new file mode 100644 --- /dev/null +++ b/thys/Irrational_Series_Erdos_Straus/ROOT @@ -0,0 +1,13 @@ +chapter AFP + +session Irrational_Series_Erdos_Straus (AFP) = "HOL-Analysis" + + options [timeout = 1200] + sessions + Prime_Number_Theorem + Prime_Distribution_Elementary + theories + Irrational_Series_Erdos_Straus + document_files + "root.tex" + "root.bib" + diff --git a/thys/Irrational_Series_Erdos_Straus/document/root.bib b/thys/Irrational_Series_Erdos_Straus/document/root.bib new file mode 100644 --- /dev/null +++ b/thys/Irrational_Series_Erdos_Straus/document/root.bib @@ -0,0 +1,11 @@ +@article{erdHos1974irrationality, + title={On the irrationality of certain series}, + author={Erd{\H{o}}s, Paul and Straus, Ernst}, + journal={Pacific journal of mathematics}, + volume={55}, + number={1}, + pages={85--92}, + year={1974}, + publisher={Mathematical Sciences Publishers} +} + diff --git a/thys/Irrational_Series_Erdos_Straus/document/root.tex b/thys/Irrational_Series_Erdos_Straus/document/root.tex new file mode 100644 --- /dev/null +++ b/thys/Irrational_Series_Erdos_Straus/document/root.tex @@ -0,0 +1,40 @@ +\documentclass[11pt,a4paper]{article} +\usepackage{isabelle,isabellesym} +\usepackage{amsfonts, amsmath, amssymb} + +% this should be the last package used +\usepackage{pdfsetup} + +% urls in roman style, theory text in math-similar italics +\urlstyle{rm} +\isabellestyle{it} + +\begin{document} + +\title{Irrationality Criteria for Series by Erd\H{o}s and Straus} +\author{Angeliki Koutsoukou-Argyraki and Wenda Li} +\maketitle + +\begin{abstract} +We formalise certain irrationality criteria for infinite series of the form: +\[ +\sum_n\frac{b_n}{\prod_{i \leq n} a_i} +\] +where $b_n$, $a_i$ are integers. The result is due to P. Erd\H{o}s and E.G. Straus \cite{erdHos1974irrationality}, and in particular we formalise Theorem 2.1, Corollary 2.10 and Theorem 3.1. The latter is an application of Theorem 2.1 involving the prime numbers. +\end{abstract} + + +\tableofcontents + +\input{session} + +\nocite{apostol1976analytic} +\bibliographystyle{abbrv} +\bibliography{root} + +\end{document} + +%%% Local Variables: +%%% mode: latex +%%% TeX-master: t +%%% End: diff --git a/thys/LTL_Normal_Form/Normal_Form.thy b/thys/LTL_Normal_Form/Normal_Form.thy new file mode 100644 --- /dev/null +++ b/thys/LTL_Normal_Form/Normal_Form.thy @@ -0,0 +1,423 @@ +(* + Author: Salomon Sickert + License: BSD +*) + +section \A Normal Form for Linear Temporal Logic\ + +theory Normal_Form imports + LTL_Master_Theorem.Master_Theorem +begin + +subsection \LTL Equivalences\ + +text \Several valid laws of LTL relating strong and weak operators that are useful later.\ + +lemma ltln_strong_weak_2: + "w \\<^sub>n \ U\<^sub>n \ \ w \\<^sub>n (\ and\<^sub>n F\<^sub>n \) W\<^sub>n \" (is "?thesis1") + "w \\<^sub>n \ M\<^sub>n \ \ w \\<^sub>n \ R\<^sub>n (\ and\<^sub>n F\<^sub>n \)" (is "?thesis2") +proof - + have "\j. suffix (i + j) w \\<^sub>n \" + if "suffix j w \\<^sub>n \" and "\j\i. \ suffix j w \\<^sub>n \" for i j + proof + from that have "j > i" + by (cases "j > i") auto + thus "suffix (i + (j - i)) w \\<^sub>n \" + using that by auto + qed + thus ?thesis1 + unfolding ltln_strong_weak by auto +next + have "\j. 
suffix (i + j) w \\<^sub>n \" + if "suffix j w \\<^sub>n \" and "\j suffix j w \\<^sub>n \" for i j + proof + from that have "j \ i" + by (cases "j \ i") auto + thus "suffix (i + (j - i)) w \\<^sub>n \" + using that by auto + qed + thus ?thesis2 + unfolding ltln_strong_weak by auto +qed + +lemma ltln_weak_strong_2: + "w \\<^sub>n \ W\<^sub>n \ \ w \\<^sub>n \ U\<^sub>n (\ or\<^sub>n G\<^sub>n \)" (is "?thesis1") + "w \\<^sub>n \ R\<^sub>n \ \ w \\<^sub>n (\ or\<^sub>n G\<^sub>n \) M\<^sub>n \" (is "?thesis2") +proof - + have "suffix j w \\<^sub>n \" + if "\j. j < i \ suffix j w \\<^sub>n \" and "\j. suffix (i + j) w \\<^sub>n \" for i j + using that(1)[of j] that(2)[of "j - i"] by (cases "j < i") simp_all + thus ?thesis1 + unfolding ltln_weak_strong unfolding semantics_ltln.simps suffix_suffix by blast +next + have "suffix j w \\<^sub>n \" + if "\j. j \ i \ suffix j w \\<^sub>n \" and "\j. suffix (i + j) w \\<^sub>n \" for i j + using that(1)[of j] that(2)[of "j - i"] by (cases "j \ i") simp_all + thus ?thesis2 + unfolding ltln_weak_strong unfolding semantics_ltln.simps suffix_suffix by blast +qed + +subsection \$\evalnu{\psi}{M}$, $\evalmu{\psi}{N}$, $\flatten{\psi}{M}$, and $\flattentwo{\psi}{N}$\ + +text \The following four functions use "promise sets", named $M$ or $N$, to rewrite arbitrary + formulas into formulas from the class $\Sigma_1$-, $\Sigma_2$-, $\Pi_1$-, and $\Pi_2$, + respectively. In general the obtained formulas are not equivalent, but under some conditions + (as outlined below) they are.\ + +no_notation FG_advice ("_[_]\<^sub>\" [90,60] 89) +no_notation GF_advice ("_[_]\<^sub>\" [90,60] 89) + +notation FG_advice ("_[_]\<^sub>\\<^sub>1" [90,60] 89) +notation GF_advice ("_[_]\<^sub>\\<^sub>1" [90,60] 89) + +fun flatten_sigma_2:: "'a ltln \ 'a ltln set \ 'a ltln" ("_[_]\<^sub>\\<^sub>2") +where + "(\ U\<^sub>n \)[M]\<^sub>\\<^sub>2 = (\[M]\<^sub>\\<^sub>2) U\<^sub>n (\[M]\<^sub>\\<^sub>2)" +| "(\ W\<^sub>n \)[M]\<^sub>\\<^sub>2 = (\[M]\<^sub>\\<^sub>2) U\<^sub>n ((\[M]\<^sub>\\<^sub>2) or\<^sub>n (G\<^sub>n \[M]\<^sub>\\<^sub>1))" +| "(\ M\<^sub>n \)[M]\<^sub>\\<^sub>2 = (\[M]\<^sub>\\<^sub>2) M\<^sub>n (\[M]\<^sub>\\<^sub>2)" +| "(\ R\<^sub>n \)[M]\<^sub>\\<^sub>2 = ((\[M]\<^sub>\\<^sub>2) or\<^sub>n (G\<^sub>n \[M]\<^sub>\\<^sub>1)) M\<^sub>n (\[M]\<^sub>\\<^sub>2)" +| "(\ and\<^sub>n \)[M]\<^sub>\\<^sub>2 = (\[M]\<^sub>\\<^sub>2) and\<^sub>n (\[M]\<^sub>\\<^sub>2)" +| "(\ or\<^sub>n \)[M]\<^sub>\\<^sub>2 = (\[M]\<^sub>\\<^sub>2) or\<^sub>n (\[M]\<^sub>\\<^sub>2)" +| "(X\<^sub>n \)[M]\<^sub>\\<^sub>2 = X\<^sub>n (\[M]\<^sub>\\<^sub>2)" +| "\[M]\<^sub>\\<^sub>2 = \" + +fun flatten_pi_2 :: "'a ltln \ 'a ltln set \ 'a ltln" ("_[_]\<^sub>\\<^sub>2") +where + "(\ W\<^sub>n \)[N]\<^sub>\\<^sub>2 = (\[N]\<^sub>\\<^sub>2) W\<^sub>n (\[N]\<^sub>\\<^sub>2)" +| "(\ U\<^sub>n \)[N]\<^sub>\\<^sub>2 = (\[N]\<^sub>\\<^sub>2 and\<^sub>n (F\<^sub>n \[N]\<^sub>\\<^sub>1)) W\<^sub>n (\[N]\<^sub>\\<^sub>2)" +| "(\ R\<^sub>n \)[N]\<^sub>\\<^sub>2 = (\[N]\<^sub>\\<^sub>2) R\<^sub>n (\[N]\<^sub>\\<^sub>2)" +| "(\ M\<^sub>n \)[N]\<^sub>\\<^sub>2 = (\[N]\<^sub>\\<^sub>2) R\<^sub>n ((\[N]\<^sub>\\<^sub>2) and\<^sub>n (F\<^sub>n \[N]\<^sub>\\<^sub>1))" +| "(\ and\<^sub>n \)[N]\<^sub>\\<^sub>2 = (\[N]\<^sub>\\<^sub>2) and\<^sub>n (\[N]\<^sub>\\<^sub>2)" +| "(\ or\<^sub>n \)[N]\<^sub>\\<^sub>2 = (\[N]\<^sub>\\<^sub>2) or\<^sub>n (\[N]\<^sub>\\<^sub>2)" +| "(X\<^sub>n \)[N]\<^sub>\\<^sub>2 = X\<^sub>n (\[N]\<^sub>\\<^sub>2)" +| "\[N]\<^sub>\\<^sub>2 = \" + +lemma GF_advice_restriction: + "\[\\ (\ 
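+text\A small worked instance, purely illustrative and assuming that $a$ and $b$ are atomic
+  propositions, which the flattening functions leave unchanged: for any promise set $M$, the
+  $\Sigma_2$-flattening of $a \mathbin{W} b$ is $a \mathbin{U} (b \vee G\,a)$, which by
+  ltln_weak_strong_2 above happens to be equivalent to $a \mathbin{W} b$ itself; in general the
+  flattenings are equivalent to the original formula only under the conditions established below.\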
W\<^sub>n \) w]\<^sub>\\<^sub>1 = \[\\ \ w]\<^sub>\\<^sub>1" + "\[\\ (\ R\<^sub>n \) w]\<^sub>\\<^sub>1 = \[\\ \ w]\<^sub>\\<^sub>1" + by (metis (no_types, lifting) \\_semantics' inf_commute inf_left_commute inf_sup_absorb subformulas\<^sub>\.simps(6) GF_advice_inter_subformulas) + (metis (no_types, lifting) GF_advice_inter \\.simps(5) \\_semantics' \\_subformulas\<^sub>\ inf.commute sup.boundedE) + +lemma FG_advice_restriction: + "\[\\ (\ U\<^sub>n \) w]\<^sub>\\<^sub>1 = \[\\ \ w]\<^sub>\\<^sub>1" + "\[\\ (\ M\<^sub>n \) w]\<^sub>\\<^sub>1 = \[\\ \ w]\<^sub>\\<^sub>1" + by (metis (no_types, lifting) FG_advice_inter \\.simps(4) \\_semantics' \\_subformulas\<^sub>\ inf.commute sup.boundedE) + (metis (no_types, lifting) FG_advice_inter \\.simps(7) \\_semantics' \\_subformulas\<^sub>\ inf.right_idem inf_commute sup.cobounded1) + +lemma flatten_sigma_2_intersection: + "M \ subformulas\<^sub>\ \ \ S \ \[M \ S]\<^sub>\\<^sub>2 = \[M]\<^sub>\\<^sub>2" + by (induction \) (simp; blast intro: GF_advice_inter)+ + +lemma flatten_sigma_2_intersection_eq: + "M \ subformulas\<^sub>\ \ = M' \ \[M']\<^sub>\\<^sub>2 = \[M]\<^sub>\\<^sub>2" + using flatten_sigma_2_intersection by auto + +lemma flatten_sigma_2_monotone: + "w \\<^sub>n \[M]\<^sub>\\<^sub>2 \ M \ M' \ w \\<^sub>n \[M']\<^sub>\\<^sub>2" + by (induction \ arbitrary: w) + (simp; blast dest: GF_advice_monotone)+ + +lemma flatten_pi_2_intersection: + "N \ subformulas\<^sub>\ \ \ S \ \[N \ S]\<^sub>\\<^sub>2 = \[N]\<^sub>\\<^sub>2" + by (induction \) (simp; blast intro: FG_advice_inter)+ + +lemma flatten_pi_2_intersection_eq: + "N \ subformulas\<^sub>\ \ = N' \ \[N']\<^sub>\\<^sub>2 = \[N]\<^sub>\\<^sub>2" + using flatten_pi_2_intersection by auto + +lemma flatten_pi_2_monotone: + "w \\<^sub>n \[N]\<^sub>\\<^sub>2 \ N \ N' \ w \\<^sub>n \[N']\<^sub>\\<^sub>2" + by (induction \ arbitrary: w) + (simp; blast dest: FG_advice_monotone)+ + +lemma ltln_weak_strong_stable_words_1: + "w \\<^sub>n (\ W\<^sub>n \) \ w \\<^sub>n \ U\<^sub>n (\ or\<^sub>n (G\<^sub>n \[\\ \ w]\<^sub>\\<^sub>1))" (is "?lhs \ ?rhs") +proof + assume ?lhs + + moreover + + { + assume assm: "w \\<^sub>n G\<^sub>n \" + moreover + obtain i where "\j. \ \ (suffix i w) \ \\ \ w" + by (metis MOST_nat_le \\_suffix \_stable_def order_refl suffix_\_stable) + hence "\j. \ \ (suffix i (suffix j w)) \ \\ \ w" + by (metis \_suffix \\_\_subset \\_suffix semiring_normalization_rules(24) subset_Un_eq suffix_suffix sup.orderE) + ultimately + have "suffix i w \\<^sub>n G\<^sub>n (\[\\ \ w]\<^sub>\\<^sub>1)" + using GF_advice_a1[OF \\j. \ \ (suffix i (suffix j w)) \ \\ \ w\] + by (simp add: add.commute) + hence "?rhs" + using assm by auto + } + + moreover + + have "w \\<^sub>n \ U\<^sub>n \ \ ?rhs" + by auto + + ultimately + + show ?rhs + using ltln_weak_to_strong(1) by blast +next + assume ?rhs + thus ?lhs + unfolding ltln_weak_strong_2 unfolding semantics_ltln.simps + by (metis \\_suffix order_refl GF_advice_a2) +qed + +lemma ltln_weak_strong_stable_words_2: + "w \\<^sub>n (\ R\<^sub>n \) \ w \\<^sub>n (\ or\<^sub>n (G\<^sub>n \[\\ \ w]\<^sub>\\<^sub>1)) M\<^sub>n \" (is "?lhs \ ?rhs") +proof + assume ?lhs + + moreover + + { + assume assm: "w \\<^sub>n G\<^sub>n \" + moreover + obtain i where "\j. \ \ (suffix i w) \ \\ \ w" + by (metis MOST_nat_le \\_suffix \_stable_def order_refl suffix_\_stable) + hence "\j. 
\ \ (suffix i (suffix j w)) \ \\ \ w" + by (metis \_suffix \\_\_subset \\_suffix semiring_normalization_rules(24) subset_Un_eq suffix_suffix sup.orderE) + ultimately + have "suffix i w \\<^sub>n G\<^sub>n (\[\\ \ w]\<^sub>\\<^sub>1)" + using GF_advice_a1[OF \\j. \ \ (suffix i (suffix j w)) \ \\ \ w\] + by (simp add: add.commute) + hence "?rhs" + using assm by auto + } + + moreover + + have "w \\<^sub>n \ M\<^sub>n \ \ ?rhs" + by auto + + ultimately + + show ?rhs + using ltln_weak_to_strong by blast +next + assume ?rhs + thus ?lhs + unfolding ltln_weak_strong_2 unfolding semantics_ltln.simps + by (metis GF_advice_a2 \\_suffix order_refl) +qed + +lemma ltln_weak_strong_stable_words: + "w \\<^sub>n (\ W\<^sub>n \) \ w \\<^sub>n \ U\<^sub>n (\ or\<^sub>n (G\<^sub>n \[\\ (\ W\<^sub>n \) w]\<^sub>\\<^sub>1))" + "w \\<^sub>n (\ R\<^sub>n \) \ w \\<^sub>n (\ or\<^sub>n (G\<^sub>n \[\\ (\ R\<^sub>n \) w]\<^sub>\\<^sub>1)) M\<^sub>n \" + unfolding ltln_weak_strong_stable_words_1 ltln_weak_strong_stable_words_2 GF_advice_restriction by simp+ + +lemma flatten_sigma_2_IH_lifting: + assumes "\ \ subfrmlsn \" + assumes "suffix i w \\<^sub>n \[\\ \ (suffix i w)]\<^sub>\\<^sub>2 = suffix i w \\<^sub>n \" + shows "suffix i w \\<^sub>n \[\\ \ w]\<^sub>\\<^sub>2 = suffix i w \\<^sub>n \" + by (metis (no_types, lifting) inf.absorb_iff2 inf_assoc inf_commute assms(2) \\_suffix flatten_sigma_2_intersection_eq[of "\\ \ w" \ "\\ \ w"] \\_semantics' subformulas\<^sub>\_subset[OF assms(1)]) + +lemma flatten_sigma_2_correct: + "w \\<^sub>n \[\\ \ w]\<^sub>\\<^sub>2 \ w \\<^sub>n \" +proof (induction \ arbitrary: w) + case (And_ltln \1 \2) + then show ?case + using flatten_sigma_2_IH_lifting[of _ "\1 and\<^sub>n \2" 0] by simp +next + case (Or_ltln \1 \2) + then show ?case + using flatten_sigma_2_IH_lifting[of _ "\1 or\<^sub>n \2" 0] by simp +next + case (Next_ltln \) + then show ?case + using flatten_sigma_2_IH_lifting[of _ "X\<^sub>n \" 1] by fastforce +next + case (Until_ltln \1 \2) + then show ?case + using flatten_sigma_2_IH_lifting[of _ "\1 U\<^sub>n \2"] by fastforce +next + case (Release_ltln \1 \2) + then show ?case + unfolding ltln_weak_strong_stable_words + using flatten_sigma_2_IH_lifting[of _ "\1 R\<^sub>n \2"] by fastforce +next + case (WeakUntil_ltln \1 \2) + then show ?case + unfolding ltln_weak_strong_stable_words + using flatten_sigma_2_IH_lifting[of _ "\1 W\<^sub>n \2"] by fastforce +next +case (StrongRelease_ltln \1 \2) + then show ?case + using flatten_sigma_2_IH_lifting[of _ "\1 M\<^sub>n \2"] by fastforce +qed auto + +lemma ltln_strong_weak_stable_words_1: + "w \\<^sub>n \ U\<^sub>n \ \ w \\<^sub>n (\ and\<^sub>n (F\<^sub>n \[\\ \ w]\<^sub>\\<^sub>1)) W\<^sub>n \" (is "?lhs \ ?rhs") +proof + assume ?rhs + + moreover + + obtain i where "\_stable \ (suffix i w)" + by (metis MOST_nat less_Suc_eq suffix_\_stable) + hence "\\ \ \\ \ w. suffix i w \\<^sub>n G\<^sub>n \" + using \\_suffix \_elim \_stable_def by blast + + { + assume assm: "w \\<^sub>n G\<^sub>n (\ and\<^sub>n (F\<^sub>n \[\\ \ w]\<^sub>\\<^sub>1))" + hence "suffix i w \\<^sub>n (F\<^sub>n \)[\\ \ w]\<^sub>\\<^sub>1" + by simp + hence "suffix i w \\<^sub>n F\<^sub>n \" + by (blast dest: FG_advice_b2_helper[OF \\\ \ \\ \ w. suffix i w \\<^sub>n G\<^sub>n \\]) + hence "w \\<^sub>n \ U\<^sub>n \" + using assm by auto + } + + ultimately + + show ?lhs + by (meson ltln_weak_to_strong(1) semantics_ltln.simps(5) until_and_left_distrib) +next + assume ?lhs + + moreover + + have "\i. 
suffix i w \\<^sub>n \ \ suffix i w \\<^sub>n \[\\ \ w]\<^sub>\\<^sub>1" + using \\_suffix by (blast intro: FG_advice_b1) + + ultimately + + show "?rhs" + unfolding ltln_strong_weak_2 by fastforce +qed + +lemma ltln_strong_weak_stable_words_2: + "w \\<^sub>n \ M\<^sub>n \ \ w \\<^sub>n \ R\<^sub>n (\ and\<^sub>n (F\<^sub>n \[\\ \ w]\<^sub>\\<^sub>1))" (is "?lhs \ ?rhs") +proof + assume ?rhs + + moreover + + obtain i where "\_stable \ (suffix i w)" + by (metis MOST_nat less_Suc_eq suffix_\_stable) + hence "\\ \ \\ \ w. suffix i w \\<^sub>n G\<^sub>n \" + using \\_suffix \_elim \_stable_def by blast + + { + assume assm: "w \\<^sub>n G\<^sub>n (\ and\<^sub>n (F\<^sub>n \[\\ \ w]\<^sub>\\<^sub>1))" + hence "suffix i w \\<^sub>n (F\<^sub>n \)[\\ \ w]\<^sub>\\<^sub>1" + by simp + hence "suffix i w \\<^sub>n F\<^sub>n \" + by (blast dest: FG_advice_b2_helper[OF \\\ \ \\ \ w. suffix i w \\<^sub>n G\<^sub>n \\]) + hence "w \\<^sub>n \ M\<^sub>n \" + using assm by auto + } + + ultimately + + show ?lhs + using ltln_weak_to_strong(3) semantics_ltln.simps(5) strong_release_and_right_distrib by blast +next + assume ?lhs + + moreover + + have "\i. suffix i w \\<^sub>n \ \ suffix i w \\<^sub>n \[\\ \ w]\<^sub>\\<^sub>1" + using \\_suffix by (blast intro: FG_advice_b1) + + ultimately + + show "?rhs" + unfolding ltln_strong_weak_2 by fastforce +qed + +lemma ltln_strong_weak_stable_words: + "w \\<^sub>n \ U\<^sub>n \ \ w \\<^sub>n (\ and\<^sub>n (F\<^sub>n \[\\ (\ U\<^sub>n \) w]\<^sub>\\<^sub>1)) W\<^sub>n \" + "w \\<^sub>n \ M\<^sub>n \ \ w \\<^sub>n \ R\<^sub>n (\ and\<^sub>n (F\<^sub>n \[\\ (\ M\<^sub>n \) w]\<^sub>\\<^sub>1))" + unfolding ltln_strong_weak_stable_words_1 ltln_strong_weak_stable_words_2 FG_advice_restriction by simp+ + +lemma flatten_pi_2_IH_lifting: + assumes "\ \ subfrmlsn \" + assumes "suffix i w \\<^sub>n \[\\ \ (suffix i w)]\<^sub>\\<^sub>2 = suffix i w \\<^sub>n \" + shows "suffix i w \\<^sub>n \[\\ \ w]\<^sub>\\<^sub>2 = suffix i w \\<^sub>n \" + by (metis (no_types, lifting) inf.absorb_iff2 inf_assoc inf_commute assms(2) \\_suffix flatten_pi_2_intersection_eq[of "\\ \ w" \ "\\ \ w"] \\_semantics' subformulas\<^sub>\_subset[OF assms(1)]) + +lemma flatten_pi_2_correct: + "w \\<^sub>n \[\\ \ w]\<^sub>\\<^sub>2 \ w \\<^sub>n \" +proof (induction \ arbitrary: w) + case (And_ltln \1 \2) + then show ?case + using flatten_pi_2_IH_lifting[of _ "\1 and\<^sub>n \2" 0] by simp +next + case (Or_ltln \1 \2) + then show ?case + using flatten_pi_2_IH_lifting[of _ "\1 or\<^sub>n \2" 0] by simp +next + case (Next_ltln \) + then show ?case + using flatten_pi_2_IH_lifting[of _ "X\<^sub>n \" 1] by fastforce +next + case (Until_ltln \1 \2) + then show ?case + unfolding ltln_strong_weak_stable_words + using flatten_pi_2_IH_lifting[of _ "\1 U\<^sub>n \2"] by fastforce +next + case (Release_ltln \1 \2) + then show ?case + using flatten_pi_2_IH_lifting[of _ "\1 R\<^sub>n \2"] by fastforce +next + case (WeakUntil_ltln \1 \2) + then show ?case + using flatten_pi_2_IH_lifting[of _ "\1 W\<^sub>n \2"] by fastforce +next +case (StrongRelease_ltln \1 \2) + then show ?case + unfolding ltln_strong_weak_stable_words + using flatten_pi_2_IH_lifting[of _ "\1 M\<^sub>n \2"] by fastforce +qed auto + +subsection \Main Theorem\ + +text \Using the four previously defined functions we obtain our normal form.\ + +theorem normal_form_with_flatten_sigma_2: + "w \\<^sub>n \ \ + (\M \ subformulas\<^sub>\ \. \N \ subformulas\<^sub>\ \. + w \\<^sub>n \[M]\<^sub>\\<^sub>2 \ (\\ \ M. 
w \\<^sub>n G\<^sub>n (F\<^sub>n \[N]\<^sub>\\<^sub>1)) \ (\\ \ N. w \\<^sub>n F\<^sub>n (G\<^sub>n \[M]\<^sub>\\<^sub>1)))" (is "?lhs \ ?rhs") +proof + assume ?lhs + then have "w \\<^sub>n \[\\ \ w]\<^sub>\\<^sub>2" + using flatten_sigma_2_correct by blast + then show ?rhs + using \\_subformulas\<^sub>\ \\_subformulas\<^sub>\ \\_implies_GF \\_implies_FG by metis +next + assume ?rhs + then obtain M N where "w \\<^sub>n \[M]\<^sub>\\<^sub>2" and "M \ \\ \ w" and "N \ \\ \ w" + using X_\\_Y_\\ by blast + then have "w \\<^sub>n \[\\ \ w]\<^sub>\\<^sub>2" + using flatten_sigma_2_monotone by blast + then show ?lhs + using flatten_sigma_2_correct by blast +qed + +theorem normal_form_with_flatten_pi_2: + "w \\<^sub>n \ \ + (\M \ subformulas\<^sub>\ \. \N \ subformulas\<^sub>\ \. + w \\<^sub>n \[N]\<^sub>\\<^sub>2 \ (\\ \ M. w \\<^sub>n G\<^sub>n (F\<^sub>n \[N]\<^sub>\\<^sub>1)) \ (\\ \ N. w \\<^sub>n F\<^sub>n (G\<^sub>n \[M]\<^sub>\\<^sub>1)))" (is "?lhs \ ?rhs") +proof + assume ?lhs + then have "w \\<^sub>n \[\\ \ w]\<^sub>\\<^sub>2" + using flatten_pi_2_correct by blast + then show ?rhs + using \\_subformulas\<^sub>\ \\_subformulas\<^sub>\ \\_implies_GF \\_implies_FG by metis +next + assume ?rhs + then obtain M N where "w \\<^sub>n \[N]\<^sub>\\<^sub>2" and "M \ \\ \ w" and "N \ \\ \ w" + using X_\\_Y_\\ by metis + then have "w \\<^sub>n \[\\ \ w]\<^sub>\\<^sub>2" + using flatten_pi_2_monotone by metis + then show ?lhs + using flatten_pi_2_correct by blast +qed + +end \ No newline at end of file diff --git a/thys/LTL_Normal_Form/Normal_Form_Code_Export.thy b/thys/LTL_Normal_Form/Normal_Form_Code_Export.thy new file mode 100644 --- /dev/null +++ b/thys/LTL_Normal_Form/Normal_Form_Code_Export.thy @@ -0,0 +1,134 @@ +(* + Author: Salomon Sickert + License: BSD +*) + +section \Code Export\ + +theory Normal_Form_Code_Export imports + LTL.Code_Equations + LTL.Rewriting + LTL.Disjunctive_Normal_Form + HOL.String + Normal_Form +begin + +fun flatten_pi_1_list :: "String.literal ltln \ String.literal ltln list \ String.literal ltln" + where + "flatten_pi_1_list (\\<^sub>1 U\<^sub>n \\<^sub>2) M = (if (\\<^sub>1 U\<^sub>n \\<^sub>2) \ set M then (flatten_pi_1_list \\<^sub>1 M) W\<^sub>n (flatten_pi_1_list \\<^sub>2 M) else false\<^sub>n)" +| "flatten_pi_1_list (\\<^sub>1 W\<^sub>n \\<^sub>2) M = (flatten_pi_1_list \\<^sub>1 M) W\<^sub>n (flatten_pi_1_list \\<^sub>2 M)" +| "flatten_pi_1_list (\\<^sub>1 M\<^sub>n \\<^sub>2) M = (if (\\<^sub>1 M\<^sub>n \\<^sub>2) \ set M then (flatten_pi_1_list \\<^sub>1 M) R\<^sub>n (flatten_pi_1_list \\<^sub>2 M) else false\<^sub>n)" +| "flatten_pi_1_list (\\<^sub>1 R\<^sub>n \\<^sub>2) M = (flatten_pi_1_list \\<^sub>1 M) R\<^sub>n (flatten_pi_1_list \\<^sub>2 M)" +| "flatten_pi_1_list (\\<^sub>1 and\<^sub>n \\<^sub>2) M = (flatten_pi_1_list \\<^sub>1 M) and\<^sub>n (flatten_pi_1_list \\<^sub>2 M)" +| "flatten_pi_1_list (\\<^sub>1 or\<^sub>n \\<^sub>2) M = (flatten_pi_1_list \\<^sub>1 M) or\<^sub>n (flatten_pi_1_list \\<^sub>2 M)" +| "flatten_pi_1_list (X\<^sub>n \) M = X\<^sub>n (flatten_pi_1_list \ M)" +| "flatten_pi_1_list \ _ = \" + +fun flatten_sigma_1_list :: "String.literal ltln \ String.literal ltln list \ String.literal ltln" +where + "flatten_sigma_1_list (\\<^sub>1 U\<^sub>n \\<^sub>2) N = (flatten_sigma_1_list \\<^sub>1 N) U\<^sub>n (flatten_sigma_1_list \\<^sub>2 N)" +| "flatten_sigma_1_list (\\<^sub>1 W\<^sub>n \\<^sub>2) N = (if (\\<^sub>1 W\<^sub>n \\<^sub>2) \ set N then true\<^sub>n else (flatten_sigma_1_list \\<^sub>1 N) U\<^sub>n (flatten_sigma_1_list 
\\<^sub>2 N))" +| "flatten_sigma_1_list (\\<^sub>1 M\<^sub>n \\<^sub>2) N = (flatten_sigma_1_list \\<^sub>1 N) M\<^sub>n (flatten_sigma_1_list \\<^sub>2 N)" +| "flatten_sigma_1_list (\\<^sub>1 R\<^sub>n \\<^sub>2) N = (if (\\<^sub>1 R\<^sub>n \\<^sub>2) \ set N then true\<^sub>n else (flatten_sigma_1_list \\<^sub>1 N) M\<^sub>n (flatten_sigma_1_list \\<^sub>2 N))" +| "flatten_sigma_1_list (\\<^sub>1 and\<^sub>n \\<^sub>2) N = (flatten_sigma_1_list \\<^sub>1 N) and\<^sub>n (flatten_sigma_1_list \\<^sub>2 N)" +| "flatten_sigma_1_list (\\<^sub>1 or\<^sub>n \\<^sub>2) N = (flatten_sigma_1_list \\<^sub>1 N) or\<^sub>n (flatten_sigma_1_list \\<^sub>2 N)" +| "flatten_sigma_1_list (X\<^sub>n \) N = X\<^sub>n (flatten_sigma_1_list \ N)" +| "flatten_sigma_1_list \ _ = \" + +fun flatten_sigma_2_list :: "String.literal ltln \ String.literal ltln list \ String.literal ltln" +where + "flatten_sigma_2_list (\ U\<^sub>n \) M = (flatten_sigma_2_list \ M) U\<^sub>n (flatten_sigma_2_list \ M)" +| "flatten_sigma_2_list (\ W\<^sub>n \) M = (flatten_sigma_2_list \ M) U\<^sub>n ((flatten_sigma_2_list \ M) or\<^sub>n (G\<^sub>n (flatten_pi_1_list \ M)))" +| "flatten_sigma_2_list (\ M\<^sub>n \) M = (flatten_sigma_2_list \ M) M\<^sub>n (flatten_sigma_2_list \ M)" +| "flatten_sigma_2_list (\ R\<^sub>n \) M = ((flatten_sigma_2_list \ M) or\<^sub>n (G\<^sub>n (flatten_pi_1_list \ M))) M\<^sub>n (flatten_sigma_2_list \ M)" +| "flatten_sigma_2_list (\ and\<^sub>n \) M = (flatten_sigma_2_list \ M) and\<^sub>n (flatten_sigma_2_list \ M)" +| "flatten_sigma_2_list (\ or\<^sub>n \) M = (flatten_sigma_2_list \ M) or\<^sub>n (flatten_sigma_2_list \ M)" +| "flatten_sigma_2_list (X\<^sub>n \) M = X\<^sub>n (flatten_sigma_2_list \ M)" +| "flatten_sigma_2_list \ _ = \" + +lemma flatten_code_equations[simp]: + "\[set M]\<^sub>\\<^sub>1 = flatten_pi_1_list \ M" + "\[set M]\<^sub>\\<^sub>1 = flatten_sigma_1_list \ M" + "\[set M]\<^sub>\\<^sub>2 = flatten_sigma_2_list \ M" + by (induction \) auto + +abbreviation "and_list \ foldl And_ltln true\<^sub>n" + +abbreviation "or_list \ foldl Or_ltln false\<^sub>n" + +definition "normal_form_disjunct (\ :: String.literal ltln) M N + \ (flatten_sigma_2_list \ M) + and\<^sub>n (and_list (map (\\. G\<^sub>n (F\<^sub>n (flatten_sigma_1_list \ N))) M) + and\<^sub>n (and_list (map (\\. F\<^sub>n (G\<^sub>n (flatten_pi_1_list \ M))) N)))" + +definition "normal_form (\ :: String.literal ltln) + \ or_list (map (\(M, N). normal_form_disjunct \ M N) (advice_sets \))" + +lemma and_list_semantic: "w \\<^sub>n and_list xs \ (\x \ set xs. w \\<^sub>n x)" + by (induction xs rule: rev_induct) auto + +lemma or_list_semantic: "w \\<^sub>n or_list xs \ (\x \ set xs. w \\<^sub>n x)" + by (induction xs rule: rev_induct) auto + +theorem normal_form_correct: + "w \\<^sub>n \ \ w \\<^sub>n normal_form \" +proof + assume "w \\<^sub>n \" + then obtain M N where "M \ subformulas\<^sub>\ \" and "N \ subformulas\<^sub>\ \" + and c1: "w \\<^sub>n \[M]\<^sub>\\<^sub>2" and c2: "\\ \ M. w \\<^sub>n G\<^sub>n (F\<^sub>n \[N]\<^sub>\\<^sub>1)" and c3: "\\ \ N. 
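+text\Spelled out, as a paraphrase of the two definitions above and nothing more: normal_form $\varphi$
+  is the disjunction, over all advice sets $(M, N)$ of $\varphi$, of the conjunction of the
+  $\Sigma_2$-flattening of $\varphi$ with respect to $M$, the formulas $G F (\psi[N]_{\Sigma_1})$ for
+  $\psi \in M$, and the formulas $F G (\psi[M]_{\Pi_1})$ for $\psi \in N$; this is exactly the shape
+  treated by the correctness theorem normal_form_correct.\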
w \\<^sub>n F\<^sub>n (G\<^sub>n \[M]\<^sub>\\<^sub>1)" + using normal_form_with_flatten_sigma_2 by metis + then obtain ms ns where "M = set ms" and "N = set ns" and ms_ns_in: "(ms, ns) \ set (advice_sets \)" + by (meson advice_sets_subformulas) + then have "w \\<^sub>n normal_form_disjunct \ ms ns" + using c1 c2 c3 by (simp add: and_list_semantic normal_form_disjunct_def) + then show "w \\<^sub>n normal_form \" + using normal_form_def or_list_semantic ms_ns_in by fastforce +next + assume "w \\<^sub>n normal_form \" + then obtain ms ns where "(ms, ns) \ set (advice_sets \)" + and "w \\<^sub>n normal_form_disjunct \ ms ns" + unfolding normal_form_def or_list_semantic by force + then have "set ms \ subformulas\<^sub>\ \" and "set ns \ subformulas\<^sub>\ \" + and c1: "w \\<^sub>n \[set ms]\<^sub>\\<^sub>2" and c2: "\\ \ set ms. w \\<^sub>n G\<^sub>n (F\<^sub>n \[set ns]\<^sub>\\<^sub>1)" and c3: "\\ \ set ns. w \\<^sub>n F\<^sub>n (G\<^sub>n \[set ms]\<^sub>\\<^sub>1)" + using advice_sets_element_subfrmlsn + by (auto simp: and_list_semantic normal_form_disjunct_def) blast + then show "w \\<^sub>n \" + using normal_form_with_flatten_sigma_2 by metis +qed + +definition "normal_form_with_simplifier (\ :: String.literal ltln) + \ min_dnf (simplify Slow (normal_form (simplify Slow \)))" + +lemma ltl_semantics_min_dnf: + "w \\<^sub>n \ \ (\C \ min_dnf \. \\. \ |\| C \ w \\<^sub>n \)" (is "?lhs \ ?rhs") +proof + let ?M = "{\. w \\<^sub>n \}" + assume ?lhs + hence "?M \\<^sub>P \" + using ltl_models_equiv_prop_entailment by blast + then obtain M' where "fset M' \ ?M" and "M' \ min_dnf \" + using min_dnf_iff_prop_assignment_subset by blast + thus ?rhs + by (meson in_mono mem_Collect_eq notin_fset) +next + let ?M = "{\. w \\<^sub>n \}" + assume ?rhs + then obtain M' where "fset M' \ ?M" and "M' \ min_dnf \" + using notin_fset by fastforce + hence "?M \\<^sub>P \" + using min_dnf_iff_prop_assignment_subset by blast + thus ?lhs + using ltl_models_equiv_prop_entailment by blast +qed + +theorem + "w \\<^sub>n \ \ (\C \ (normal_form_with_simplifier \). \\. \ |\| C \ w \\<^sub>n \)" (is "?lhs \ ?rhs") + unfolding normal_form_with_simplifier_def ltl_semantics_min_dnf[symmetric] + using normal_form_correct by simp + +text \In order to export the code run \texttt{isabelle build -D [PATH] -e}.\ + +export_code normal_form in SML +export_code normal_form_with_simplifier in SML + +end \ No newline at end of file diff --git a/thys/LTL_Normal_Form/Normal_Form_Complexity.thy b/thys/LTL_Normal_Form/Normal_Form_Complexity.thy new file mode 100644 --- /dev/null +++ b/thys/LTL_Normal_Form/Normal_Form_Complexity.thy @@ -0,0 +1,650 @@ +(* + Author: Salomon Sickert + License: BSD +*) + +section \Size Bounds\ + +text \We prove an exponential upper bound for the normalisation procedure. Moreover, we show that + the number of proper subformulas, which correspond to states very-weak alternating automata + (A1W), is only linear for each disjunct.\ + +theory Normal_Form_Complexity imports + Normal_Form +begin + +subsection \Inequalities and Identities\ + +lemma inequality_1: + "y > 0 \ y + 3 \ (2 :: nat) ^ (y + 1)" + by (induction y) (simp, fastforce) + +lemma inequality_2: + "x > 0 \ y > 0 \ ((2 :: nat) ^ (x + 1)) + (2 ^ (y + 1)) \ (2 ^ (x + y + 1))" + by (induction x; simp; induction y; simp; fastforce) + +lemma size_gr_0: + "size (\ :: 'a ltln) > 0" + by (cases \) simp_all + +lemma sum_associative: + "finite X \ (\x \ X. f x + c) = (\x \ X. 
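+text\Quick sanity check of the two inequalities above (illustration only, nothing proved here): both
+  are tight at the smallest admissible arguments, since $1 + 3 = 4 = 2^{2}$ and $2^{2} + 2^{2} = 8 = 2^{3}$,
+  and the gap only widens for larger $x$ and $y$.\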
f x) + card X * c" + by (induction rule: finite_induct) simp_all + +subsection \Length\ + +text \We prove that the length (size) of the resulting formula in normal form is at most exponential.\ + +lemma flatten_sigma_1_length: + "size (\[N]\<^sub>\\<^sub>1) \ size \" + by (induction \) simp_all + +lemma flatten_pi_1_length: + "size (\[M]\<^sub>\\<^sub>1) \ size \" + by (induction \) simp_all + +lemma flatten_sigma_2_length: + "size (\[M]\<^sub>\\<^sub>2) \ 2 ^ (size \ + 1)" +proof (induction \) + case (And_ltln \1 \2) + hence "size (\1 and\<^sub>n \2)[M]\<^sub>\\<^sub>2 \ (2 ^ (size \1 + 1)) + (2 ^ (size \2 + 1)) + 1" + by simp + also + have "\ \ 2 ^ (size \1 + size \2 + 1) + 1 " + using inequality_2[OF size_gr_0 size_gr_0] by simp + also + have "\ \ 2 ^ (size (\1 and\<^sub>n \2) + 1)" + by simp + finally + show ?case. +next + case (Or_ltln \1 \2) + hence "size (\1 or\<^sub>n \2)[M]\<^sub>\\<^sub>2 \ (2 ^ (size \1 + 1)) + (2 ^ (size \2 + 1)) + 1" + by simp + also + have "\ \ 2 ^ (size \1 + size \2 + 1) + 1 " + using inequality_2[OF size_gr_0 size_gr_0] by simp + also + have "\ \ 2 ^ (size (\1 or\<^sub>n \2) + 1)" + by simp + finally + show ?case. +next + case (Next_ltln \) + then show ?case + using le_Suc_eq by fastforce +next + case (WeakUntil_ltln \1 \2) + hence "size (\1 W\<^sub>n \2)[M]\<^sub>\\<^sub>2 \ 2 ^ (size \1 + 1) + 2 ^ (size \2 + 1) + size \1 + 4" + by (simp, simp add: add.commute add_mono flatten_pi_1_length) + also + have "\ \ 2 ^ (size \2 + 1) + 2 * 2 ^ (size \1 + 1) + 1" + using inequality_1[OF size_gr_0, of \1] by simp + also + have "\ \ 2 * (2 ^ (size \1 + 1) + 2 ^ (size \2 + 1))" + by simp + also + have "\ \ 2 * 2 ^ (size \1 + size \2 + 1)" + using inequality_2[OF size_gr_0 size_gr_0] mult_le_mono2 by blast + also + have "\ = 2 ^ (size (\1 W\<^sub>n \2) + 1)" + by simp + finally + show ?case. +next + case (StrongRelease_ltln \1 \2) + hence "size (\1 M\<^sub>n \2)[M]\<^sub>\\<^sub>2 \ (2 ^ (size \1 + 1)) + (2 ^ (size \2 + 1)) + 1" + by simp + also + have "\ \ 2 ^ (size \1 + size \2 + 1) + 1 " + using inequality_2[OF size_gr_0 size_gr_0] by simp + also + have "\ \ 2 ^ (size (\1 M\<^sub>n \2) + 1)" + by simp + finally + show ?case. +next + case (Until_ltln \1 \2) + hence "size (\1 U\<^sub>n \2)[M]\<^sub>\\<^sub>2 \ (2 ^ (size \1 + 1)) + (2 ^ (size \2 + 1)) + 1" + by simp + also + have "\ \ 2 ^ (size \1 + size \2 + 1) + 1 " + using inequality_2[OF size_gr_0 size_gr_0] by simp + also + have "\ \ 2 ^ (size (\1 U\<^sub>n \2) + 1)" + by simp + finally + show ?case. +next + case (Release_ltln \1 \2) + hence "size (\1 R\<^sub>n \2)[M]\<^sub>\\<^sub>2 \ 2 ^ (size \1 + 1) + 2 ^ (size \2 + 1) + size \2 + 4" + by (simp, simp add: add.commute add_mono flatten_pi_1_length) + also + have "\ \ 2 ^ (size \1 + 1) + 2 * 2 ^ (size \2 + 1) + 1" + using inequality_1[OF size_gr_0, of \2] by simp + also + have "\ \ 2 * (2 ^ (size \1 + 1) + 2 ^ (size \2 + 1))" + by simp + also + have "\ \ 2 * 2 ^ (size \1 + size \2 + 1)" + using inequality_2[OF size_gr_0 size_gr_0] mult_le_mono2 by blast + also + have "\ = 2 ^ (size (\1 R\<^sub>n \2) + 1)" + by simp + finally + show ?case . 
+qed auto + +lemma flatten_pi_2_length: + "size (\[N]\<^sub>\\<^sub>2) \ 2 ^ (size \ + 1)" +proof (induction \) + case (And_ltln \1 \2) + hence "size (\1 and\<^sub>n \2)[N]\<^sub>\\<^sub>2 \ (2 ^ (size \1 + 1)) + (2 ^ (size \2 + 1)) + 1" + by simp + also + have "\ \ 2 ^ (size \1 + size \2 + 1) + 1 " + using inequality_2[OF size_gr_0 size_gr_0] by simp + also + have "\ \ 2 ^ (size (\1 and\<^sub>n \2) + 1)" + by simp + finally + show ?case. +next + case (Or_ltln \1 \2) + hence "size (\1 or\<^sub>n \2)[N]\<^sub>\\<^sub>2 \ (2 ^ (size \1 + 1)) + (2 ^ (size \2 + 1)) + 1" + by simp + also + have "\ \ 2 ^ (size \1 + size \2 + 1) + 1 " + using inequality_2[OF size_gr_0 size_gr_0] by simp + also + have "\ \ 2 ^ (size (\1 or\<^sub>n \2) + 1)" + by simp + finally + show ?case. +next + case (Next_ltln \) + then show ?case + using le_Suc_eq by fastforce +next + case (Until_ltln \1 \2) + hence "size (\1 U\<^sub>n \2)[N]\<^sub>\\<^sub>2 \ 2 ^ (size \1 + 1) + 2 ^ (size \2 + 1) + size \2 + 4" + by (simp, simp add: add.commute add_mono flatten_sigma_1_length) + also + have "\ \ 2 ^ (size \1 + 1) + 2 * 2 ^ (size \2 + 1) + 1" + using inequality_1[OF size_gr_0, of \2] by simp + also + have "\ \ 2 * (2 ^ (size \1 + 1) + 2 ^ (size \2 + 1))" + by simp + also + have "\ \ 2 * 2 ^ (size \1 + size \2 + 1)" + using inequality_2[OF size_gr_0 size_gr_0] mult_le_mono2 by blast + also + have "\ = 2 ^ (size (\1 U\<^sub>n \2) + 1)" + by simp + finally + show ?case. +next + case (Release_ltln \1 \2) + hence "size (\1 R\<^sub>n \2)[N]\<^sub>\\<^sub>2 \ (2 ^ (size \1 + 1)) + (2 ^ (size \2 + 1)) + 1" + by simp + also + have "\ \ 2 ^ (size \1 + size \2 + 1) + 1 " + using inequality_2[OF size_gr_0 size_gr_0] by simp + also + have "\ \ 2 ^ (size (\1 R\<^sub>n \2) + 1)" + by simp + finally + show ?case. +next + case (WeakUntil_ltln \1 \2) + hence "size (\1 W\<^sub>n \2)[N]\<^sub>\\<^sub>2 \ (2 ^ (size \1 + 1)) + (2 ^ (size \2 + 1)) + 1" + by simp + also + have "\ \ 2 ^ (size \1 + size \2 + 1) + 1 " + using inequality_2[OF size_gr_0 size_gr_0] by simp + also + have "\ \ 2 ^ (size (\1 W\<^sub>n \2) + 1)" + by simp + finally + show ?case. +next + case (StrongRelease_ltln \1 \2) + hence "size (\1 M\<^sub>n \2)[N]\<^sub>\\<^sub>2 \ 2 ^ (size \1 + 1) + 2 ^ (size \2 + 1) + size \1 + 4" + by (simp, simp add: add.commute add_mono flatten_sigma_1_length) + also + have "\ \ 2 ^ (size \2 + 1) + 2 * 2 ^ (size \1 + 1) + 1" + using inequality_1[OF size_gr_0, of \1] by simp + also + have "\ \ 2 * (2 ^ (size \1 + 1) + 2 ^ (size \2 + 1))" + by simp + also + have "\ \ 2 * 2 ^ (size \1 + size \2 + 1)" + using inequality_2[OF size_gr_0 size_gr_0] mult_le_mono2 by blast + also + have "\ = 2 ^ (size (\1 M\<^sub>n \2) + 1)" + by simp + finally + show ?case . +qed auto + +definition "normal_form_length_upper_bound" + where "normal_form_length_upper_bound \ + \ (2 :: nat) ^ (size \) * (2 ^ (size \ + 1) + 2 * (size \ + 2) ^ 2)" + +definition "normal_form_disjunct_with_flatten_pi_2_length" + where "normal_form_disjunct_with_flatten_pi_2_length \ M N + \ size (\[N]\<^sub>\\<^sub>2) + (\\ \ M. size (\[N]\<^sub>\\<^sub>1) + 2) + (\\ \ N. size (\[M]\<^sub>\\<^sub>1) + 2)" + +definition "normal_form_with_flatten_pi_2_length" + where "normal_form_with_flatten_pi_2_length \ + \ \(M, N) \ {(M, N) | M N. M \ subformulas\<^sub>\ \ \ N \ subformulas\<^sub>\ \}. 
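+text\To make the growth concrete (an illustrative evaluation of the definition above, not a new
+  result): for a formula of size $n$ the bound equals $2^{n}\,(2^{n+1} + 2(n+2)^{2})$, e.g.\
+  $8 \cdot (16 + 50) = 528$ for $n = 3$; the dominant term is $2^{2n+1}$, so the bound is singly
+  exponential in $n$.\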
normal_form_disjunct_with_flatten_pi_2_length \ M N" + +definition "normal_form_disjunct_with_flatten_sigma_2_length" + where "normal_form_disjunct_with_flatten_sigma_2_length \ M N + \ size (\[M]\<^sub>\\<^sub>2) + (\\ \ M. size (\[N]\<^sub>\\<^sub>1) + 2) + (\\ \ N. size (\[M]\<^sub>\\<^sub>1) + 2)" + +definition "normal_form_with_flatten_sigma_2_length" + where "normal_form_with_flatten_sigma_2_length \ + \ \(M, N) \ {(M, N) | M N. M \ subformulas\<^sub>\ \ \ N \ subformulas\<^sub>\ \}. normal_form_disjunct_with_flatten_sigma_2_length \ M N" + +lemma normal_form_disjunct_length_upper_bound: + assumes + "M \ subformulas\<^sub>\ \" + "N \ subformulas\<^sub>\ \" + shows + "normal_form_disjunct_with_flatten_sigma_2_length \ M N \ 2 ^ (size \ + 1) + 2 * (size \ + 2) ^ 2" (is "?thesis1") + "normal_form_disjunct_with_flatten_pi_2_length \ M N \ 2 ^ (size \ + 1) + 2 * (size \ + 2) ^ 2" (is "?thesis2") +proof - + let ?n = "size \" + let ?b = "2 ^ (?n + 1) + ?n * (?n + 2) + ?n * (?n + 2)" + + have finite_M: "finite M" and card_M: "card M \ ?n" + by (metis assms(1) finite_subset subformulas\<^sub>\_finite) + (meson assms(1) card_mono order_trans subformulas\<^sub>\_subfrmlsn subfrmlsn_card subfrmlsn_finite) + + have finite_N: "finite N" and card_N: "card N \ ?n" + by (metis assms(2) finite_subset subformulas\<^sub>\_finite) + (meson assms(2) card_mono order_trans subformulas\<^sub>\_subfrmlsn subfrmlsn_card subfrmlsn_finite) + + have size_M: "\\. \ \ M \ size \ \ size \" + and size_N: "\\. \ \ N \ size \ \ size \" + by (metis assms(1) eq_iff in_mono less_imp_le subformulas\<^sub>\_subfrmlsn subfrmlsn_size) + (metis assms(2) eq_iff in_mono less_imp_le subformulas\<^sub>\_subfrmlsn subfrmlsn_size) + + hence size_M': "\\. \ \ M \ size (\[N]\<^sub>\\<^sub>1) \ size \" + and size_N': "\\. \ \ N \ size (\[M]\<^sub>\\<^sub>1) \ size \" + using flatten_sigma_1_length flatten_pi_1_length order_trans by blast+ + + have "(\\ \ M. size (\[N]\<^sub>\\<^sub>1)) \ ?n * ?n" + and "(\\ \ N. size (\[M]\<^sub>\\<^sub>1)) \ ?n * ?n" + using sum_bounded_above[of M, OF size_M'] sum_bounded_above[of N, OF size_N'] + using mult_le_mono[OF card_M] mult_le_mono[OF card_N] by fastforce+ + + hence "(\\ \ M. (size (\[N]\<^sub>\\<^sub>1) + 2)) \ ?n * (?n + 2)" + and "(\\ \ N. (size (\[M]\<^sub>\\<^sub>1) + 2)) \ ?n * (?n + 2)" + unfolding sum_associative[OF finite_M] sum_associative[OF finite_N] + using card_M card_N by simp_all + + hence "normal_form_disjunct_with_flatten_sigma_2_length \ M N \ ?b" + and "normal_form_disjunct_with_flatten_pi_2_length \ M N \ ?b" + unfolding normal_form_disjunct_with_flatten_sigma_2_length_def normal_form_disjunct_with_flatten_pi_2_length_def + by (metis (no_types, lifting) flatten_sigma_2_length flatten_pi_2_length add_le_mono)+ + + thus ?thesis1 and ?thesis2 + by (simp_all add: power2_eq_square) +qed + +theorem normal_form_length_upper_bound: + "normal_form_with_flatten_sigma_2_length \ \ normal_form_length_upper_bound \" (is "?thesis1") + "normal_form_with_flatten_pi_2_length \ \ normal_form_length_upper_bound \" (is "?thesis2") +proof - + let ?n = "size \" + let ?b = "2 ^ (size \ + 1) + 2 * (size \ + 2) ^ 2" + + have "{(M, N) | M N. M \ subformulas\<^sub>\ \ \ N \ subformulas\<^sub>\ \} = {M. M \ subformulas\<^sub>\ \} \ {N. N \ subformulas\<^sub>\ \}" (is "?choices = _") + by simp + + moreover + + have "card {M. M \ subformulas\<^sub>\ \} = (2 :: nat) ^ (card (subformulas\<^sub>\ \))" + and "card {N. 
N \ subformulas\<^sub>\ \} = (2 :: nat) ^ (card (subformulas\<^sub>\ \))" + using card_Pow unfolding Pow_def using subformulas\<^sub>\_finite subformulas\<^sub>\_finite by auto + + ultimately + + have "card ?choices \ 2 ^ (card (subfrmlsn \))" (is "?f \ _") + by (metis subformulas\<^sub>\\<^sub>\_card card_cartesian_product subformulas\<^sub>\\<^sub>\_subfrmlsn subfrmlsn_finite Suc_1 card_mono lessI power_add power_increasing_iff) + + moreover + + have "(2 :: nat) ^ (card (subfrmlsn \)) \ 2 ^ ?n" + using power_increasing[of _ _ "2 :: nat"] by (simp add: subfrmlsn_card) + + ultimately + + have bar: "of_nat (card ?choices) \ (2 :: nat) ^ ?n" + using of_nat_id by presburger + + moreover + + have "normal_form_with_flatten_sigma_2_length \ \ of_nat (card ?choices) * ?b" + unfolding normal_form_with_flatten_sigma_2_length_def + by (rule sum_bounded_above) (insert normal_form_disjunct_length_upper_bound, auto) + + moreover + + have "normal_form_with_flatten_pi_2_length \ \ of_nat (card ?choices) * ?b" + unfolding normal_form_with_flatten_pi_2_length_def + by (rule sum_bounded_above) (insert normal_form_disjunct_length_upper_bound, auto) + + ultimately + + show ?thesis1 and ?thesis2 + unfolding normal_form_length_upper_bound_def + using mult_le_mono1 order_trans by blast+ +qed + +subsection \Proper Subformulas\ + +text \We prove that the number of (proper) subformulas (sf) in a disjunct is linear and not exponential.\ + +fun sf :: "'a ltln \ 'a ltln set" +where + "sf (\ and\<^sub>n \) = sf \ \ sf \" +| "sf (\ or\<^sub>n \) = sf \ \ sf \" +| "sf (X\<^sub>n \) = {X\<^sub>n \} \ sf \" +| "sf (\ U\<^sub>n \) = {\ U\<^sub>n \} \ sf \ \ sf \" +| "sf (\ R\<^sub>n \) = {\ R\<^sub>n \} \ sf \ \ sf \" +| "sf (\ W\<^sub>n \) = {\ W\<^sub>n \} \ sf \ \ sf \" +| "sf (\ M\<^sub>n \) = {\ M\<^sub>n \} \ sf \ \ sf \" +| "sf \ = {}" + +lemma sf_finite: + "finite (sf \)" + by (induction \) auto + +lemma sf_subset_subfrmlsn: + "sf \ \ subfrmlsn \" + by (induction \) auto + +lemma sf_size: + "\ \ sf \ \ size \ \ size \" + by (induction \) auto + +lemma sf_sf_subset: + "\ \ sf \ \ sf \ \ sf \" + by (induction \) auto + +lemma subfrmlsn_sf_subset: + "\ \ subfrmlsn \ \ sf \ \ sf \" + by (induction \) auto + +lemma sf_subset_insert: + assumes "sf \ \ insert \ X" + assumes "\ \ subfrmlsn \" + assumes "\ \ \" + shows "sf \ \ X" +proof - + have "sf \ \ sf \ - {\}" + using assms(2,3) subfrmlsn_sf_subset sf_size subfrmlsn_size by fastforce + thus "?thesis" + using assms(1) by auto +qed + +lemma flatten_pi_1_sf_subset: + "sf (\[M]\<^sub>\\<^sub>1) \ (\\\sf \. sf (\[M]\<^sub>\\<^sub>1))" + by (induction \) auto + +lemma flatten_sigma_1_sf_subset: + "sf (\[M]\<^sub>\\<^sub>1) \ (\\\sf \. sf (\[M]\<^sub>\\<^sub>1))" + by (induction \) auto + +lemma flatten_sigma_2_sf_subset: + "sf (\[M]\<^sub>\\<^sub>2) \ (\\\sf \. sf (\[M]\<^sub>\\<^sub>2))" + by (induction \) auto + +lemma sf_set1: + "sf (\[M]\<^sub>\\<^sub>2) \ sf (\[M]\<^sub>\\<^sub>1) \ (\\ \ (sf \). 
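+text\Informal summary of the theorem flatten_card_sf proved below (no new claim): taken over all
+  proper subformulas $\psi$ of $\varphi$, the union of the sets $sf(\psi[N]_{\Sigma_1})$ has at most
+  $\mathit{card}\,(sf\ \varphi)$ elements, likewise for the sets $sf(\psi[M]_{\Pi_1})$, and
+  $sf(\varphi[M]_{\Sigma_2}) \cup sf(\varphi[M]_{\Pi_1})$ has at most $3 \cdot \mathit{card}\,(sf\ \varphi)$
+  elements; hence, in contrast to the exponential length bound of the previous subsection, each
+  disjunct has only linearly many proper subformulas.\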
(sf (\[M]\<^sub>\\<^sub>2) \ sf (\[M]\<^sub>\\<^sub>1)))" + by (induction \) auto + +(* TODO: could be moved *) +lemma ltln_not_idempotent [simp]: + "\ and\<^sub>n \ \ \" "\ and\<^sub>n \ \ \" "\ \ \ and\<^sub>n \" "\ \ \ and\<^sub>n \" + "\ or\<^sub>n \ \ \" "\ or\<^sub>n \ \ \" "\ \ \ or\<^sub>n \" "\ \ \ or\<^sub>n \" + "X\<^sub>n \ \ \" "\ \ X\<^sub>n \" + "\ U\<^sub>n \ \ \" "\ \ \ U\<^sub>n \" "\ U\<^sub>n \ \ \" "\ \ \ U\<^sub>n \" + "\ R\<^sub>n \ \ \" "\ \ \ R\<^sub>n \" "\ R\<^sub>n \ \ \" "\ \ \ R\<^sub>n \" + "\ W\<^sub>n \ \ \" "\ \ \ W\<^sub>n \" "\ W\<^sub>n \ \ \" "\ \ \ W\<^sub>n \" + "\ M\<^sub>n \ \ \" "\ \ \ M\<^sub>n \" "\ M\<^sub>n \ \ \" "\ \ \ M\<^sub>n \" + by (induction \; force)+ + +lemma flatten_card_sf_induct: + assumes "finite X" + assumes "\x. x \ X \ sf x \ X" + shows "card (\\\X. sf (\[N]\<^sub>\\<^sub>1)) \ card X + \ card (\\\X. sf (\[M]\<^sub>\\<^sub>1)) \ card X + \ card (\\\X. sf (\[M]\<^sub>\\<^sub>2) \ sf (\[M]\<^sub>\\<^sub>1)) \ 3 * card X" + using assms(2) +proof (induction rule: finite_ranking_induct[where f = size, OF \finite X\]) + case (2 \ X) + { + assume "\ \ X" + hence "\\. \ \ X \ sf \ \ X" + using 2(2,4) sf_subset_subfrmlsn subfrmlsn_size by fastforce + hence "card (\\\X. sf (\[N]\<^sub>\\<^sub>1)) \ card X" + and "card (\\\X. sf (\[M]\<^sub>\\<^sub>1)) \ card X" + and "card (\\\X. sf (\[M]\<^sub>\\<^sub>2) \ sf (\[M]\<^sub>\\<^sub>1)) \ 3 * card X" + using 2(3) by simp+ + + moreover + + let ?lower1 = "\\ \ insert \ X. sf (\[N]\<^sub>\\<^sub>1)" + let ?upper1 = "(\\ \ X. sf (\[N]\<^sub>\\<^sub>1)) \ {\[N]\<^sub>\\<^sub>1}" + + let ?lower2 = "\\ \ insert \ X. sf (\[M]\<^sub>\\<^sub>1)" + let ?upper2 = "(\\ \ X. sf (\[M]\<^sub>\\<^sub>1)) \ {\[M]\<^sub>\\<^sub>1}" + + let ?lower3 = "\\ \ insert \ X. sf (\[M]\<^sub>\\<^sub>2) \ sf (\[M]\<^sub>\\<^sub>1)" + let ?upper3_cases = "{\[M]\<^sub>\\<^sub>2, \[M]\<^sub>\\<^sub>1} \ (case \ of (\1 W\<^sub>n \2) \ {G\<^sub>n (\1[M]\<^sub>\\<^sub>1)} | (\1 R\<^sub>n \2) \ {G\<^sub>n (\2[M]\<^sub>\\<^sub>1)} | _ \ {})" + let ?upper3 = "(\\ \ X. sf (\[M]\<^sub>\\<^sub>2) \ sf (\[M]\<^sub>\\<^sub>1)) \ ?upper3_cases" + + have finite_upper1: "finite (?upper1)" + and finite_upper2: "finite (?upper2)" + and finite_upper3: "finite (?upper3)" + using 2(1) sf_finite by auto (cases \, auto) + + have "\x y. card {x, y} \ 3" + and "\x y z. card {x, y, z} \ 3" + by (simp add: card_insert_if le_less)+ + hence card_leq_3: "card (?upper3_cases) \ 3" + by (cases \) (simp_all, fast) + + note card_subset_split_rule = le_trans[OF card_mono card_Un_le] + + have sf_in_X: "sf \ \ insert \ X" + using 2 by blast + + have "?lower1 \ ?upper1 \ ?lower2 \ ?upper2 \ ?lower3 \ ?upper3" + proof (cases \) + case (And_ltln \\<^sub>1 \\<^sub>2) + have *: "sf \\<^sub>1 \ X" "sf \\<^sub>2 \ X" + by (rule sf_subset_insert[OF sf_in_X, unfolded And_ltln]; simp)+ + + have "(sf (\[M]\<^sub>\\<^sub>2)) \ (\\ \ X. sf (\[M]\<^sub>\\<^sub>2))" + and "(sf (\[M]\<^sub>\\<^sub>1)) \ (\\ \ X. sf (\[M]\<^sub>\\<^sub>1))" + and "(sf (\[N]\<^sub>\\<^sub>1)) \ (\\ \ X. 
sf (\[N]\<^sub>\\<^sub>1))" + subgoal + using flatten_sigma_2_sf_subset[of _ M] * by (force simp: And_ltln) + subgoal + using flatten_pi_1_sf_subset[of _ M] * by (force simp: And_ltln) + subgoal + using flatten_sigma_1_sf_subset * by (force simp: And_ltln) + done + + thus ?thesis + by blast + next + case (Or_ltln \\<^sub>1 \\<^sub>2) + have *: "sf \\<^sub>1 \ X" "sf \\<^sub>2 \ X" + by (rule sf_subset_insert[OF sf_in_X, unfolded Or_ltln]; simp)+ + + have "(sf (\[M]\<^sub>\\<^sub>2)) \ (\\ \ X. sf (\[M]\<^sub>\\<^sub>2))" + and "(sf (\[M]\<^sub>\\<^sub>1)) \ (\\ \ X. sf (\[M]\<^sub>\\<^sub>1))" + and "(sf (\[N]\<^sub>\\<^sub>1)) \ (\\ \ X. sf (\[N]\<^sub>\\<^sub>1))" + subgoal + using flatten_sigma_2_sf_subset[of _ M] * by (force simp: Or_ltln) + subgoal + using flatten_pi_1_sf_subset[of _ M] * by (force simp: Or_ltln) + subgoal + using flatten_sigma_1_sf_subset * by (force simp: Or_ltln) + done + + thus ?thesis + by blast + next + case (Next_ltln \\<^sub>1) + have *: "sf \\<^sub>1 \ X" + by (rule sf_subset_insert[OF sf_in_X, unfolded Next_ltln]) simp_all + + have "(sf (\[M]\<^sub>\\<^sub>2)) \ (\\ \ X. sf (\[M]\<^sub>\\<^sub>2)) \ {\[M]\<^sub>\\<^sub>2}" + and "(sf (\[M]\<^sub>\\<^sub>1)) \ (\\ \ X. sf (\[M]\<^sub>\\<^sub>1)) \ {\[M]\<^sub>\\<^sub>1}" + and "(sf (\[N]\<^sub>\\<^sub>1)) \ (\\ \ X. sf (\[N]\<^sub>\\<^sub>1)) \ {\[N]\<^sub>\\<^sub>1}" + subgoal + using flatten_sigma_2_sf_subset[of _ M] * by (force simp: Next_ltln) + subgoal + using flatten_pi_1_sf_subset[of _ M] * by (force simp: Next_ltln) + subgoal + using flatten_sigma_1_sf_subset * by (force simp: Next_ltln) + done + + thus ?thesis + by blast + next + case (Until_ltln \\<^sub>1 \\<^sub>2) + have *: "sf \\<^sub>1 \ X" "sf \\<^sub>2 \ X" + by (rule sf_subset_insert[OF sf_in_X, unfolded Until_ltln]; simp)+ + + hence "(sf (\[M]\<^sub>\\<^sub>2)) \ (\\ \ X. sf (\[M]\<^sub>\\<^sub>2)) \ {\[M]\<^sub>\\<^sub>2}" + and "(sf (\[M]\<^sub>\\<^sub>1)) \ (\\ \ X. sf (\[M]\<^sub>\\<^sub>1)) \ {\[M]\<^sub>\\<^sub>1}" + and "(sf (\[N]\<^sub>\\<^sub>1)) \ (\\ \ X. sf (\[N]\<^sub>\\<^sub>1)) \ {\[N]\<^sub>\\<^sub>1}" + subgoal + using flatten_sigma_2_sf_subset[of _ M] * by (force simp: Until_ltln) + subgoal + using flatten_pi_1_sf_subset[of _ M] * by (force simp: Until_ltln) + subgoal + using flatten_sigma_1_sf_subset * by (force simp: Until_ltln) + done + + thus ?thesis + by blast + next + case (Release_ltln \\<^sub>1 \\<^sub>2) + have *: "sf \\<^sub>1 \ X" "sf \\<^sub>2 \ X" + by (rule sf_subset_insert[OF sf_in_X, unfolded Release_ltln]; simp)+ + + have "(sf (\[M]\<^sub>\\<^sub>2)) \ (\\ \ X. sf (\[M]\<^sub>\\<^sub>2)) \ {\[M]\<^sub>\\<^sub>2, G\<^sub>n \\<^sub>2[M]\<^sub>\\<^sub>1} \ sf (\\<^sub>2[M]\<^sub>\\<^sub>1)" + and "(sf (\[M]\<^sub>\\<^sub>1)) \ (\\ \ X. sf (\[M]\<^sub>\\<^sub>1)) \ {\[M]\<^sub>\\<^sub>1}" + and "(sf (\[N]\<^sub>\\<^sub>1)) \ (\\ \ X. sf (\[N]\<^sub>\\<^sub>1)) \ {\[N]\<^sub>\\<^sub>1}" + subgoal + using flatten_sigma_2_sf_subset[of _ M] * by (force simp: Release_ltln) + subgoal + using flatten_pi_1_sf_subset[of _ M] * by (force simp: Release_ltln) + subgoal + using flatten_sigma_1_sf_subset * by (force simp: Release_ltln) + done + + moreover + have "sf (\\<^sub>2[M]\<^sub>\\<^sub>1) \ (\\\X. sf \[M]\<^sub>\\<^sub>2 \ sf (\[M]\<^sub>\\<^sub>1)) \ {\[M]\<^sub>\\<^sub>1}" + using \(sf (\[M]\<^sub>\\<^sub>1)) \ (\\ \ X. 
sf (\[M]\<^sub>\\<^sub>1)) \ {\[M]\<^sub>\\<^sub>1}\ + by (auto simp: Release_ltln) + + ultimately + show ?thesis + by (simp add: Release_ltln) blast + next + case (WeakUntil_ltln \\<^sub>1 \\<^sub>2) + have *: "sf \\<^sub>1 \ X" "sf \\<^sub>2 \ X" + by (rule sf_subset_insert[OF sf_in_X, unfolded WeakUntil_ltln]; simp)+ + + have "(sf (\[M]\<^sub>\\<^sub>2)) \ (\\ \ X. sf (\[M]\<^sub>\\<^sub>2)) \ {\[M]\<^sub>\\<^sub>2, G\<^sub>n \\<^sub>1[M]\<^sub>\\<^sub>1} \ sf (\\<^sub>1[M]\<^sub>\\<^sub>1)" + and "(sf (\[M]\<^sub>\\<^sub>1)) \ (\\ \ X. sf (\[M]\<^sub>\\<^sub>1)) \ {\[M]\<^sub>\\<^sub>1}" + and "(sf (\[N]\<^sub>\\<^sub>1)) \ (\\ \ X. sf (\[N]\<^sub>\\<^sub>1)) \ {\[N]\<^sub>\\<^sub>1}" + subgoal + using flatten_sigma_2_sf_subset[of _ M] * by (force simp: WeakUntil_ltln) + subgoal + using flatten_pi_1_sf_subset[of _ M] * by (force simp: WeakUntil_ltln) + subgoal + using flatten_sigma_1_sf_subset * by (force simp: WeakUntil_ltln) + done + + moreover + have "sf (\\<^sub>1[M]\<^sub>\\<^sub>1) \ (\\\X. sf \[M]\<^sub>\\<^sub>2 \ sf (\[M]\<^sub>\\<^sub>1)) \ {\[M]\<^sub>\\<^sub>1}" + using \(sf (\[M]\<^sub>\\<^sub>1)) \ (\\ \ X. sf (\[M]\<^sub>\\<^sub>1)) \ {\[M]\<^sub>\\<^sub>1}\ + by (auto simp: WeakUntil_ltln) + + ultimately + show ?thesis + by (simp add: WeakUntil_ltln) blast + next + case (StrongRelease_ltln \\<^sub>1 \\<^sub>2) + have *: "sf \\<^sub>1 \ X" "sf \\<^sub>2 \ X" + by (rule sf_subset_insert[OF sf_in_X, unfolded StrongRelease_ltln]; simp)+ + + hence "(sf (\[M]\<^sub>\\<^sub>2)) \ (\\ \ X. sf (\[M]\<^sub>\\<^sub>2)) \ {\[M]\<^sub>\\<^sub>2}" + and "(sf (\[M]\<^sub>\\<^sub>1)) \ (\\ \ X. sf (\[M]\<^sub>\\<^sub>1)) \ {\[M]\<^sub>\\<^sub>1}" + and "(sf (\[N]\<^sub>\\<^sub>1)) \ (\\ \ X. sf (\[N]\<^sub>\\<^sub>1)) \ {\[N]\<^sub>\\<^sub>1}" + subgoal + using flatten_sigma_2_sf_subset[of _ M] * by (force simp: StrongRelease_ltln) + subgoal + using flatten_pi_1_sf_subset[of _ M] * by (force simp: StrongRelease_ltln) + subgoal + using flatten_sigma_1_sf_subset * by (force simp: StrongRelease_ltln) + done + + thus ?thesis + by blast + qed auto + + hence "card ?lower1 \ card (\\ \ X. sf (\[N]\<^sub>\\<^sub>1)) + 1" + and "card ?lower2 \ card (\\ \ X. sf (\[M]\<^sub>\\<^sub>1)) + 1" + and "card ?lower3 \ card (\\ \ X. sf (\[M]\<^sub>\\<^sub>2) \ sf (\[M]\<^sub>\\<^sub>1)) + 3" + using card_subset_split_rule[OF finite_upper1, of ?lower1] + using card_subset_split_rule[OF finite_upper2, of ?lower2] + using card_subset_split_rule[OF finite_upper3, of ?lower3] + using card_leq_3 by simp+ + + moreover + have "card (insert \ X) = card X + 1" + using \\ \ X\ \finite X\ by simp + ultimately + have ?case + by linarith + } + moreover + have "\ \ X \ ?case" + using 2 by (simp add: insert_absorb) + ultimately + show ?case + by meson +qed simp + +theorem flatten_card_sf: + "card (\\ \ sf \. sf (\[M]\<^sub>\\<^sub>1)) \ card (sf \)" (is "?t1") + "card (\\ \ sf \. sf (\[M]\<^sub>\\<^sub>1)) \ card (sf \)" (is "?t2") + "card (sf (\[M]\<^sub>\\<^sub>2) \ sf (\[M]\<^sub>\\<^sub>1)) \ 3 * card (sf \)" (is "?t3") +proof - + have "card (\\ \ sf \. sf \[M]\<^sub>\\<^sub>2 \ sf (\[M]\<^sub>\\<^sub>1)) \ 3 * card (sf \)" + using flatten_card_sf_induct[OF sf_finite sf_sf_subset] by auto + moreover + have "card (sf \[M]\<^sub>\\<^sub>2 \ sf (\[M]\<^sub>\\<^sub>1)) \ card (\\ \ sf \. 
sf \[M]\<^sub>\\<^sub>2 \ sf (\[M]\<^sub>\\<^sub>1))" + using card_mono[OF _ sf_set1] sf_finite by blast + ultimately + show ?t1 ?t2 ?t3 + using flatten_card_sf_induct[OF sf_finite sf_sf_subset] by auto +qed + +corollary flatten_sigma_2_card_sf: + "card (sf (\[M]\<^sub>\\<^sub>2)) \ 3 * (card (sf \))" + by (metis sf_finite order.trans[OF _ flatten_card_sf(3), of "card (sf (\[M]\<^sub>\\<^sub>2))", OF card_mono] finite_UnI Un_upper1) + +end \ No newline at end of file diff --git a/thys/LTL_Normal_Form/ROOT b/thys/LTL_Normal_Form/ROOT new file mode 100644 --- /dev/null +++ b/thys/LTL_Normal_Form/ROOT @@ -0,0 +1,15 @@ +chapter AFP + +session LTL_Normal_Form (AFP) = "LTL" + + options [timeout = 600] + sessions + LTL_Master_Theorem + theories + "Normal_Form" + "Normal_Form_Complexity" + "Normal_Form_Code_Export" + document_files + "root.tex" + "root.bib" + export_files (in ".") [1] + "LTL_Normal_Form.Normal_Form_Code_Export:**" diff --git a/thys/LTL_Normal_Form/document/root.bib b/thys/LTL_Normal_Form/document/root.bib new file mode 100644 --- /dev/null +++ b/thys/LTL_Normal_Form/document/root.bib @@ -0,0 +1,91 @@ +@inproceedings{DBLP:conf/lop/LichtensteinPZ85, + author = {Orna Lichtenstein and + Amir Pnueli and + Lenore D. Zuck}, + editor = {Rohit Parikh}, + title = {The Glory of the Past}, + booktitle = {Logics of Programs, Conference, Brooklyn College, New York, NY, USA, + June 17-19, 1985, Proceedings}, + series = {Lecture Notes in Computer Science}, + volume = {193}, + pages = {196--218}, + publisher = {Springer}, + year = {1985}, + _url = {https://doi.org/10.1007/3-540-15648-8\_16}, + doi = {10.1007/3-540-15648-8_16}, + timestamp = {Tue, 14 May 2019 10:00:52 +0200}, + biburl = {https://dblp.org/rec/bib/conf/lop/LichtensteinPZ85}, + bibsource = {dblp computer science bibliography, https://dblp.org} +} + +@phdthesis{XXXX:phd/Zuck86, + author = {Lenore D. Zuck}, + title = {{Past Temporal Logic}}, + school = {The Weizmann Institute of Science, Israel}, + year = {1986}, + month = aug +} + +@inproceedings{DBLP:conf/icalp/ChangMP92, + author = {Edward Y. 
Chang and + Zohar Manna and + Amir Pnueli}, + editor = {Werner Kuich}, + title = {Characterization of Temporal Property Classes}, + booktitle = {Automata, Languages and Programming, 19th International Colloquium, + ICALP92, Vienna, Austria, July 13-17, 1992, Proceedings}, + series = {Lecture Notes in Computer Science}, + volume = {623}, + pages = {474--486}, + publisher = {Springer}, + year = {1992}, + _url = {https://doi.org/10.1007/3-540-55719-9\_97}, + doi = {10.1007/3-540-55719-9_97}, + timestamp = {Tue, 14 May 2019 10:00:44 +0200}, + biburl = {https://dblp.org/rec/bib/conf/icalp/ChangMP92}, + bibsource = {dblp computer science bibliography, https://dblp.org} +} + +@inproceedings{DBLP:conf/mfcs/CernaP03, + author = {Ivana {\v{C}}ern{\'{a}} and + Radek Pel{\'{a}}nek}, + editor = {Branislav Rovan and + Peter Vojt{\'{a}}s}, + title = {Relating Hierarchy of Temporal Properties to Model Checking}, + booktitle = {Mathematical Foundations of Computer Science 2003, 28th International + Symposium, {MFCS} 2003, Bratislava, Slovakia, August 25-29, 2003, + Proceedings}, + series = {Lecture Notes in Computer Science}, + volume = {2747}, + pages = {318--327}, + publisher = {Springer}, + year = {2003}, + _url = {https://doi.org/10.1007/978-3-540-45138-9\_26}, + doi = {10.1007/978-3-540-45138-9_26}, + timestamp = {Tue, 14 May 2019 10:00:37 +0200}, + biburl = {https://dblp.org/rec/bib/conf/mfcs/CernaP03}, + bibsource = {dblp computer science bibliography, https://dblp.org} +} + +@inproceedings{XXXX:conf/lics/SickertE20, + author = {Salomon Sickert and Javier Esparza}, + title = {An Efficient Normalisation Procedure for Linear Temporal Logic and Very Weak Alternating Automata}, + booktitle = {Proceedings of the 35th Annual {ACM/IEEE} Symposium on Logic in Computer + Science, {LICS} 2020, Saarbr\"ucken, Germany, July 8-11, 2020}, + publisher = {{ACM}}, + year = {2020}, + _url = {https://doi.org/10.1145/3373718.3394743}, + doi = {10.1145/3373718.3394743} +} + +@article{DBLP:journals/corr/abs-2005-00472, + author = {Salomon Sickert and + Javier Esparza}, + title = {An Efficient Normalisation Procedure for Linear Temporal Logic and Very Weak Alternating Automata}, + journal = {CoRR}, + volume = {abs/2005.00472}, + year = {2020}, + _url = {http://arxiv.org/abs/2005.00472}, + archivePrefix = {arXiv}, + eprint = {2005.00472} +} diff --git a/thys/LTL_Normal_Form/document/root.tex b/thys/LTL_Normal_Form/document/root.tex new file mode 100644 --- /dev/null +++ b/thys/LTL_Normal_Form/document/root.tex @@ -0,0 +1,108 @@ +\documentclass[11pt,a4paper]{article} + +\usepackage[english]{babel} +\usepackage[utf8]{inputenc} + +\usepackage{mathtools,amsthm,amssymb} +\usepackage{isabelle,isabellesym} + +\usepackage[T1]{fontenc} + +% this should be the last package used +\usepackage{pdfsetup} + +% urls in roman style, theory text in math-similar italics +\urlstyle{rm} +\isabellestyle{it} + +% for uniform font size +\renewcommand{\isastyle}{\isastyleminor} + +% LTL Operators + +\newcommand{\true}{{\ensuremath{\mathbf{t\hspace{-0.5pt}t}}}} +\newcommand{\false}{{\ensuremath{\mathbf{ff}}}} +\newcommand{\F}{{\ensuremath{\mathbf{F}}}} +\newcommand{\G}{{\ensuremath{\mathbf{G}}}} +\newcommand{\X}{{\ensuremath{\mathbf{X}}}} +\newcommand{\U}{{\ensuremath{\mathbf{U}}}} +\newcommand{\W}{{\ensuremath{\mathbf{W}}}} +\newcommand{\M}{{\ensuremath{\mathbf{M}}}} +\newcommand{\R}{{\ensuremath{\mathbf{R}}}} + +% LTL Subformulas + +\newcommand{\subf}{\textit{sf}\,} + +\newcommand{\sfmu}{{\ensuremath{\mathbb{\mu}}}} 
+\newcommand{\sfnu}{{\ensuremath{\mathbb{\nu}}}} +\newcommand{\setmu}{\ensuremath{M}} +\newcommand{\setnu}{\ensuremath{N}} +\newcommand{\setF}{\ensuremath{\mathcal{F}}} +\newcommand{\setG}{\ensuremath{\mathcal{G}}} +\newcommand{\setFG}{\ensuremath{\mathcal{F\hspace{-0.1em}G}}} +\newcommand{\setGF}{\ensuremath{\mathcal{G\hspace{-0.1em}F}\!}} + +% LTL Functions + +\newcommand{\evalnu}[2]{{#1[#2]^\Pi_1}} +\newcommand{\evalmu}[2]{{#1[#2]^\Sigma_1}} +\newcommand{\flatten}[2]{{#1[#2]^\Sigma_2}} +\newcommand{\flattentwo}[2]{{#1[#2]^\Pi_2}} + +\newtheorem{theorem}{Theorem} +\newtheorem{definition}[theorem]{Definition} +\newtheorem{lemma}[theorem]{Lemma} +\newtheorem{corollary}[theorem]{Corollary} +\newtheorem{proposition}[theorem]{Proposition} +\newtheorem{example}[theorem]{Example} +\newtheorem{remark}[theorem]{Remark} + +\begin{document} + +\title{An Efficient Normalisation Procedure for Linear Temporal Logic: Isabelle/HOL Formalisation} +\author{Salomon Sickert} + +\maketitle + +\begin{abstract} +In the mid 80s, Lichtenstein, Pnueli, and Zuck proved a classical theorem stating that every formula of Past LTL (the extension of LTL with past operators) is equivalent to a formula of the form $\bigwedge_{i=1}^n \G\F \varphi_i \vee \F\G \psi_i $, where $\varphi_i$ and $\psi_i$ contain only past operators \cite{DBLP:conf/lop/LichtensteinPZ85,XXXX:phd/Zuck86}. Some years later, Chang, Manna, and Pnueli built on this result to derive a similar normal form for LTL \cite{DBLP:conf/icalp/ChangMP92}. Both normalisation procedures have a non-elementary worst-case blow-up, and follow an involved path from formulas to counter-free automata to star-free regular expressions and back to formulas. We improve on both points. We present an executable formalisation of a direct and purely syntactic normalisation procedure for LTL yielding a normal form, comparable to the one by Chang, Manna, and Pnueli, that has only a single exponential blow-up. +\end{abstract} + +\tableofcontents + +\section{Overview} + +This document contains the formalisation of the central results appearing in \cite[Sections 4-6]{XXXX:conf/lics/SickertE20}. We refer the interested reader to \cite{XXXX:conf/lics/SickertE20} or to the extended version \cite{DBLP:journals/corr/abs-2005-00472} for an introduction to the topic, related work, intuitive explanations of the proofs, and an application of the normalisation procedure, namely, a translation from LTL to deterministic automata. + +The central result of this document is the following theorem: + +\begin{theorem} +Let $\varphi$ be an LTL formula and let $\Delta_2$, $\Sigma_1$, $\Sigma_2$, and $\Pi_1$ be the classes of LTL formulas from Definition \ref{def:future_hierarchy}. Then $\varphi$ is equivalent to the following formula from the class $\Delta_2$: +\[ +\bigvee_{\substack{\setmu \subseteq \sfmu(\varphi)\\\setnu \subseteq \sfnu(\varphi)}} \left( \flatten{\varphi}{\setmu} \wedge \bigwedge_{\psi \in \setmu} \G\F(\evalmu{\psi}{\setnu}) \wedge \bigwedge_{\psi \in \setnu} \F\G(\evalnu{\psi}{\setmu}) \right) +\] +\noindent where $\flatten{\psi}{\setmu}$, $\evalmu{\psi}{\setnu}$, and $\evalnu{\psi}{\setmu}$ are functions mapping $\psi$ to a formula from $\Sigma_2$, $\Sigma_1$, and $\Pi_1$, respectively. 
+\end{theorem} + +\begin{definition}[Adapted from \cite{DBLP:conf/mfcs/CernaP03}] +\label{def:future_hierarchy} +We define the following classes of LTL formulas: +\begin{itemize} + \item The class $\Sigma_0 = \Pi_0 = \Delta_0$ is the least set containing all atomic propositions and their negations, and is closed under the application of conjunction and disjunction. + \item The class $\Sigma_{i+1}$ is the least set containing $\Pi_i$ and is closed under the application of conjunction, disjunction, and the $\X$, $\U$, and $\M$ operators. + \item The class $\Pi_{i+1}$ is the least set containing $\Sigma_i$ and is closed under the application of conjunction, disjunction, and the $\X$, $\R$, and $\W$ operators. + \item The class $\Delta_{i+1}$ is the least set containing $\Sigma_{i+1}$ and $\Pi_{i+1}$ and is closed under the application of conjunction and disjunction. +\end{itemize} +\end{definition} + +% sane default for proof documents +\parindent 0pt\parskip 0.5ex + +% generated text of all theories +\input{session} + +\bibliographystyle{plainurl} +\bibliography{root} + +\end{document} diff --git a/thys/Lambert_W/Lambert_W.thy b/thys/Lambert_W/Lambert_W.thy new file mode 100644 --- /dev/null +++ b/thys/Lambert_W/Lambert_W.thy @@ -0,0 +1,1405 @@ +(* + File: Lambert_W.thy + Author: Manuel Eberl, TU München + + Definition and basic properties of the two real-valued branches of the Lambert W function, +*) +section \The Lambert $W$ Function on the reals\ +theory Lambert_W +imports + Complex_Main + "HOL-Library.FuncSet" + "HOL-Real_Asymp.Real_Asymp" +begin + +(*<*) +text \Some lemmas about asymptotic equivalence:\ + +lemma asymp_equiv_sandwich': + fixes f :: "'a \ real" + assumes "\c'. c' \ {l<.. eventually (\x. f x \ c' * g x) F" + assumes "\c'. c' \ {c<.. eventually (\x. f x \ c' * g x) F" + assumes "l < c" "c < u" and [simp]: "c \ 0" + shows "f \[F] (\x. c * g x)" +proof - + have "(\x. f x - c * g x) \ o[F](g)" + proof (rule landau_o.smallI) + fix e :: real assume e: "e > 0" + define C1 where "C1 = min (c + e) ((c + u) / 2)" + have C1: "C1 \ {c<.. e" + using e assms by (auto simp: C1_def min_def) + define C2 where "C2 = max (c - e) ((c + l) / 2)" + have C2: "C2 \ {l<.. e" + using e assms by (auto simp: C2_def max_def field_simps) + + show "eventually (\x. norm (f x - c * g x) \ e * norm (g x)) F" + using assms(2)[OF C1(1)] assms(1)[OF C2(1)] + proof eventually_elim + case (elim x) + show ?case + proof (cases "f x \ c * g x") + case True + hence "norm (f x - c * g x) = f x - c * g x" + by simp + also have "\ \ (C1 - c) * g x" + using elim by (simp add: algebra_simps) + also have "\ \ (C1 - c) * norm (g x)" + using C1 by (intro mult_left_mono) auto + also have "\ \ e * norm (g x)" + using C1 elim by (intro mult_right_mono) auto + finally show ?thesis using elim by simp + next + case False + hence "norm (f x - c * g x) = c * g x - f x" + by simp + also have "\ \ (c - C2) * g x" + using elim by (simp add: algebra_simps) + also have "\ \ (c - C2) * norm (g x)" + using C2 by (intro mult_left_mono) auto + also have "\ \ e * norm (g x)" + using C2 elim by (intro mult_right_mono) auto + finally show ?thesis using elim by simp + qed + qed + qed + also have "g \ O[F](\x. c * g x)" + by simp + finally show ?thesis + unfolding asymp_equiv_altdef by blast +qed + +lemma asymp_equiv_sandwich'': + fixes f :: "'a \ real" + assumes "\c'. c' \ {l<..<1} \ eventually (\x. f x \ c' * g x) F" + assumes "\c'. c' \ {1<.. eventually (\x. 
f x \ c' * g x) F" + assumes "l < 1" "1 < u" + shows "f \[F] (g)" + using asymp_equiv_sandwich'[of l 1 g f F u] assms by simp +(*>*) + +subsection \Properties of the function $x\mapsto x e^{x}$\ + +lemma exp_times_self_gt: + assumes "x \ -1" + shows "x * exp x > -exp (-1::real)" +proof - + define f where "f = (\x::real. x * exp x)" + define f' where "f' = (\x::real. (x + 1) * exp x)" + have "(f has_field_derivative f' x) (at x)" for x + by (auto simp: f_def f'_def intro!: derivative_eq_intros simp: algebra_simps) + define l r where "l = min x (-1)" and "r = max x (-1)" + + have "\z. z > l \ z < r \ f r - f l = (r - l) * f' z" + unfolding f_def f'_def l_def r_def using assms + by (intro MVT2) (auto intro!: derivative_eq_intros simp: algebra_simps) + then obtain z where z: "z \ {l<.. -1") (auto simp: l_def r_def max_def min_def algebra_simps) + moreover have "sgn ((x + 1) * f' z) = 1" + using z assms + by (cases x "(-1) :: real" rule: linorder_cases; cases z "(-1) :: real" rule: linorder_cases) + (auto simp: f'_def sgn_mult l_def r_def) + hence "(x + 1) * f' z > 0" using sgn_greater by fastforce + ultimately show ?thesis by (simp add: f_def) +qed + +lemma exp_times_self_ge: "x * exp x \ -exp (-1::real)" + using exp_times_self_gt[of x] by (cases "x = -1") auto + +lemma exp_times_self_strict_mono: + assumes "x \ -1" "x < (y :: real)" + shows "x * exp x < y * exp y" + using assms(2) +proof (rule DERIV_pos_imp_increasing_open) + fix t assume t: "x < t" "t < y" + have "((\x. x * exp x) has_real_derivative (t + 1) * exp t) (at t)" + by (auto intro!: derivative_eq_intros simp: algebra_simps) + moreover have "(t + 1) * exp t > 0" + using t assms by (intro mult_pos_pos) auto + ultimately show "\y. ((\a. a * exp a) has_real_derivative y) (at t) \ 0 < y" by blast +qed (auto intro!: continuous_intros) + +lemma exp_times_self_strict_antimono: + assumes "y \ -1" "x < (y :: real)" + shows "x * exp x > y * exp y" +proof - + have "-x * exp x < -y * exp y" + using assms(2) + proof (rule DERIV_pos_imp_increasing_open) + fix t assume t: "x < t" "t < y" + have "((\x. -x * exp x) has_real_derivative (-(t + 1)) * exp t) (at t)" + by (auto intro!: derivative_eq_intros simp: algebra_simps) + moreover have "(-(t + 1)) * exp t > 0" + using t assms by (intro mult_pos_pos) auto + ultimately show "\y. ((\a. -a * exp a) has_real_derivative y) (at t) \ 0 < y" by blast + qed (auto intro!: continuous_intros) + thus ?thesis by simp +qed + +lemma exp_times_self_mono: + assumes "x \ -1" "x \ (y :: real)" + shows "x * exp x \ y * exp y" + using exp_times_self_strict_mono[of x y] assms by (cases "x = y") auto + +lemma exp_times_self_antimono: + assumes "y \ -1" "x \ (y :: real)" + shows "x * exp x \ y * exp y" + using exp_times_self_strict_antimono[of y x] assms by (cases "x = y") auto + +lemma exp_times_self_inj: "inj_on (\x::real. x * exp x) {-1..}" +proof + fix x y :: real + assume "x \ {-1..}" "y \ {-1..}" "x * exp x = y * exp y" + thus "x = y" + using exp_times_self_strict_mono[of x y] exp_times_self_strict_mono[of y x] + by (cases x y rule: linorder_cases) auto +qed + +lemma exp_times_self_inj': "inj_on (\x::real. x * exp x) {..-1}" +proof + fix x y :: real + assume "x \ {..-1}" "y \ {..-1}" "x * exp x = y * exp y" + thus "x = y" + using exp_times_self_strict_antimono[of x y] exp_times_self_strict_antimono[of y x] + by (cases x y rule: linorder_cases) auto +qed + + +subsection \Definition\ + +text \ + The following are the two branches $W_0(x)$ and $W_{-1}(x)$ of the Lambert $W$ function on the + real numbers. 
These are the inverse functions of the function $x\mapsto xe^x$, i.\,e.\ + we have $W(x)e^{W(x)} = x$ for both branches wherever they are defined. The two branches + meet at the point $x = -\frac{1}{e}$. + + $W_0(x)$ is the principal branch, whose domain is $[-\frac{1}{e}; \infty)$ and whose + range is $[-1; \infty)$. + $W_{-1}(x)$ has the domain $[-\frac{1}{e}; 0)$ and the range $(-\infty;-1]$. + Figure~\ref{fig:lambertw} shows plots of these two branches for illustration. +\ + +text \ +\definecolor{myblue}{HTML}{3869b1} +\definecolor{myred}{HTML}{cc2428} +\begin{figure} +\begin{center} +\begin{tikzpicture} + \begin{axis}[ + xmin=-0.5, xmax=6.6, ymin=-3.8, ymax=1.5, axis lines=middle, ytick = {-3, -2, -1, 1}, xtick = {1,...,10}, yticklabel pos = right, + yticklabel style={right,xshift=1mm}, + extra x tick style={tick label style={above,yshift=1mm}}, + extra x ticks={-0.367879441}, + extra x tick labels={$-\frac{1}{e}$}, + width=\textwidth, height=0.8\textwidth, + xlabel={$x$}, tick style={thin,black} + ] + \addplot [color=black, line width=0.5pt, densely dashed, mark=none,domain=-5:0,samples=200] ({-exp(-1)}, {x}); + \addplot [color=myblue, line width=1pt, mark=none,domain=-1:1.5,samples=200] ({x*exp(x)}, {x}); + \addplot [color=myred, line width=1pt, mark=none,domain=-5:-1,samples=200] ({x*exp(x)}, {x}); + \end{axis} +\end{tikzpicture} +\end{center} +\caption{The two real branches of the Lambert $W$ function: $W_0$ (blue) and $W_{-1}$ (red).} +\label{fig:lambertw} +\end{figure} +\ + +definition Lambert_W :: "real \ real" where + "Lambert_W x = (if x < -exp(-1) then -1 else (THE w. w \ -1 \ w * exp w = x))" + +definition Lambert_W' :: "real \ real" where + "Lambert_W' x = (if x \ {-exp(-1)..<0} then (THE w. w \ -1 \ w * exp w = x) else -1)" + +lemma Lambert_W_ex1: + assumes "(x::real) \ -exp (-1)" + shows "\!w. w \ -1 \ w * exp w = x" +proof (rule ex_ex1I) + have "filterlim (\w::real. w * exp w) at_top at_top" + by real_asymp + hence "eventually (\w. w * exp w \ x) at_top" + by (auto simp: filterlim_at_top) + hence "eventually (\w. w \ 0 \ w * exp w \ x) at_top" + by (intro eventually_conj eventually_ge_at_top) + then obtain w' where w': "w' * exp w' \ x" "w' \ 0" + by (auto simp: eventually_at_top_linorder) + from w' assms have "\w. -1 \ w \ w \ w' \ w * exp w = x" + by (intro IVT' continuous_intros) auto + thus "\w. w \ -1 \ w * exp w = x" by blast +next + fix w w' :: real + assume ww': "w \ -1 \ w * exp w = x" "w' \ -1 \ w' * exp w' = x" + hence "w * exp w = w' * exp w'" by simp + thus "w = w'" + using exp_times_self_strict_mono[of w w'] exp_times_self_strict_mono[of w' w] ww' + by (cases w w' rule: linorder_cases) auto +qed + +lemma Lambert_W'_ex1: + assumes "(x::real) \ {-exp (-1)..<0}" + shows "\!w. w \ -1 \ w * exp w = x" +proof (rule ex_ex1I) + have "eventually (\w. x \ w * exp w) at_bot" + using assms by real_asymp + hence "eventually (\w. w \ -1 \ w * exp w \ x) at_bot" + by (intro eventually_conj eventually_le_at_bot) + then obtain w' where w': "w' * exp w' \ x" "w' \ -1" + by (auto simp: eventually_at_bot_linorder) + + from w' assms have "\w. w' \ w \ w \ -1 \ w * exp w = x" + by (intro IVT2' continuous_intros) auto + thus "\w. 
w \ -1 \ w * exp w = x" by blast +next + fix w w' :: real + assume ww': "w \ -1 \ w * exp w = x" "w' \ -1 \ w' * exp w' = x" + hence "w * exp w = w' * exp w'" by simp + thus "w = w'" + using exp_times_self_strict_antimono[of w w'] exp_times_self_strict_antimono[of w' w] ww' + by (cases w w' rule: linorder_cases) auto +qed + +lemma Lambert_W_times_exp_self: + assumes "x \ -exp (-1)" + shows "Lambert_W x * exp (Lambert_W x) = x" + using theI'[OF Lambert_W_ex1[OF assms]] assms by (auto simp: Lambert_W_def) + +lemma Lambert_W_times_exp_self': + assumes "x \ -exp (-1)" + shows "exp (Lambert_W x) * Lambert_W x = x" + using Lambert_W_times_exp_self[of x] assms by (simp add: mult_ac) + +lemma Lambert_W'_times_exp_self: + assumes "x \ {-exp (-1)..<0}" + shows "Lambert_W' x * exp (Lambert_W' x) = x" + using theI'[OF Lambert_W'_ex1[OF assms]] assms by (auto simp: Lambert_W'_def) + +lemma Lambert_W'_times_exp_self': + assumes "x \ {-exp (-1)..<0}" + shows "exp (Lambert_W' x) * Lambert_W' x = x" + using Lambert_W'_times_exp_self[of x] assms by (simp add: mult_ac) + +lemma Lambert_W_ge: "Lambert_W x \ -1" + using theI'[OF Lambert_W_ex1[of x]] by (auto simp: Lambert_W_def) + +lemma Lambert_W'_le: "Lambert_W' x \ -1" + using theI'[OF Lambert_W'_ex1[of x]] by (auto simp: Lambert_W'_def) + +lemma Lambert_W_eqI: + assumes "w \ -1" "w * exp w = x" + shows "Lambert_W x = w" +proof - + from assms exp_times_self_ge[of w] have "x \ -exp (-1)" + by (cases "x \ -exp (-1)") auto + from Lambert_W_ex1[OF this] Lambert_W_times_exp_self[OF this] Lambert_W_ge[of x] assms + show ?thesis by metis + qed + +lemma Lambert_W'_eqI: + assumes "w \ -1" "w * exp w = x" + shows "Lambert_W' x = w" +proof - + from assms exp_times_self_ge[of w] have "x \ -exp (-1)" + by (cases "x \ -exp (-1)") auto + moreover from assms have "w * exp w < 0" + by (intro mult_neg_pos) auto + ultimately have "x \ {-exp (-1)..<0}" + using assms by auto + + from Lambert_W'_ex1[OF this(1)] Lambert_W'_times_exp_self[OF this(1)] Lambert_W'_le assms + show ?thesis by metis + qed + +text \ + $W_0(x)$ and $W_{-1}(x)$ together fully cover all solutions of $we^w = x$: +\ +lemma exp_times_self_eqD: + assumes "w * exp w = x" + shows "x \ -exp (-1)" and "w = Lambert_W x \ x < 0 \ w = Lambert_W' x" +proof - + from assms show "x \ -exp (-1)" + using exp_times_self_ge[of w] by auto + show "w = Lambert_W x \ x < 0 \ w = Lambert_W' x" + proof (cases "w \ -1") + case True + hence "Lambert_W x = w" + using assms by (intro Lambert_W_eqI) auto + thus ?thesis by auto + next + case False + from False have "w * exp w < 0" + by (intro mult_neg_pos) auto + from False have "Lambert_W' x = w" + using assms by (intro Lambert_W'_eqI) auto + thus ?thesis using assms \w * exp w < 0\ by auto + qed +qed + +theorem exp_times_self_eq_iff: + "w * exp w = x \ x \ -exp (-1) \ (w = Lambert_W x \ x < 0 \ w = Lambert_W' x)" + using exp_times_self_eqD[of w x] + by (auto simp: Lambert_W_times_exp_self Lambert_W'_times_exp_self) + +lemma Lambert_W_exp_times_self [simp]: "x \ -1 \ Lambert_W (x * exp x) = x" + by (rule Lambert_W_eqI) auto + +lemma Lambert_W_exp_times_self' [simp]: "x \ -1 \ Lambert_W (exp x * x) = x" + by (rule Lambert_W_eqI) auto + +lemma Lambert_W'_exp_times_self [simp]: "x \ -1 \ Lambert_W' (x * exp x) = x" + by (rule Lambert_W'_eqI) auto + +lemma Lambert_W'_exp_times_self' [simp]: "x \ -1 \ Lambert_W' (exp x * x) = x" + by (rule Lambert_W'_eqI) auto + +lemma Lambert_W_times_ln_self: + assumes "x \ exp (-1)" + shows "Lambert_W (x * ln x) = ln x" +proof - + have "0 < exp (-1 :: 
real)" + by simp + also note \\ \ x\ + finally have "x > 0" . + from assms have "ln (exp (-1)) \ ln x" + using \x > 0\ by (subst ln_le_cancel_iff) auto + hence "Lambert_W (exp (ln x) * ln x) = ln x" + by (subst Lambert_W_exp_times_self') auto + thus ?thesis using \x > 0\ by simp +qed + +lemma Lambert_W_times_ln_self': + assumes "x \ exp (-1)" + shows "Lambert_W (ln x * x) = ln x" + using Lambert_W_times_ln_self[OF assms] by (simp add: mult.commute) + +lemma Lambert_W_eq_minus_exp_minus1 [simp]: "Lambert_W (-exp (-1)) = -1" + by (rule Lambert_W_eqI) auto + +lemma Lambert_W'_eq_minus_exp_minus1 [simp]: "Lambert_W' (-exp (-1)) = -1" + by (rule Lambert_W'_eqI) auto + +lemma Lambert_W_0 [simp]: "Lambert_W 0 = 0" + by (rule Lambert_W_eqI) auto + + +subsection \Monotonicity properties\ + +lemma Lambert_W_strict_mono: + assumes "x \ -exp(-1)" "x < y" + shows "Lambert_W x < Lambert_W y" +proof (rule ccontr) + assume "\(Lambert_W x < Lambert_W y)" + hence "Lambert_W x * exp (Lambert_W x) \ Lambert_W y * exp (Lambert_W y)" + by (intro exp_times_self_mono) (auto simp: Lambert_W_ge) + hence "x \ y" + using assms by (simp add: Lambert_W_times_exp_self) + with assms show False by simp +qed + +lemma Lambert_W_mono: + assumes "x \ -exp(-1)" "x \ y" + shows "Lambert_W x \ Lambert_W y" + using Lambert_W_strict_mono[of x y] assms by (cases "x = y") auto + +lemma Lambert_W_eq_iff [simp]: + "x \ -exp(-1) \ y \ -exp(-1) \ Lambert_W x = Lambert_W y \ x = y" + using Lambert_W_strict_mono[of x y] Lambert_W_strict_mono[of y x] + by (cases x y rule: linorder_cases) auto + +lemma Lambert_W_le_iff [simp]: + "x \ -exp(-1) \ y \ -exp(-1) \ Lambert_W x \ Lambert_W y \ x \ y" + using Lambert_W_strict_mono[of x y] Lambert_W_strict_mono[of y x] + by (cases x y rule: linorder_cases) auto + +lemma Lambert_W_less_iff [simp]: + "x \ -exp(-1) \ y \ -exp(-1) \ Lambert_W x < Lambert_W y \ x < y" + using Lambert_W_strict_mono[of x y] Lambert_W_strict_mono[of y x] + by (cases x y rule: linorder_cases) auto + +lemma Lambert_W_le_minus_one: + assumes "x \ -exp(-1)" + shows "Lambert_W x = -1" +proof (cases "x = -exp(-1)") + case False + thus ?thesis using assms + by (auto simp: Lambert_W_def) +qed auto + +lemma Lambert_W_pos_iff [simp]: "Lambert_W x > 0 \ x > 0" +proof (cases "x \ -exp (-1)") + case True + thus ?thesis + using Lambert_W_less_iff[of 0 x] by (simp del: Lambert_W_less_iff) +next + case False + hence "x < - exp(-1)" by auto + also have "\ \ 0" by simp + finally show ?thesis using False + by (auto simp: Lambert_W_le_minus_one) +qed + +lemma Lambert_W_eq_0_iff [simp]: "Lambert_W x = 0 \ x = 0" + using Lambert_W_eq_iff[of x 0] + by (cases "x \ -exp (-1)") (auto simp: Lambert_W_le_minus_one simp del: Lambert_W_eq_iff) + +lemma Lambert_W_nonneg_iff [simp]: "Lambert_W x \ 0 \ x \ 0" + using Lambert_W_pos_iff[of x] + by (cases "x = 0") (auto simp del: Lambert_W_pos_iff) + +lemma Lambert_W_neg_iff [simp]: "Lambert_W x < 0 \ x < 0" + using Lambert_W_nonneg_iff[of x] by (auto simp del: Lambert_W_nonneg_iff) + +lemma Lambert_W_nonpos_iff [simp]: "Lambert_W x \ 0 \ x \ 0" + using Lambert_W_pos_iff[of x] by (auto simp del: Lambert_W_pos_iff) + +lemma Lambert_W_geI: + assumes "y * exp y \ x" + shows "Lambert_W x \ y" +proof (cases "y \ -1") + case False + hence "y \ -1" by simp + also have "-1 \ Lambert_W x" by (rule Lambert_W_ge) + finally show ?thesis . 
+next + case True + have "Lambert_W x \ Lambert_W (y * exp y)" + using assms exp_times_self_ge[of y] by (intro Lambert_W_mono) auto + thus ?thesis using assms True by simp +qed + +lemma Lambert_W_gtI: + assumes "y * exp y < x" + shows "Lambert_W x > y" +proof (cases "y \ -1") + case False + hence "y < -1" by simp + also have "-1 \ Lambert_W x" by (rule Lambert_W_ge) + finally show ?thesis . +next + case True + have "Lambert_W x > Lambert_W (y * exp y)" + using assms exp_times_self_ge[of y] by (intro Lambert_W_strict_mono) auto + thus ?thesis using assms True by simp +qed + +lemma Lambert_W_leI: + assumes "y * exp y \ x" "y \ -1" "x \ -exp (-1)" + shows "Lambert_W x \ y" +proof - + have "Lambert_W x \ Lambert_W (y * exp y)" + using assms exp_times_self_ge[of y] by (intro Lambert_W_mono) auto + thus ?thesis using assms by simp +qed + +lemma Lambert_W_lessI: + assumes "y * exp y > x" "y \ -1" "x \ -exp (-1)" + shows "Lambert_W x < y" +proof - + have "Lambert_W x < Lambert_W (y * exp y)" + using assms exp_times_self_ge[of y] by (intro Lambert_W_strict_mono) auto + thus ?thesis using assms by simp +qed + + + +lemma Lambert_W'_strict_antimono: + assumes "-exp (-1) \ x" "x < y" "y < 0" + shows "Lambert_W' x > Lambert_W' y" +proof (rule ccontr) + assume "\(Lambert_W' x > Lambert_W' y)" + hence "Lambert_W' x * exp (Lambert_W' x) \ Lambert_W' y * exp (Lambert_W' y)" + using assms by (intro exp_times_self_antimono Lambert_W'_le) auto + hence "x \ y" + using assms by (simp add: Lambert_W'_times_exp_self) + with assms show False by simp +qed + +lemma Lambert_W'_antimono: + assumes "x \ -exp(-1)" "x \ y" "y < 0" + shows "Lambert_W' x \ Lambert_W' y" + using Lambert_W'_strict_antimono[of x y] assms by (cases "x = y") auto + +lemma Lambert_W'_eq_iff [simp]: + "x \ {-exp(-1)..<0} \ y \ {-exp(-1)..<0} \ Lambert_W' x = Lambert_W' y \ x = y" + using Lambert_W'_strict_antimono[of x y] Lambert_W'_strict_antimono[of y x] + by (cases x y rule: linorder_cases) auto + +lemma Lambert_W'_le_iff [simp]: + "x \ {-exp(-1)..<0} \ y \ {-exp(-1)..<0} \ Lambert_W' x \ Lambert_W' y \ x \ y" + using Lambert_W'_strict_antimono[of x y] Lambert_W'_strict_antimono[of y x] + by (cases x y rule: linorder_cases) auto + +lemma Lambert_W'_less_iff [simp]: + "x \ {-exp(-1)..<0} \ y \ {-exp(-1)..<0} \ Lambert_W' x < Lambert_W' y \ x > y" + using Lambert_W'_strict_antimono[of x y] Lambert_W'_strict_antimono[of y x] + by (cases x y rule: linorder_cases) auto + +lemma Lambert_W'_le_minus_one: + assumes "x \ -exp(-1)" + shows "Lambert_W' x = -1" +proof (cases "x = -exp(-1)") + case False + thus ?thesis using assms + by (auto simp: Lambert_W'_def) +qed auto + +lemma Lambert_W'_ge_zero: "x \ 0 \ Lambert_W' x = -1" + by (simp add: Lambert_W'_def) + +lemma Lambert_W'_neg: "Lambert_W' x < 0" + by (rule le_less_trans[OF Lambert_W'_le]) auto + +lemma Lambert_W'_nz [simp]: "Lambert_W' x \ 0" + using Lambert_W'_neg[of x] by simp + +lemma Lambert_W'_geI: + assumes "y * exp y \ x" "y \ -1" "x \ -exp(-1)" + shows "Lambert_W' x \ y" +proof - + from assms have "y * exp y < 0" + by (intro mult_neg_pos) auto + hence "Lambert_W' x \ Lambert_W' (y * exp y)" + using assms exp_times_self_ge[of y] by (intro Lambert_W'_antimono) auto + thus ?thesis using assms by simp +qed + +lemma Lambert_W'_gtI: + assumes "y * exp y > x" "y \ -1" "x \ -exp(-1)" + shows "Lambert_W' x \ y" +proof - + from assms have "y * exp y < 0" + by (intro mult_neg_pos) auto + hence "Lambert_W' x > Lambert_W' (y * exp y)" + using assms exp_times_self_ge[of y] by (intro 
Lambert_W'_strict_antimono) auto + thus ?thesis using assms by simp +qed + +lemma Lambert_W'_leI: + assumes "y * exp y \ x" "x < 0" + shows "Lambert_W' x \ y" +proof (cases "y \ -1") + case True + have "Lambert_W' x \ Lambert_W' (y * exp y)" + using assms exp_times_self_ge[of y] by (intro Lambert_W'_antimono) auto + thus ?thesis using assms True by simp +next + case False + have "Lambert_W' x \ -1" + by (rule Lambert_W'_le) + also have "\ < y" + using False by simp + finally show ?thesis by simp +qed + +lemma Lambert_W'_lessI: + assumes "y * exp y < x" "x < 0" + shows "Lambert_W' x < y" +proof (cases "y \ -1") + case True + have "Lambert_W' x < Lambert_W' (y * exp y)" + using assms exp_times_self_ge[of y] by (intro Lambert_W'_strict_antimono) auto + thus ?thesis using assms True by simp +next + case False + have "Lambert_W' x \ -1" + by (rule Lambert_W'_le) + also have "\ < y" + using False by simp + finally show ?thesis by simp +qed + + +lemma bij_betw_exp_times_self_atLeastAtMost: + fixes a b :: real + assumes "a \ -1" "a \ b" + shows "bij_betw (\x. x * exp x) {a..b} {a * exp a..b * exp b}" + unfolding bij_betw_def +proof + show "inj_on (\x. x * exp x) {a..b}" + by (rule inj_on_subset[OF exp_times_self_inj]) (use assms in auto) +next + show "(\x. x * exp x) ` {a..b} = {a * exp a..b * exp b}" + proof safe + fix x assume "x \ {a..b}" + thus "x * exp x \ {a * exp a..b * exp b}" + using assms by (auto intro!: exp_times_self_mono) + next + fix x assume x: "x \ {a * exp a..b * exp b}" + have "(-1) * exp (-1) \ a * exp a" + using assms by (intro exp_times_self_mono) auto + also have "\ \ x" using x by simp + finally have "x \ -exp (-1)" by simp + + have "Lambert_W x \ {a..b}" + using x \x \ -exp (-1)\ assms by (auto intro!: Lambert_W_geI Lambert_W_leI) + moreover have "Lambert_W x * exp (Lambert_W x) = x" + using \x \ -exp (-1)\ by (simp add: Lambert_W_times_exp_self) + ultimately show "x \ (\x. x * exp x) ` {a..b}" + unfolding image_iff by metis + qed +qed + +lemma bij_betw_exp_times_self_atLeastAtMost': + fixes a b :: real + assumes "a \ b" "b \ -1" + shows "bij_betw (\x. x * exp x) {a..b} {b * exp b..a * exp a}" + unfolding bij_betw_def +proof + show "inj_on (\x. x * exp x) {a..b}" + by (rule inj_on_subset[OF exp_times_self_inj']) (use assms in auto) +next + show "(\x. x * exp x) ` {a..b} = {b * exp b..a * exp a}" + proof safe + fix x assume "x \ {a..b}" + thus "x * exp x \ {b * exp b..a * exp a}" + using assms by (auto intro!: exp_times_self_antimono) + next + fix x assume x: "x \ {b * exp b..a * exp a}" + from assms have "a * exp a < 0" + by (intro mult_neg_pos) auto + with x have "x < 0" by auto + have "(-1) * exp (-1) \ b * exp b" + using assms by (intro exp_times_self_antimono) auto + also have "\ \ x" using x by simp + finally have "x \ -exp (-1)" by simp + + have "Lambert_W' x \ {a..b}" + using x \x \ -exp (-1)\ \x < 0\ assms + by (auto intro!: Lambert_W'_geI Lambert_W'_leI) + moreover have "Lambert_W' x * exp (Lambert_W' x) = x" + using \x \ -exp (-1)\ \x < 0\ by (auto simp: Lambert_W'_times_exp_self) + ultimately show "x \ (\x. x * exp x) ` {a..b}" + unfolding image_iff by metis + qed +qed + +lemma bij_betw_exp_times_self_atLeast: + fixes a :: real + assumes "a \ -1" + shows "bij_betw (\x. x * exp x) {a..} {a * exp a..}" + unfolding bij_betw_def +proof + show "inj_on (\x. x * exp x) {a..}" + by (rule inj_on_subset[OF exp_times_self_inj]) (use assms in auto) +next + show "(\x. 
x * exp x) ` {a..} = {a * exp a..}" + proof safe + fix x assume "x \ a" + thus "x * exp x \ a * exp a" + using assms by (auto intro!: exp_times_self_mono) + next + fix x assume x: "x \ a * exp a" + have "(-1) * exp (-1) \ a * exp a" + using assms by (intro exp_times_self_mono) auto + also have "\ \ x" using x by simp + finally have "x \ -exp (-1)" by simp + + have "Lambert_W x \ {a..}" + using x \x \ -exp (-1)\ assms by (auto intro!: Lambert_W_geI Lambert_W_leI) + moreover have "Lambert_W x * exp (Lambert_W x) = x" + using \x \ -exp (-1)\ by (simp add: Lambert_W_times_exp_self) + ultimately show "x \ (\x. x * exp x) ` {a..}" + unfolding image_iff by metis + qed +qed + + +subsection \Basic identities and bounds\ + +lemma Lambert_W_2_ln_2 [simp]: "Lambert_W (2 * ln 2) = ln 2" +proof - + have "-1 \ (0 :: real)" + by simp + also have "\ \ ln 2" + by simp + finally have "-1 \ (ln 2 :: real)" . + thus ?thesis + by (intro Lambert_W_eqI) auto +qed + +lemma Lambert_W_exp_1 [simp]: "Lambert_W (exp 1) = 1" + by (rule Lambert_W_eqI) auto + +lemma Lambert_W_neg_ln_over_self: + assumes "x \ {exp (-1)..exp 1}" + shows "Lambert_W (-ln x / x) = -ln x" +proof - + have "0 < (exp (-1) :: real)" + by simp + also have "\ \ x" + using assms by simp + finally have "x > 0" . + from \x > 0\ assms have "ln x \ ln (exp 1)" + by (subst ln_le_cancel_iff) auto + also have "ln (exp 1) = (1 :: real)" + by simp + finally have "ln x \ 1" . + show ?thesis + using assms \x > 0\ \ln x \ 1\ + by (intro Lambert_W_eqI) (auto simp: exp_minus field_simps) +qed + +lemma Lambert_W'_neg_ln_over_self: + assumes "x \ exp 1" + shows "Lambert_W' (-ln x / x) = -ln x" +proof (rule Lambert_W'_eqI) + have "0 < (exp 1 :: real)" + by simp + also have "\ \ x" + by fact + finally have "x > 0" . + from assms \x > 0\ have "ln x \ ln (exp 1)" + by (subst ln_le_cancel_iff) auto + thus "-ln x \ -1" by simp + show "-ln x * exp (-ln x) = -ln x / x" + using \x > 0\ by (simp add: field_simps exp_minus) +qed + +lemma exp_Lambert_W: "x \ -exp (-1) \ x \ 0 \ exp (Lambert_W x) = x / Lambert_W x" + using Lambert_W_times_exp_self[of x] by (auto simp add: divide_simps mult_ac) + +lemma exp_Lambert_W': "x \ {-exp (-1)..<0} \ exp (Lambert_W' x) = x / Lambert_W' x" + using Lambert_W'_times_exp_self[of x] by (auto simp add: divide_simps mult_ac) + +lemma ln_Lambert_W: + assumes "x > 0" + shows "ln (Lambert_W x) = ln x - Lambert_W x" +proof - + have "-exp (-1) \ (0 :: real)" + by simp + also have "\ < x" by fact + finally have x: "x > -exp(-1)" . 
+ + have "exp (ln (Lambert_W x)) = exp (ln x - Lambert_W x)" + using assms x by (subst exp_diff) (auto simp: exp_Lambert_W) + thus ?thesis by (subst (asm) exp_inj_iff) +qed + +lemma ln_minus_Lambert_W': + assumes "x \ {-exp (-1)..<0}" + shows "ln (-Lambert_W' x) = ln (-x) - Lambert_W' x" +proof - + have "exp (ln (-x) - Lambert_W' x) = -Lambert_W' x" + using assms by (simp add: exp_diff exp_Lambert_W') + also have "\ = exp (ln (-Lambert_W' x))" + using Lambert_W'_neg[of x] by simp + finally show ?thesis by simp +qed + +lemma Lambert_W_plus_Lambert_W_eq: + assumes "x > 0" "y > 0" + shows "Lambert_W x + Lambert_W y = Lambert_W (x * y * (1 / Lambert_W x + 1 / Lambert_W y))" +proof (rule sym, rule Lambert_W_eqI) + have "x > -exp(-1)" "y > -exp (-1)" + by (rule less_trans[OF _ assms(1)] less_trans[OF _ assms(2)], simp)+ + with assms show "(Lambert_W x + Lambert_W y) * exp (Lambert_W x + Lambert_W y) = + x * y * (1 / Lambert_W x + 1 / Lambert_W y)" + by (auto simp: field_simps exp_add exp_Lambert_W) + have "-1 \ (0 :: real)" + by simp + also from assms have "\ \ Lambert_W x + Lambert_W y" + by (intro add_nonneg_nonneg) auto + finally show "\ \ -1" . +qed + +lemma Lambert_W'_plus_Lambert_W'_eq: + assumes "x \ {-exp(-1)..<0}" "y \ {-exp(-1)..<0}" + shows "Lambert_W' x + Lambert_W' y = Lambert_W' (x * y * (1 / Lambert_W' x + 1 / Lambert_W' y))" +proof (rule sym, rule Lambert_W'_eqI) + from assms show "(Lambert_W' x + Lambert_W' y) * exp (Lambert_W' x + Lambert_W' y) = + x * y * (1 / Lambert_W' x + 1 / Lambert_W' y)" + by (auto simp: field_simps exp_add exp_Lambert_W') + have "Lambert_W' x + Lambert_W' y \ -1 + -1" + by (intro add_mono Lambert_W'_le) + also have "\ \ -1" by simp + finally show "Lambert_W' x + Lambert_W' y \ -1" . +qed + +lemma Lambert_W_gt_ln_minus_ln_ln: + assumes "x > exp 1" + shows "Lambert_W x > ln x - ln (ln x)" +proof (rule Lambert_W_gtI) + have "x > 1" + by (rule less_trans[OF _ assms]) auto + have "ln x > ln (exp 1)" + by (subst ln_less_cancel_iff) (use \x > 1\ assms in auto) + thus "(ln x - ln (ln x)) * exp (ln x - ln (ln x)) < x" + using assms \x > 1\ by (simp add: exp_diff field_simps) +qed + +lemma Lambert_W_less_ln: + assumes "x > exp 1" + shows "Lambert_W x < ln x" +proof (rule Lambert_W_lessI) + have "x > 0" + by (rule less_trans[OF _ assms]) auto + have "ln x > ln (exp 1)" + by (subst ln_less_cancel_iff) (use \x > 0\ assms in auto) + thus "x < ln x * exp (ln x)" + using \x > 0\ by simp + show "ln x \ -1" + by (rule less_imp_le[OF le_less_trans[OF _ \ln x > _\]]) auto + show "x \ -exp (-1)" + by (rule less_imp_le[OF le_less_trans[OF _ \x > 0\]]) auto +qed + + +subsection \Limits, continuity, and differentiability\ + +lemma filterlim_Lambert_W_at_top [tendsto_intros]: "filterlim Lambert_W at_top at_top" + unfolding filterlim_at_top +proof + fix C :: real + have "eventually (\x. x \ C * exp C) at_top" + by (rule eventually_ge_at_top) + thus "eventually (\x. Lambert_W x \ C) at_top" + proof eventually_elim + case (elim x) + thus ?case + by (intro Lambert_W_geI) auto + qed +qed + +lemma filterlim_Lambert_W_at_left_0 [tendsto_intros]: + "filterlim Lambert_W' at_bot (at_left 0)" + unfolding filterlim_at_bot +proof + fix C :: real + define C' where "C' = min C (-1)" + have "C' < 0" "C' \ C" + by (simp_all add: C'_def) + have "C' * exp C' < 0" + using \C' < 0\ by (intro mult_neg_pos) auto + hence "eventually (\x. x \ C' * exp C') (at_left 0)" + by real_asymp + moreover have "eventually (\x::real. x < 0) (at_left 0)" + by real_asymp + ultimately show "eventually (\x. 
Lambert_W' x \ C) (at_left 0)" + proof eventually_elim + case (elim x) + hence "Lambert_W' x \ C'" + by (intro Lambert_W'_leI) auto + also have "\ \ C" by fact + finally show ?case . + qed +qed + +lemma continuous_on_Lambert_W [continuous_intros]: "continuous_on {-exp (-1)..} Lambert_W" +proof - + have *: "continuous_on {-exp (-1)..b * exp b} Lambert_W" if "b \ 0" for b + proof - + have "continuous_on ((\x. x * exp x) ` {-1..b}) Lambert_W" + by (rule continuous_on_inv) (auto intro!: continuous_intros) + also have "(\x. x * exp x) ` {-1..b} = {-exp (-1)..b * exp b}" + using bij_betw_exp_times_self_atLeastAtMost[of "-1" b] \b \ 0\ + by (simp add: bij_betw_def) + finally show ?thesis . + qed + + have "continuous (at x) Lambert_W" if "x \ 0" for x + proof - + have x: "-exp (-1) < x" + by (rule less_le_trans[OF _ that]) auto + + define b where "b = Lambert_W x + 1" + have "b \ 0" + using Lambert_W_ge[of x] by (simp add: b_def) + have "x = Lambert_W x * exp (Lambert_W x)" + using that x by (subst Lambert_W_times_exp_self) auto + also have "\ < b * exp b" + by (intro exp_times_self_strict_mono) (auto simp: b_def Lambert_W_ge) + finally have "b * exp b > x" . + have "continuous_on {-exp(-1)<..b \ 0\ in auto) + moreover have "x \ {-exp(-1)<..b * exp b > x\ x by (auto simp: ) + ultimately show "continuous (at x) Lambert_W" + by (subst (asm) continuous_on_eq_continuous_at) auto + qed + hence "continuous_on {0..} Lambert_W" + by (intro continuous_at_imp_continuous_on) auto + moreover have "continuous_on {-exp (-1)..0} Lambert_W" + using *[of 0] by simp + ultimately have "continuous_on ({-exp (-1)..0} \ {0..}) Lambert_W" + by (intro continuous_on_closed_Un) auto + also have "{-exp (-1)..0} \ {0..} = {-exp (-1::real)..}" + using order.trans[of "-exp (-1)::real" 0] by auto + finally show ?thesis . +qed + +lemma continuous_on_Lambert_W_alt [continuous_intros]: + assumes "continuous_on A f" "\x. x \ A \ f x \ -exp (-1)" + shows "continuous_on A (\x. Lambert_W (f x))" + using continuous_on_compose2[OF continuous_on_Lambert_W assms(1)] assms by auto + +lemma continuous_on_Lambert_W' [continuous_intros]: "continuous_on {-exp (-1)..<0} Lambert_W'" +proof - + have *: "continuous_on {-exp (-1)..-b * exp (-b)} Lambert_W'" if "b \ 1" for b + proof - + have "continuous_on ((\x. x * exp x) ` {-b..-1}) Lambert_W'" + by (intro continuous_on_inv ballI) (auto intro!: continuous_intros) + also have "(\x. x * exp x) ` {-b..-1} = {-exp (-1)..-b * exp (-b)}" + using bij_betw_exp_times_self_atLeastAtMost'[of "-b" "-1"] that + by (simp add: bij_betw_def) + finally show ?thesis . + qed + + have "continuous (at x) Lambert_W'" if "x > -exp (-1)" "x < 0" for x + proof - + define b where "b = Lambert_W x + 1" + have "eventually (\b. -b * exp (-b) > x) at_top" + using that by real_asymp + hence "eventually (\b. 
b \ 1 \ -b * exp (-b) > x) at_top" + by (intro eventually_conj eventually_ge_at_top) + then obtain b where b: "b \ 1" "-b * exp (-b) > x" + by (auto simp: eventually_at_top_linorder) + + have "continuous_on {-exp(-1)<..<-b * exp (-b)} Lambert_W'" + by (rule continuous_on_subset[OF *[of b]]) (use \b \ 1\ in auto) + moreover have "x \ {-exp(-1)<..<-b * exp (-b)}" + using b that by auto + ultimately show "continuous (at x) Lambert_W'" + by (subst (asm) continuous_on_eq_continuous_at) auto + qed + hence **: "continuous_on {-exp (-1)<..<0} Lambert_W'" + by (intro continuous_at_imp_continuous_on) auto + + show ?thesis + unfolding continuous_on_def + proof + fix x :: real assume x: "x \ {-exp(-1)..<0}" + show "(Lambert_W' \ Lambert_W' x) (at x within {-exp(-1)..<0})" + proof (cases "x = -exp(-1)") + case False + hence "isCont Lambert_W' x" + using x ** by (auto simp: continuous_on_eq_continuous_at) + thus ?thesis + using continuous_at filterlim_within_subset by blast + next + case True + define a :: real where "a = -2 * exp (-2)" + have a: "a > -exp (-1)" + using exp_times_self_strict_antimono[of "-1" "-2"] by (auto simp: a_def) + from True have "x \ {-exp (-1).. Lambert_W' x) (at x within {-exp (-1)..x \ {-exp (-1).. by (auto simp: continuous_on_def) + also have "at x within {-exp (-1).. = at x within {-exp (-1)..<0}" + using a by (intro at_within_nhd[of _ "{..<0}"]) (auto simp: True) + finally show ?thesis . + qed + qed +qed + +lemma continuous_on_Lambert_W'_alt [continuous_intros]: + assumes "continuous_on A f" "\x. x \ A \ f x \ {-exp (-1)..<0}" + shows "continuous_on A (\x. Lambert_W' (f x))" + using continuous_on_compose2[OF continuous_on_Lambert_W' assms(1)] assms + by (auto simp: subset_iff) + + +lemma tendsto_Lambert_W_1: + assumes "(f \ L) F" "eventually (\x. f x \ -exp (-1)) F" + shows "((\x. Lambert_W (f x)) \ Lambert_W L) F" +proof (cases "F = bot") + case [simp]: False + from tendsto_lowerbound[OF assms] have "L \ -exp (-1)" by simp + thus ?thesis + using continuous_on_tendsto_compose[OF continuous_on_Lambert_W assms(1)] assms(2) by simp +qed auto + +lemma tendsto_Lambert_W_2: + assumes "(f \ L) F" "L > -exp (-1)" + shows "((\x. Lambert_W (f x)) \ Lambert_W L) F" + using order_tendstoD(1)[OF assms] assms + by (intro tendsto_Lambert_W_1) (auto elim: eventually_mono) + +lemma tendsto_Lambert_W [tendsto_intros]: + assumes "(f \ L) F" "eventually (\x. f x \ -exp (-1)) F \ L > -exp (-1)" + shows "((\x. Lambert_W (f x)) \ Lambert_W L) F" + using assms(2) +proof + assume "L > -exp (-1)" + from order_tendstoD(1)[OF assms(1) this] assms(1) show ?thesis + by (intro tendsto_Lambert_W_1) (auto elim: eventually_mono) +qed (use tendsto_Lambert_W_1[OF assms(1)] in auto) + +lemma tendsto_Lambert_W'_1: + assumes "(f \ L) F" "eventually (\x. f x \ -exp (-1)) F" "L < 0" + shows "((\x. Lambert_W' (f x)) \ Lambert_W' L) F" +proof (cases "F = bot") + case [simp]: False + from tendsto_lowerbound[OF assms(1,2)] have L_ge: "L \ -exp (-1)" by simp + from order_tendstoD(2)[OF assms(1,3)] have ev: "eventually (\x. f x < 0) F" + by auto + with assms(2) have "eventually (\x. f x \ {-exp (-1)..<0}) F" + by eventually_elim auto + thus ?thesis using L_ge assms(3) + by (intro continuous_on_tendsto_compose[OF continuous_on_Lambert_W' assms(1)]) auto +qed auto + +lemma tendsto_Lambert_W'_2: + assumes "(f \ L) F" "L > -exp (-1)" "L < 0" + shows "((\x. 
Lambert_W' (f x)) \ Lambert_W' L) F" + using order_tendstoD(1)[OF assms(1,2)] assms + by (intro tendsto_Lambert_W'_1) (auto elim: eventually_mono) + +lemma tendsto_Lambert_W' [tendsto_intros]: + assumes "(f \ L) F" "eventually (\x. f x \ -exp (-1)) F \ L > -exp (-1)" "L < 0" + shows "((\x. Lambert_W' (f x)) \ Lambert_W' L) F" + using assms(2) +proof + assume "L > -exp (-1)" + from order_tendstoD(1)[OF assms(1) this] assms(1,3) show ?thesis + by (intro tendsto_Lambert_W'_1) (auto elim: eventually_mono) +qed (use tendsto_Lambert_W'_1[OF assms(1) _ assms(3)] in auto) + + +lemma continuous_Lambert_W [continuous_intros]: + assumes "continuous F f" "f (Lim F (\x. x)) > -exp (-1) \ eventually (\x. f x \ -exp (-1)) F" + shows "continuous F (\x. Lambert_W (f x))" + using assms unfolding continuous_def by (intro tendsto_Lambert_W) auto + +lemma continuous_Lambert_W' [continuous_intros]: + assumes "continuous F f" "f (Lim F (\x. x)) > -exp (-1) \ eventually (\x. f x \ -exp (-1)) F" + "f (Lim F (\x. x)) < 0" + shows "continuous F (\x. Lambert_W' (f x))" + using assms unfolding continuous_def by (intro tendsto_Lambert_W') auto + + +lemma has_field_derivative_Lambert_W [derivative_intros]: + assumes x: "x > -exp (-1)" + shows "(Lambert_W has_real_derivative inverse (x + exp (Lambert_W x))) (at x within A)" +proof - + write Lambert_W ("W") + from x have "W x > W (-exp (-1))" + by (subst Lambert_W_less_iff) auto + hence "W x > -1" by simp + + note [derivative_intros] = DERIV_inverse_function[where g = Lambert_W] + have "((\x. x * exp x) has_real_derivative (1 + W x) * exp (W x)) (at (W x))" + by (auto intro!: derivative_eq_intros simp: algebra_simps) + hence "(W has_real_derivative inverse ((1 + W x) * exp (W x))) (at x)" + by (rule DERIV_inverse_function[where a = "-exp (-1)" and b = "x + 1"]) + (use x \W x > -1\ in \auto simp: Lambert_W_times_exp_self Lim_ident_at + intro!: continuous_intros\) + also have "(1 + W x) * exp (W x) = x + exp (W x)" + using x by (simp add: algebra_simps Lambert_W_times_exp_self) + finally show ?thesis by (rule has_field_derivative_at_within) +qed + +lemma has_field_derivative_Lambert_W_gen [derivative_intros]: + assumes "(f has_real_derivative f') (at x within A)" "f x > -exp (-1)" + shows "((\x. Lambert_W (f x)) has_real_derivative + (f' / (f x + exp (Lambert_W (f x))))) (at x within A)" + using DERIV_chain2[OF has_field_derivative_Lambert_W[OF assms(2)] assms(1)] + by (simp add: field_simps) + +lemma has_field_derivative_Lambert_W' [derivative_intros]: + assumes x: "x \ {-exp (-1)<..<0}" + shows "(Lambert_W' has_real_derivative inverse (x + exp (Lambert_W' x))) (at x within A)" +proof - + write Lambert_W' ("W") + from x have "W x < W (-exp (-1))" + by (subst Lambert_W'_less_iff) auto + hence "W x < -1" by simp + + note [derivative_intros] = DERIV_inverse_function[where g = Lambert_W] + have "((\x. 
x * exp x) has_real_derivative (1 + W x) * exp (W x)) (at (W x))" + by (auto intro!: derivative_eq_intros simp: algebra_simps) + hence "(W has_real_derivative inverse ((1 + W x) * exp (W x))) (at x)" + by (rule DERIV_inverse_function[where a = "-exp (-1)" and b = "0"]) + (use x \W x < -1\ in \auto simp: Lambert_W'_times_exp_self Lim_ident_at + intro!: continuous_intros\) + also have "(1 + W x) * exp (W x) = x + exp (W x)" + using x by (simp add: algebra_simps Lambert_W'_times_exp_self) + finally show ?thesis by (rule has_field_derivative_at_within) +qed + +lemma has_field_derivative_Lambert_W'_gen [derivative_intros]: + assumes "(f has_real_derivative f') (at x within A)" "f x \ {-exp (-1)<..<0}" + shows "((\x. Lambert_W' (f x)) has_real_derivative + (f' / (f x + exp (Lambert_W' (f x))))) (at x within A)" + using DERIV_chain2[OF has_field_derivative_Lambert_W'[OF assms(2)] assms(1)] + by (simp add: field_simps) + + +subsection \Asymptotic expansion\ + +text \ + Lastly, we prove some more detailed asymptotic expansions of $W$ and $W'$ at their + singularities. First, we show that: + \begin{align*} + W(x) &= \log x - \log\log x + o(\log\log x) &&\text{for}\ x\to\infty\\ + W'(x) &= \log (-x) - \log (-\log (-x)) + o(\log (-\log (-x))) &&\text{for}\ x\to 0^{-} + \end{align*} +\ +theorem Lambert_W_asymp_equiv_at_top: + "(\x. Lambert_W x - ln x) \[at_top] (\x. -ln (ln x))" +proof - + have "(\x. Lambert_W x - ln x) \[at_top] (\x. (-1) * ln (ln x))" + proof (rule asymp_equiv_sandwich') + fix c' :: real assume c': "c' \ {-2<..<-1}" + have "eventually (\x. (ln x + c' * ln (ln x)) * exp (ln x + c' * ln (ln x)) \ x) at_top" + "eventually (\x. ln x + c' * ln (ln x) \ -1) at_top" + using c' by real_asymp+ + thus "eventually (\x. Lambert_W x - ln x \ c' * ln (ln x)) at_top" + proof eventually_elim + case (elim x) + hence "Lambert_W x \ ln x + c' * ln (ln x)" + by (intro Lambert_W_geI) + thus ?case by simp + qed + next + fix c' :: real assume c': "c' \ {-1<..<0}" + have "eventually (\x. (ln x + c' * ln (ln x)) * exp (ln x + c' * ln (ln x)) \ x) at_top" + "eventually (\x. ln x + c' * ln (ln x) \ -1) at_top" + using c' by real_asymp+ + thus "eventually (\x. Lambert_W x - ln x \ c' * ln (ln x)) at_top" + using eventually_ge_at_top[of "-exp (-1)"] + proof eventually_elim + case (elim x) + hence "Lambert_W x \ ln x + c' * ln (ln x)" + by (intro Lambert_W_leI) + thus ?case by simp + qed + qed auto + thus ?thesis by simp +qed + +lemma Lambert_W_asymp_equiv_at_top' [asymp_equiv_intros]: + "Lambert_W \[at_top] ln" +proof - + have "(\x. Lambert_W x - ln x) \ \(\x. -ln (ln x))" + by (intro asymp_equiv_imp_bigtheta Lambert_W_asymp_equiv_at_top) + also have "(\x::real. -ln (ln x)) \ o(ln)" + by real_asymp + finally show ?thesis by (simp add: asymp_equiv_altdef) +qed + +theorem Lambert_W'_asymp_equiv_at_left_0: + "(\x. Lambert_W' x - ln (-x)) \[at_left 0] (\x. -ln (-ln (-x)))" +proof - + have "(\x. Lambert_W' x - ln (-x)) \[at_left 0] (\x. (-1) * ln (-ln (-x)))" + proof (rule asymp_equiv_sandwich') + fix c' :: real assume c': "c' \ {-2<..<-1}" + have "eventually (\x. x \ (ln (-x) + c' * ln (-ln (-x))) * exp (ln (-x) + c' * ln (-ln (-x)))) (at_left 0)" + "eventually (\x::real. ln (-x) + c' * ln (-ln (-x)) \ -1) (at_left 0)" + "eventually (\x::real. -exp (-1) \ x) (at_left 0)" + using c' by real_asymp+ + thus "eventually (\x. 
Lambert_W' x - ln (-x) \ c' * ln (-ln (-x))) (at_left 0)" + proof eventually_elim + case (elim x) + hence "Lambert_W' x \ ln (-x) + c' * ln (-ln (-x))" + by (intro Lambert_W'_geI) + thus ?case by simp + qed + next + fix c' :: real assume c': "c' \ {-1<..<0}" + have "eventually (\x. x \ (ln (-x) + c' * ln (-ln (-x))) * exp (ln (-x) + c' * ln (-ln (-x)))) (at_left 0)" + using c' by real_asymp + moreover have "eventually (\x::real. x < 0) (at_left 0)" + by (auto simp: eventually_at intro: exI[of _ 1]) + ultimately show "eventually (\x. Lambert_W' x - ln (-x) \ c' * ln (-ln (-x))) (at_left 0)" + proof eventually_elim + case (elim x) + hence "Lambert_W' x \ ln (-x) + c' * ln (-ln (-x))" + by (intro Lambert_W'_leI) + thus ?case by simp + qed + qed auto + thus ?thesis by simp +qed + +lemma Lambert_W'_asymp_equiv'_at_left_0 [asymp_equiv_intros]: + "Lambert_W' \[at_left 0] (\x. ln (-x))" +proof - + have "(\x. Lambert_W' x - ln (-x)) \ \[at_left 0](\x. -ln (-ln (-x)))" + by (intro asymp_equiv_imp_bigtheta Lambert_W'_asymp_equiv_at_left_0) + also have "(\x::real. -ln (-ln (-x))) \ o[at_left 0](\x. ln (-x))" + by real_asymp + finally show ?thesis by (simp add: asymp_equiv_altdef) +qed + + +text \ + Next, we look at the branching point $a := \tfrac{1}{e}$. Here, the asymptotic behaviour + is as follows: + \begin{align*} + W(x) &= -1 + \sqrt{2e}(x - a)^{\frac{1}{2}} - \tfrac{2}{3}e(x-a) + o(x-a) &&\text{for} x\to a^+\\ + W'(x) &= -1 - \sqrt{2e}(x - a)^{\frac{1}{2}} - \tfrac{2}{3}e(x-a) + o(x-a) &&\text{for} x\to a^+ + \end{align*} +\ +lemma sqrt_sqrt_mult: + assumes "x \ (0 :: real)" + shows "sqrt x * (sqrt x * y) = x * y" + using assms by (subst mult.assoc [symmetric]) auto + +theorem Lambert_W_asymp_equiv_at_right_minus_exp_minus1: + defines "e \ exp 1" + defines "a \ -exp (-1)" + defines "C1 \ sqrt (2 * exp 1)" + defines "f \ (\x. -1 + C1 * sqrt (x - a))" + shows "(\x. Lambert_W x - f x) \[at_right a] (\x. -2/3 * e * (x - a))" +proof - + define C :: "real \ real" where "C = (\c. sqrt (2/e)/3 * (2*e+3*c))" + have asymp_equiv: "(\x. (f x + c * (x - a)) * exp (f x + c * (x - a)) - x) + \[at_right a] (\x. C c * (x - a) powr (3/2))" if "c \ -2/3 * e" for c + proof - + from that have "C c \ 0" + by (auto simp: C_def e_def) + have "(\x. (f x + c * (x - a)) * exp (f x + c * (x - a)) - x - C c * (x - a) powr (3/2)) + \ o[at_right a](\x. (x - a) powr (3/2))" + unfolding f_def a_def C_def C1_def e_def + by (real_asymp simp: field_simps real_sqrt_mult real_sqrt_divide sqrt_sqrt_mult + exp_minus simp flip: sqrt_def) + thus ?thesis + using \C c \ 0\ by (intro smallo_imp_asymp_equiv) auto + qed + + show ?thesis + proof (rule asymp_equiv_sandwich') + fix c' :: real assume c': "c' \ {-e<..<-2/3*e}" + hence neq: "c' \ -2/3 * e" by auto + from c' have neg: "C c' < 0" unfolding C_def by (auto intro!: mult_pos_neg) + hence "eventually (\x. C c' * (x - a) powr (3 / 2) < 0) (at_right a)" + by real_asymp + hence "eventually (\x. (f x + c' * (x - a)) * exp (f x + c' * (x - a)) - x < 0) (at_right a)" + using asymp_equiv_eventually_neg_iff[OF asymp_equiv[OF neq]] + by eventually_elim (use neg in auto) + thus "eventually (\x. Lambert_W x - f x \ c' * (x - a)) (at_right a)" + proof eventually_elim + case (elim x) + hence "Lambert_W x \ f x + c' * (x - a)" + by (intro Lambert_W_geI) auto + thus ?case by simp + qed + next + fix c' :: real assume c': "c' \ {-2/3*e<..<0}" + hence neq: "c' \ -2/3 * e" by auto + from c' have pos: "C c' > 0" unfolding C_def by auto + hence "eventually (\x. 
C c' * (x - a) powr (3 / 2) > 0) (at_right a)" + by real_asymp + hence "eventually (\x. (f x + c' * (x - a)) * exp (f x + c' * (x - a)) - x > 0) (at_right a)" + using asymp_equiv_eventually_pos_iff[OF asymp_equiv[OF neq]] + by eventually_elim (use pos in auto) + moreover have "eventually (\x. - 1 \ f x + c' * (x - a)) (at_right a)" + "eventually (\x. x > a) (at_right a)" + unfolding a_def f_def C1_def c' by real_asymp+ + ultimately show "eventually (\x. Lambert_W x - f x \ c' * (x - a)) (at_right a)" + proof eventually_elim + case (elim x) + hence "Lambert_W x \ f x + c' * (x - a)" + by (intro Lambert_W_leI) (auto simp: a_def) + thus ?case by simp + qed + qed (auto simp: e_def) +qed + +theorem Lambert_W'_asymp_equiv_at_right_minus_exp_minus1: + defines "e \ exp 1" + defines "a \ -exp (-1)" + defines "C1 \ sqrt (2 * exp 1)" + defines "f \ (\x. -1 - C1 * sqrt (x - a))" + shows "(\x. Lambert_W' x - f x) \[at_right a] (\x. -2/3 * e * (x - a))" +proof - + define C :: "real \ real" where "C = (\c. -sqrt (2/e)/3 * (2*e+3*c))" + + have asymp_equiv: "(\x. (f x + c * (x - a)) * exp (f x + c * (x - a)) - x) + \[at_right a] (\x. C c * (x - a) powr (3/2))" if "c \ -2/3 * e" for c + proof - + from that have "C c \ 0" + by (auto simp: C_def e_def) + have "(\x. (f x + c * (x - a)) * exp (f x + c * (x - a)) - x - C c * (x - a) powr (3/2)) + \ o[at_right a](\x. (x - a) powr (3/2))" + unfolding f_def a_def C_def C1_def e_def + by (real_asymp simp: field_simps real_sqrt_mult real_sqrt_divide sqrt_sqrt_mult + exp_minus simp flip: sqrt_def) + thus ?thesis + using \C c \ 0\ by (intro smallo_imp_asymp_equiv) auto + qed + + show ?thesis + proof (rule asymp_equiv_sandwich') + fix c' :: real assume c': "c' \ {-e<..<-2/3*e}" + hence neq: "c' \ -2/3 * e" by auto + from c' have pos: "C c' > 0" unfolding C_def by (auto intro!: mult_pos_neg) + hence "eventually (\x. C c' * (x - a) powr (3 / 2) > 0) (at_right a)" + by real_asymp + hence "eventually (\x. (f x + c' * (x - a)) * exp (f x + c' * (x - a)) - x > 0) (at_right a)" + using asymp_equiv_eventually_pos_iff[OF asymp_equiv[OF neq]] + by eventually_elim (use pos in auto) + moreover have "eventually (\x. x > a) (at_right a)" + "eventually (\x. f x + c' * (x - a) \ -1) (at_right a)" + unfolding a_def f_def C1_def c' by real_asymp+ + ultimately show "eventually (\x. Lambert_W' x - f x \ c' * (x - a)) (at_right a)" + proof eventually_elim + case (elim x) + hence "Lambert_W' x \ f x + c' * (x - a)" + by (intro Lambert_W'_geI) (auto simp: a_def) + thus ?case by simp + qed + next + fix c' :: real assume c': "c' \ {-2/3*e<..<0}" + hence neq: "c' \ -2/3 * e" by auto + from c' have neg: "C c' < 0" unfolding C_def by auto + hence "eventually (\x. C c' * (x - a) powr (3 / 2) < 0) (at_right a)" + by real_asymp + hence "eventually (\x. (f x + c' * (x - a)) * exp (f x + c' * (x - a)) - x < 0) (at_right a)" + using asymp_equiv_eventually_neg_iff[OF asymp_equiv[OF neq]] + by eventually_elim (use neg in auto) + moreover have "eventually (\x. x < 0) (at_right a)" + unfolding a_def by real_asymp + ultimately show "eventually (\x. Lambert_W' x - f x \ c' * (x - a)) (at_right a)" + proof eventually_elim + case (elim x) + hence "Lambert_W' x \ f x + c' * (x - a)" + by (intro Lambert_W'_leI) auto + thus ?case by simp + qed + qed (auto simp: e_def) +qed + + +text \ + Lastly, just for fun, we derive a slightly more accurate expansion of $W_0(x)$ for $x\to\infty$: +\ +theorem Lambert_W_asymp_equiv_at_top'': + "(\x. Lambert_W x - ln x + ln (ln x)) \[at_top] (\x. 
ln (ln x) / ln x)" +proof - + have "(\x. Lambert_W x - ln x + ln (ln x)) \[at_top] (\x. 1 * (ln (ln x) / ln x))" + proof (rule asymp_equiv_sandwich') + fix c' :: real assume c': "c' \ {0<..<1}" + define a where "a = (\x::real. ln x - ln (ln x) + c' * (ln (ln x) / ln x))" + have "eventually (\x. a x * exp (a x) \ x) at_top" + using c' unfolding a_def by real_asymp+ + thus "eventually (\x. Lambert_W x - ln x + ln (ln x) \ c' * (ln (ln x) / ln x)) at_top" + proof eventually_elim + case (elim x) + hence "Lambert_W x \ a x" + by (intro Lambert_W_geI) + thus ?case by (simp add: a_def) + qed + next + fix c' :: real assume c': "c' \ {1<..<2}" + define a where "a = (\x::real. ln x - ln (ln x) + c' * (ln (ln x) / ln x))" + have "eventually (\x. a x * exp (a x) \ x) at_top" + "eventually (\x. a x \ -1) at_top" + using c' unfolding a_def by real_asymp+ + thus "eventually (\x. Lambert_W x - ln x + ln (ln x) \ c' * (ln (ln x) / ln x)) at_top" + using eventually_ge_at_top[of "-exp (-1)"] + proof eventually_elim + case (elim x) + hence "Lambert_W x \ a x" + by (intro Lambert_W_leI) + thus ?case by (simp add: a_def) + qed + qed auto + thus ?thesis by simp +qed + +end \ No newline at end of file diff --git a/thys/Lambert_W/Lambert_W_MacLaurin_Series.thy b/thys/Lambert_W/Lambert_W_MacLaurin_Series.thy new file mode 100644 --- /dev/null +++ b/thys/Lambert_W/Lambert_W_MacLaurin_Series.thy @@ -0,0 +1,373 @@ +(* + File: Lambert_W_MacLaurin_Series + Author: Manuel Eberl, TU München + + The MacLaurin series of the Lambert W function at x = 0 + This file is kept separate from the main Lambert_W file because it requires significantly + more library material, including HOL-Analysis. +*) +theory Lambert_W_MacLaurin_Series +imports + "HOL-Computational_Algebra.Formal_Power_Series" + "Bernoulli.Bernoulli_FPS" (* TODO only for Stirling number identities; should be moved! *) + "Stirling_Formula.Stirling_Formula" + Lambert_W +begin + +subsection \The MacLaurin series of $W_0(x)$ at $x = 0$\ + +text \ + In this section, we derive the MacLaurin series of $W_0(x)$ as a formal power series + at $x = 0$ and prove that its radius of convergenge is $e^{-1}$. + + We do not actually show that this series evaluates to 1 since Isabelle's library does not + contain the required theorems about convergence of the composition of two power series yet. + If it did, however, this last remaining step would be trivial since we did all the real work + here. +\ + +(* TODO Move *) +lemma Stirling_Suc_n_n: "Stirling (Suc n) n = (Suc n choose 2)" + by (induction n) (auto simp: choose_two) + +lemma Stirling_n_n_minus_1: "n > 0 \ Stirling n (n - 1) = (n choose 2)" + using Stirling_Suc_n_n[of "n - 1"] by (cases n) auto + +text \ + The following defines the power series $W(X)$ as the formal inverse of the + formal power series $X e^X$: +\ +definition fps_Lambert_W :: "real fps" where + "fps_Lambert_W = fps_inv (fps_X * fps_exp 1)" + +text \ + The formal composition of $W(X)$ and $X e^X$ is, in fact, the identity (in both directions). +\ +lemma fps_compose_Lambert_W: "fps_compose fps_Lambert_W (fps_X * fps_exp 1) = fps_X" + unfolding fps_Lambert_W_def by (rule fps_inv) auto + +lemma fps_compose_Lambert_W': "fps_compose (fps_X * fps_exp 1) fps_Lambert_W = fps_X" + unfolding fps_Lambert_W_def by (rule fps_inv_right) auto + +text \ + We have $W(0) = 0$, which shows that $W(X)$ indeed represents the branch $W_0$. 
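+
+  As an informal sanity check (a hand computation only, not part of the formal development;
+  the coefficients $a_n$ are just local notation for this remark): writing
+  $W(Y) = \sum_{n\geq 1} a_n Y^n$ and substituting
+  $Y = X e^X = X + X^2 + \tfrac{1}{2}X^3 + \ldots$, the identity $W(X e^X) = X$ gives
+  \[
+    X = a_1 X + (a_1 + a_2)\, X^2 + \ldots,
+  \]
+  so $a_1 = 1$ and $a_2 = -1$, consistent with the coefficient lemmas proved below.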
+\ +lemma fps_nth_Lambert_W_0 [simp]: "fps_nth fps_Lambert_W 0 = 0" + by (simp add: fps_Lambert_W_def fps_inv_def) + +lemma fps_nth_Lambert_W_1 [simp]: "fps_nth fps_Lambert_W 1 = 1" + by (simp add: fps_Lambert_W_def fps_inv_def) + +text \ + All the equalities that hold for the analytic Lambert $W$ function in a neighbourhood of 0 + also hold formally for the formal power series, e.g. $W(X) = X e^{-W(X)}$: +\ +lemma fps_Lambert_W_over_X: + "fps_Lambert_W = fps_X * fps_compose (fps_exp (-1)) fps_Lambert_W" +proof - + have "fps_nth (fps_exp 1 oo fps_Lambert_W) 0 = 1" + by simp + hence nz: "fps_exp 1 oo fps_Lambert_W \ 0" + by force + have "fps_Lambert_W * fps_compose (fps_exp 1) fps_Lambert_W = + fps_compose (fps_X * fps_exp 1) fps_Lambert_W" + by (simp add: fps_compose_mult_distrib) + also have "\ = fps_X * fps_compose 1 fps_Lambert_W" + by (simp add: fps_compose_Lambert_W') + also have "1 = fps_exp (-1) * fps_exp (1 :: real)" + by (simp flip: fps_exp_add_mult) + also have "fps_X * fps_compose \ fps_Lambert_W = + fps_X * fps_compose (fps_exp (-1)) fps_Lambert_W * + fps_compose (fps_exp 1) fps_Lambert_W" + by (simp add: fps_compose_mult_distrib mult_ac) + finally show ?thesis + using nz by simp +qed + +text \ + We now derive the closed-form expression + \[W(X) = \sum_{n=1}^\infty \frac{(-n)^{n-1}}{n!} X^n\ .\] +\ +lemma fps_nth_Lambert_W: "fps_nth fps_Lambert_W n = (if n = 0 then 0 else ((-n)^(n-1) / fact n))" +proof - + define F :: "real fps" where "F = fps_X * fps_exp 1" + have fps_nth_eq: "fps_nth F n = 1 / fact (n - 1)" if "n > 0" for n + using that unfolding F_def by simp + have F_power: "F ^ n = fps_X ^ n * fps_exp (of_nat n)" for n + by (simp add: F_def power_mult_distrib fps_exp_power_mult) + + have "fps_nth (fps_inv F) n = (if n = 0 then 0 else ((-n)^(n-1) / fact n))" for n + proof (induction n rule: less_induct) + case (less n) + consider "n = 0" | "n = 1" | "n > 1" by force + thus ?case + proof cases + case 3 + hence "fps_nth (fps_inv F) n = -(\i=0..n-1. fps_nth (fps_inv F) i * fps_nth (F ^ i) n)" + (is "_ = -?S") by (cases n) (auto simp: fps_inv_def F_def) + also have "?S = (\i=1.. = (-1) ^ (n+1) / fact n * + (\i=1..i=1..i\{..n}-{n}. ((-1)^(n - i) * real (n choose i) * real i ^ (n - 1)))" + using 3 by (intro sum.mono_neutral_left) auto + also have "\ = (\i\n. ((-1)^(n - i) * real (n choose i) * real i ^ (n - 1))) - + real n ^ (n - 1)" + by (subst (2) sum.remove[of _ n]) auto + also have "(\i\n. ((-1)^(n - i) * real (n choose i) * real i ^ (n - 1))) = + real (Stirling (n - 1) n) * fact n" + by (subst Stirling_closed_form) auto + also have "Stirling (n - 1) n = 0" + using 3 by (subst Stirling_less) auto + finally have "fps_nth (fps_inv F) n = -((-1) ^ n * real n ^ (n - 1) / fact n)" + by simp + also have "\ = (-real n) ^ (n - 1) / fact n" + using 3 by (subst power_minus) (auto simp: minus_one_power_iff) + finally show ?thesis + using 3 by simp + qed (auto simp: fps_inv_def F_def) + qed + thus ?thesis by (simp add: F_def fps_Lambert_W_def) +qed + +(* TODO: Move *) +text \ + Next, we need a few auxiliary lemmas about summability and convergence radii that should + go into Isabelle's standard library at some point: +\ +lemma summable_comparison_test_bigo: + fixes f :: "nat \ real" + assumes "summable (\n. norm (g n))" "f \ O(g)" + shows "summable f" +proof - + from \f \ O(g)\ obtain C where C: "eventually (\x. 
norm (f x) \ C * norm (g x)) at_top" + by (auto elim: landau_o.bigE) + thus ?thesis + by (rule summable_comparison_test_ev) (insert assms, auto intro: summable_mult) +qed + +lemma summable_comparison_test_bigo': + assumes "summable (\n. norm (g n))" + assumes "(\n. norm (f n :: 'a :: banach)) \ O(\n. norm (g n))" + shows "summable f" +proof (rule summable_norm_cancel, rule summable_comparison_test_bigo) + show "summable (\n. norm (norm (g n)))" + using assms by simp +qed fact+ + +lemma conv_radius_conv_Sup': + fixes f :: "nat \ 'a :: {banach, real_normed_div_algebra}" + shows "conv_radius f = Sup {r. \z. ereal (norm z) < r \ summable (\n. norm (f n * z ^ n))}" +proof (rule Sup_eqI [symmetric], goal_cases) + case (1 r) + show ?case + proof (rule conv_radius_geI_ex') + fix r' :: real assume r': "r' > 0" "ereal r' < r" + show "summable (\n. f n * of_real r' ^ n)" + by (rule summable_norm_cancel) (use 1 r' in auto) + qed +next + case (2 r) + from 2[of 0] have r: "r \ 0" by auto + show ?case + proof (intro conv_radius_leI_ex' r) + fix R assume R: "R > 0" "ereal R > r" + with r obtain r' where [simp]: "r = ereal r'" by (cases r) auto + show "\summable (\n. f n * of_real R ^ n)" + proof + assume *: "summable (\n. f n * of_real R ^ n)" + define R' where "R' = (R + r') / 2" + from R have R': "R' > r'" "R' < R" by (simp_all add: R'_def) + hence "\z. norm z < R' \ summable (\n. norm (f n * z ^ n))" + using powser_insidea[OF *] by auto + from 2[of R'] and this have "R' \ r'" by auto + with \R' > r'\ show False by simp + qed + qed +qed + +lemma bigo_imp_conv_radius_ge: + fixes f g :: "nat \ 'a :: {banach, real_normed_field}" + assumes "f \ O(g)" + shows "conv_radius f \ conv_radius g" +proof - + have "conv_radius g = Sup {r. \z. ereal (norm z) < r \ summable (\n. norm (g n * z ^ n))}" + by (simp add: conv_radius_conv_Sup') + also have "\ \ Sup {r. \z. ereal (norm z) < r \ summable (\n. f n * z ^ n)}" + proof (rule Sup_subset_mono, safe) + fix r :: ereal and z :: 'a + assume g: "\z. ereal (norm z) < r \ summable (\n. norm (g n * z ^ n))" + assume z: "ereal (norm z) < r" + from g z have "summable (\n. norm (g n * z ^ n))" + by blast + moreover have "(\n. norm (f n * z ^ n)) \ O(\n. norm (g n * z ^ n))" + unfolding landau_o.big.norm_iff by (intro landau_o.big.mult assms) auto + ultimately show "summable (\n. f n * z ^ n)" + by (rule summable_comparison_test_bigo') + qed + also have "\ = conv_radius f" + by (simp add: conv_radius_conv_Sup) + finally show ?thesis . +qed + +lemma conv_radius_cong_bigtheta: + assumes "f \ \(g)" + shows "conv_radius f = conv_radius g" + using assms + by (intro antisym bigo_imp_conv_radius_ge) (auto simp: bigtheta_def bigomega_iff_bigo) + +lemma conv_radius_eqI_smallomega_smallo: + fixes f :: "nat \ 'a :: {real_normed_div_algebra, banach}" + assumes "\\. \ > l \ \ < inverse C \ (\n. norm (f n)) \ \(\n. \ ^ n)" + assumes "\\. \ < u \ \ > inverse C \ (\n. norm (f n)) \ o(\n. 
\ ^ n)" + assumes C: "C > 0" and lu: "l > 0" "l < inverse C" "u > inverse C" + shows "conv_radius f = ereal C" +proof (intro antisym) + have "0 < inverse C" + using assms by (auto simp: field_simps) + also have "\ < u" + by fact + finally have "u > 0" by simp + show "conv_radius f \ C" + unfolding conv_radius_altdef le_Liminf_iff + proof safe + fix c :: ereal assume c: "c < C" + hence "max c (inverse u) < ereal C" + using lu C \u > 0\ by (auto simp: field_simps) + from ereal_dense2[OF this] obtain c' where c': "c < ereal c'" "inverse u < c'" "c' < C" + by auto + have "inverse u > 0" + using \u > 0\ by simp + also have "\ < c'" by fact + finally have "c' > 0" . + + have "\\<^sub>F x in sequentially. norm (norm (f x)) \ 1/2 * norm (inverse c' ^ x)" + using landau_o.smallD[OF assms(2)[of "inverse c'"], of "1/2"] c' C lu \c' > 0\ c + by (simp add: field_simps) + thus "\\<^sub>F n in sequentially. c < inverse (ereal (root n (norm (f n))))" + using eventually_gt_at_top[of 0] + proof eventually_elim + case (elim n) + have "norm (f n) \ 1/2 * norm (inverse c' ^ n)" + using c' using elim by (simp add: field_simps) + also have "\ < norm (inverse c' ^ n)" + using \c' > 0\ by simp + finally have "root n (norm (f n)) < root n (norm (inverse c' ^ n))" + using \n > 0\ c' by (intro real_root_less_mono) auto + also have "root n (norm (inverse c' ^ n)) = inverse c'" + using \n > 0\ \c' > 0\ by (simp add: norm_power real_root_power) + finally have "ereal (root n (norm (f n))) < ereal (inverse c')" + by simp + also have "\ = inverse (ereal c')" + using \c' > 0\ by auto + finally have "inverse (inverse (ereal c')) < inverse (ereal (root n (norm (f n))))" + using c' \n > 0\ by (intro ereal_inverse_antimono_strict) auto + also have "inverse (inverse (ereal c')) = ereal c'" + using c' by simp + finally show ?case + using \c < c'\ by simp + qed + qed +next + show "conv_radius f \ C" + proof (rule ccontr) + assume "\(conv_radius f \ C)" + hence "conv_radius f > C" by auto + hence "min (conv_radius f) (inverse l) > ereal C" + using lu C \l > 0\ by (auto simp: field_simps) + from ereal_dense2[OF this] obtain c where c: "C < ereal c" "inverse l > c" "c < conv_radius f" + by auto + hence "c > 0" using lu C + by (simp add: field_simps) + + have "\\<^sub>F n in sequentially. ereal c < inverse (ereal (root n (norm (f n))))" + using less_LiminfD[OF c(3)[unfolded conv_radius_altdef]] by simp + moreover have "\\<^sub>F n in sequentially. norm (f n) \ 2 * norm (inverse c ^ n)" + using landau_omega.smallD[OF assms(1)[of "inverse c"], of 2] c C \c > 0\ lu + by (simp add: field_simps) + ultimately have "eventually (\n. False) sequentially" + using eventually_gt_at_top[of 0] + proof eventually_elim + case (elim n) + have "norm (inverse c ^ n) < 2 * norm (inverse c ^ n)" + using c \n > 0\ C by simp + also have "\ \ norm (f n)" + using elim by simp + finally have "root n (inverse c ^ n) < root n (norm (f n))" + using \n > 0\ by (intro real_root_less_mono) auto + also have "root n (inverse c ^ n) = inverse c" + using \n > 0\ c C by (subst real_root_power) auto + finally have "ereal (inverse c) < ereal (root n (norm (f n)))" + by simp + also have "ereal (inverse c) = inverse (ereal c)" + using c C by auto + finally have "inverse (ereal (root n (norm (f n)))) < inverse (inverse (ereal c))" + using c C + by (intro ereal_inverse_antimono_strict) auto + also have "\ = ereal c" + using c C by auto + also have "\ < inverse (ereal (root n (norm (f n))))" + using elim by simp + finally show False . 
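+      (* Informal recap, added as a comment only: if conv_radius f were larger than C, the
+         root-test characterisation of conv_radius would give root n (norm (f n)) < 1/c
+         eventually, i.e. norm (f n) < (1/c)^n, for some c with C < c < inverse l; but then
+         1/c lies strictly between l and inverse C, so the first assumption yields
+         norm (f n) >= 2 * (1/c)^n eventually -- the contradiction derived above. *)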
+ qed + thus False by simp + qed +qed + +text \ + Finally, we show that the radius of convergence of $W(X)$ is $e^{-1}$ by directly computing + \[\lim\limits_{n\to\infty} \sqrt[n]{|[X^n]\,W(X)|} = e\] + using Stirling's formula for $n!$: +\ +lemma fps_conv_radius_Lambert_W: "fps_conv_radius fps_Lambert_W = exp (-1)" +proof - + have "conv_radius (fps_nth fps_Lambert_W) = conv_radius (\n. exp 1 ^ n * n powr (-3/2) :: real)" + proof (rule conv_radius_cong_bigtheta) + have "fps_nth fps_Lambert_W \ \(\n. (-real n) ^ (n - 1) / fact n)" + by (intro bigthetaI_cong eventually_mono[OF eventually_gt_at_top[of 0]]) + (auto simp: fps_nth_Lambert_W) + also have "(\n. (-real n) ^ (n - 1) / fact n) \ \(\n. real n ^ (n - 1) / fact n)" + by (subst landau_theta.norm_iff [symmetric], subst norm_divide) auto + also have "(\n. (real n) ^ (n - 1) / fact n) \ + \(\n. (real n) ^ (n - 1) / (sqrt (2 * pi * real n) * (real n / exp 1) ^ n))" + by (intro asymp_equiv_imp_bigtheta asymp_equiv_intros fact_asymp_equiv) + also have "(\n. (real n) ^ (n - 1) / (sqrt (2 * pi * real n) * (real n / exp 1) ^ n)) \ + \(\n. exp 1 ^ n * n powr (-3/2))" + by (real_asymp simp: ln_inverse) + finally show "fps_nth fps_Lambert_W \ \(\n. exp 1 ^ n * n powr (-3/2) :: real)" . + qed + also have "\ = inverse (limsup (\n. ereal (root n (exp 1 ^ n * real n powr - (3 / 2)))))" + by (simp add: conv_radius_def) + also have "limsup (\n. ereal (root n (exp 1 ^ n * real n powr - (3 / 2)))) = exp 1" + proof (intro lim_imp_Limsup tendsto_intros) + \ \real\_asymp does not support \<^const>\root\ for a variable basis natively, so + we need to convert it to \<^const>\powr\ first.\ + (* TODO add better "root" support to real_asymp *) + have "(\n. (exp 1 ^ n * real n powr -(3/2)) powr (1 / real n)) \ exp 1" + by real_asymp + also have "?this \ (\x. root x (exp 1 ^ x * real x powr - (3 / 2))) \ exp 1" + by (intro filterlim_cong eventually_mono[OF eventually_gt_at_top[of 0]]) + (auto simp: root_powr_inverse) + finally show \ . + qed auto + finally show ?thesis + by (simp add: fps_conv_radius_def exp_minus) +qed + +end \ No newline at end of file diff --git a/thys/Lambert_W/ROOT b/thys/Lambert_W/ROOT new file mode 100644 --- /dev/null +++ b/thys/Lambert_W/ROOT @@ -0,0 +1,16 @@ +chapter AFP + +session Lambert_W (AFP) = Bernoulli + + options [timeout = 1200] + sessions + "HOL-Library" + "HOL-Computational_Algebra" + "HOL-Real_Asymp" + Stirling_Formula + theories + Lambert_W + Lambert_W_MacLaurin_Series + document_files + "root.tex" + "root.bib" + diff --git a/thys/Lambert_W/document/root.bib b/thys/Lambert_W/document/root.bib new file mode 100644 --- /dev/null +++ b/thys/Lambert_W/document/root.bib @@ -0,0 +1,13 @@ +@article{corless96, + doi = {10.1007/bf02124750}, + url = {https://doi.org/10.1007/bf02124750}, + year = {1996}, + month = dec, + publisher = {Springer Science and Business Media {LLC}}, + volume = {5}, + number = {1}, + pages = {329--359}, + author = {R. M. Corless and G. H. Gonnet and D. E. G. Hare and D. J. Jeffrey and D. E. 
Knuth}, + title = {On the {Lambert $W$} function}, + journal = {Advances in Computational Mathematics} +} diff --git a/thys/Lambert_W/document/root.tex b/thys/Lambert_W/document/root.tex new file mode 100644 --- /dev/null +++ b/thys/Lambert_W/document/root.tex @@ -0,0 +1,40 @@ +\documentclass[11pt,a4paper]{article} +\usepackage{isabelle,isabellesym} +\usepackage{amsfonts, amsmath, amssymb} +\usepackage{pgfplots} + +% this should be the last package used +\usepackage{pdfsetup} + +% urls in roman style, theory text in math-similar italics +\urlstyle{rm} +\isabellestyle{it} + +\begin{document} + +\title{The Lambert $W$ Function on the Reals} +\author{Manuel Eberl} +\maketitle + +\begin{abstract} +The Lambert $W$ function is a multi-valued function defined as the inverse function of $x \mapsto x e^x$. Besides numerous applications in combinatorics, physics, and engineering, it also frequently occurs when solving equations containing both $e^x$ and $x$, or both $x$ and $\log x$. + +This article provides a definition of the two real-valued branches $W_0(x)$ and $W_{-1}(x)$ and proves various properties such as basic identities and inequalities, monotonicity, differentiability, asymptotic expansions, and the MacLaurin series of $W_0(x)$ at $x = 0$. +\end{abstract} + +\tableofcontents +\newpage +\parindent 0pt\parskip 0.5ex + +\input{session} + +\nocite{corless96} +\bibliographystyle{abbrv} +\bibliography{root} + +\end{document} + +%%% Local Variables: +%%% mode: latex +%%% TeX-master: t +%%% End: diff --git a/thys/Matrices_for_ODEs/MTX_Examples.thy b/thys/Matrices_for_ODEs/MTX_Examples.thy new file mode 100644 --- /dev/null +++ b/thys/Matrices_for_ODEs/MTX_Examples.thy @@ -0,0 +1,237 @@ +(* Title: Verification Examples + Author: Jonathan Julián Huerta y Munive, 2020 + Maintainer: Jonathan Julián Huerta y Munive +*) + +section \ Verification examples \ + + +theory MTX_Examples + imports MTX_Flows "Hybrid_Systems_VCs.HS_VC_Spartan" + +begin + + +subsection \ Examples \ + +abbreviation hoareT :: "('a \ bool) \ ('a \ 'a set) \ ('a \ bool) \ bool" + ("PRE_ HP _ POST _" [85,85]85) where "PRE P HP X POST Q \ (P \ |X]Q)" + + +subsubsection \ Verification by uniqueness. \ + +abbreviation mtx_circ :: "2 sq_mtx" ("A") + where "A \ mtx + ([0, 1] # + [-1, 0] # [])" + +abbreviation mtx_circ_flow :: "real \ real^2 \ real^2" ("\") + where "\ t s \ (\ i. if i = 1 then s$1 * cos t + s$2 * sin t else - s$1 * sin t + s$2 * cos t)" + +lemma mtx_circ_flow_eq: "exp (t *\<^sub>R A) *\<^sub>V s = \ t s" + apply(rule local_flow.eq_solution[OF local_flow_sq_mtx_linear, symmetric]) + apply(rule ivp_solsI, simp_all add: sq_mtx_vec_mult_eq vec_eq_iff) + unfolding UNIV_2 using exhaust_2 + by (force intro!: poly_derivatives simp: matrix_vector_mult_def)+ + +lemma mtx_circ: + "PRE(\s. r\<^sup>2 = (s $ 1)\<^sup>2 + (s $ 2)\<^sup>2) + HP x\=(*\<^sub>V) A & G + POST (\s. r\<^sup>2 = (s $ 1)\<^sup>2 + (s $ 2)\<^sup>2)" + apply(subst local_flow.fbox_g_ode[OF local_flow_sq_mtx_linear]) + unfolding mtx_circ_flow_eq by auto + +no_notation mtx_circ ("A") + and mtx_circ_flow ("\") + + +subsubsection \ Flow of diagonalisable matrix. 
\ + +abbreviation mtx_hOsc :: "real \ real \ 2 sq_mtx" ("A") + where "A a b \ mtx + ([0, 1] # + [a, b] # [])" + +abbreviation mtx_chB_hOsc :: "real \ real \ 2 sq_mtx" ("P") + where "P a b \ mtx + ([a, b] # + [1, 1] # [])" + +lemma inv_mtx_chB_hOsc: + "a \ b \ (P a b)\<^sup>-\<^sup>1 = (1/(a - b)) *\<^sub>R mtx + ([ 1, -b] # + [-1, a] # [])" + apply(rule sq_mtx_inv_unique, unfold scaleR_mtx2 times_mtx2) + by (simp add: diff_divide_distrib[symmetric] one_mtx2)+ + +lemma invertible_mtx_chB_hOsc: "a \ b \ mtx_invertible (P a b)" + apply(rule mtx_invertibleI[of _ "(P a b)\<^sup>-\<^sup>1"]) + apply(unfold inv_mtx_chB_hOsc scaleR_mtx2 times_mtx2 one_mtx2) + by (subst sq_mtx_eq_iff, simp add: vector_def frac_diff_eq1)+ + +lemma mtx_hOsc_diagonalizable: + fixes a b :: real + defines "\\<^sub>1 \ (b - sqrt (b^2+4*a))/2" and "\\<^sub>2 \ (b + sqrt (b^2+4*a))/2" + assumes "b\<^sup>2 + a * 4 > 0" and "a \ 0" + shows "A a b = P (-\\<^sub>2/a) (-\\<^sub>1/a) * (\\\\ i. if i = 1 then \\<^sub>1 else \\<^sub>2) * (P (-\\<^sub>2/a) (-\\<^sub>1/a))\<^sup>-\<^sup>1" + unfolding assms apply(subst inv_mtx_chB_hOsc) + using assms(3,4) apply(simp_all add: diag2_eq[symmetric]) + unfolding sq_mtx_times_eq sq_mtx_scaleR_eq UNIV_2 apply(subst sq_mtx_eq_iff) + using exhaust_2 assms by (auto simp: field_simps, auto simp: field_power_simps) + +lemma mtx_hOsc_solution_eq: + fixes a b :: real + defines "\\<^sub>1 \ (b - sqrt (b\<^sup>2+4*a))/2" and "\\<^sub>2 \ (b + sqrt (b\<^sup>2+4*a))/2" + defines "\ t \ mtx ( + [\\<^sub>2*exp(t*\\<^sub>1) - \\<^sub>1*exp(t*\\<^sub>2), exp(t*\\<^sub>2)-exp(t*\\<^sub>1)]# + [a*exp(t*\\<^sub>2) - a*exp(t*\\<^sub>1), \\<^sub>2*exp(t*\\<^sub>2)-\\<^sub>1*exp(t*\\<^sub>1)]#[])" + assumes "b\<^sup>2 + a * 4 > 0" and "a \ 0" + shows "P (-\\<^sub>2/a) (-\\<^sub>1/a) * (\\\\ i. exp (t * (if i=1 then \\<^sub>1 else \\<^sub>2))) * (P (-\\<^sub>2/a) (-\\<^sub>1/a))\<^sup>-\<^sup>1 + = (1/sqrt (b\<^sup>2 + a * 4)) *\<^sub>R (\ t)" + unfolding assms apply(subst inv_mtx_chB_hOsc) + using assms apply(simp_all add: mtx_times_scaleR_commute, subst sq_mtx_eq_iff) + unfolding UNIV_2 sq_mtx_times_eq sq_mtx_scaleR_eq sq_mtx_uminus_eq apply(simp_all add: axis_def) + by (auto simp: field_simps, auto simp: field_power_simps)+ + +lemma local_flow_mtx_hOsc: + fixes a b + defines "\\<^sub>1 \ (b - sqrt (b^2+4*a))/2" and "\\<^sub>2 \ (b + sqrt (b^2+4*a))/2" + defines "\ t \ mtx ( + [\\<^sub>2*exp(t*\\<^sub>1) - \\<^sub>1*exp(t*\\<^sub>2), exp(t*\\<^sub>2)-exp(t*\\<^sub>1)]# + [a*exp(t*\\<^sub>2) - a*exp(t*\\<^sub>1), \\<^sub>2*exp(t*\\<^sub>2)-\\<^sub>1*exp(t*\\<^sub>1)]#[])" + assumes "b\<^sup>2 + a * 4 > 0" and "a \ 0" + shows "local_flow ((*\<^sub>V) (A a b)) UNIV UNIV (\t. 
(*\<^sub>V) ((1/sqrt (b\<^sup>2 + a * 4)) *\<^sub>R \ t))" + unfolding assms using local_flow_sq_mtx_linear[of "A a b"] assms + apply(subst (asm) exp_scaleR_diagonal2[OF invertible_mtx_chB_hOsc mtx_hOsc_diagonalizable]) + apply(simp, simp, simp) + by (subst (asm) mtx_hOsc_solution_eq) simp_all + +lemma overdamped_door_arith: + assumes "b\<^sup>2 + a * 4 > 0" and "a < 0" and "b \ 0" and "t \ 0" and "s1 > 0" + shows "0 \ ((b + sqrt (b\<^sup>2 + 4 * a)) * exp (t * (b - sqrt (b\<^sup>2 + 4 * a)) / 2) / 2 - +(b - sqrt (b\<^sup>2 + 4 * a)) * exp (t * (b + sqrt (b\<^sup>2 + 4 * a)) / 2) / 2) * s1 / sqrt (b\<^sup>2 + a * 4)" +proof(subst diff_divide_distrib[symmetric], simp) + have f0: "s1 / (2 * sqrt (b\<^sup>2 + a * 4)) > 0" (is "s1/?c3 > 0") + using assms(1,5) by simp + have f1: "(b - sqrt (b\<^sup>2 + 4 * a)) < (b + sqrt (b\<^sup>2 + 4 * a))" (is "?c2 < ?c1") + and f2: "(b + sqrt (b\<^sup>2 + 4 * a)) < 0" + using sqrt_ge_absD[of b "b\<^sup>2 + 4 * a"] assms by (force, linarith) + hence f3: "exp (t * ?c2 / 2) \ exp (t * ?c1 / 2)" (is "exp ?t1 \ exp ?t2") + unfolding exp_le_cancel_iff + using assms(4) by (case_tac "t=0", simp_all) + hence "?c2 * exp ?t2 \ ?c2 * exp ?t1" + using f1 f2 real_mult_le_cancel_iff2[of "-?c2" "exp ?t1" "exp ?t2"] by linarith + also have "... < ?c1 * exp ?t1" + using f1 by auto + also have"... \ ?c1 * exp ?t1" + using f1 f2 by auto + ultimately show "0 \ (?c1 * exp ?t1 - ?c2 * exp ?t2) * s1 / ?c3" + using f0 f1 assms(5) by auto +qed + +lemma overdamped_door: + assumes "b\<^sup>2 + a * 4 > 0" and "a < 0" and "b \ 0" and "0 \ t" + shows "PRE (\s. s$1 = 0) + HP (LOOP + (\s. {s. s$1 > 0 \ s$2 = 0}); + (x\=(*\<^sub>V) (A a b) & G on {0..t} UNIV @ 0) + INV (\s. 0 \ s$1)) + POST (\s. 0 \ s $ 1)" + apply(rule fbox_loopI, simp_all add: le_fun_def) + apply(subst local_flow.fbox_g_ode_ivl[OF local_flow_mtx_hOsc[OF assms(1)]]) + using assms apply(simp_all add: le_fun_def fbox_def) + unfolding sq_mtx_scaleR_eq UNIV_2 sq_mtx_vec_mult_eq + by (clarsimp simp: overdamped_door_arith) + + +no_notation mtx_hOsc ("A") + and mtx_chB_hOsc ("P") + + +subsubsection \ Flow of non-diagonalisable matrix. 
\ + +abbreviation mtx_cnst_acc :: "3 sq_mtx" ("K") + where "K \ mtx ( + [0,1,0] # + [0,0,1] # + [0,0,0] # [])" + +lemma pow2_scaleR_mtx_cnst_acc: "(t *\<^sub>R K)\<^sup>2 = mtx ( + [0,0,t\<^sup>2] # + [0,0,0] # + [0,0,0] # [])" + unfolding power2_eq_square apply(subst sq_mtx_eq_iff) + unfolding sq_mtx_times_eq UNIV_3 by auto + +lemma powN_scaleR_mtx_cnst_acc: "n > 2 \ (t *\<^sub>R K)^n = 0" + apply(induct n, simp, case_tac "n \ 2") + apply(subgoal_tac "n = 2", erule ssubst) + unfolding power_Suc2 pow2_scaleR_mtx_cnst_acc sq_mtx_times_eq UNIV_3 + by (auto simp: sq_mtx_eq_iff) + +lemma exp_mtx_cnst_acc: "exp (t *\<^sub>R K) = ((t *\<^sub>R K)\<^sup>2/\<^sub>R 2) + (t *\<^sub>R K) + 1" + unfolding exp_def apply(subst suminf_eq_sum[of 2]) + using powN_scaleR_mtx_cnst_acc by (simp_all add: numeral_2_eq_2) + +lemma exp_mtx_cnst_acc_simps: + "exp (t *\<^sub>R K) $$ 1 $ 1 = 1" "exp (t *\<^sub>R K) $$ 1 $ 2 = t" "exp (t *\<^sub>R K) $$ 1 $ 3 = t^2/2" + "exp (t *\<^sub>R K) $$ 2 $ 1 = 0" "exp (t *\<^sub>R K) $$ 2 $ 2 = 1" "exp (t *\<^sub>R K) $$ 2 $ 3 = t" + "exp (t *\<^sub>R K) $$ 3 $ 1 = 0" "exp (t *\<^sub>R K) $$ 3 $ 2 = 0" "exp (t *\<^sub>R K) $$ 3 $ 3 = 1" + unfolding exp_mtx_cnst_acc one_mtx3 pow2_scaleR_mtx_cnst_acc by simp_all + +lemma exp_mtx_cnst_acc_vec_mult_eq: "exp (t *\<^sub>R K) *\<^sub>V s = + vector [s$3 * t^2/2 + s$2 * t + s$1, s$3 * t + s$2, s$3]" + apply(simp add: sq_mtx_vec_mult_eq vector_def) + unfolding UNIV_3 apply (simp add: exp_mtx_cnst_acc_simps fun_eq_iff) + using exhaust_3 exp_mtx_cnst_acc_simps(7,8,9) by fastforce + +lemma local_flow_mtx_cnst_acc: + "local_flow ((*\<^sub>V) K) UNIV UNIV (\t s. ((t *\<^sub>R K)\<^sup>2/\<^sub>R 2 + (t *\<^sub>R K) + 1) *\<^sub>V s)" + using local_flow_sq_mtx_linear[of K] unfolding exp_mtx_cnst_acc . + +lemma docking_station_arith: + assumes "(d::real) > x" and "v > 0" + shows "(v = v\<^sup>2 * t / (2 * d - 2 * x)) \ (v * t - v\<^sup>2 * t\<^sup>2 / (4 * d - 4 * x) + x = d)" +proof + assume "v = v\<^sup>2 * t / (2 * d - 2 * x)" + hence "v * t = 2 * (d - x)" + using assms by (simp add: eq_divide_eq power2_eq_square) + hence "v * t - v\<^sup>2 * t\<^sup>2 / (4 * d - 4 * x) + x = 2 * (d - x) - 4 * (d - x)\<^sup>2 / (4 * (d - x)) + x" + apply(subst power_mult_distrib[symmetric]) + by (erule ssubst, subst power_mult_distrib, simp) + also have "... = d" + apply(simp only: mult_divide_mult_cancel_left_if) + using assms by (auto simp: power2_eq_square) + finally show "v * t - v\<^sup>2 * t\<^sup>2 / (4 * d - 4 * x) + x = d" . +next + assume "v * t - v\<^sup>2 * t\<^sup>2 / (4 * d - 4 * x) + x = d" + hence "0 = v\<^sup>2 * t\<^sup>2 / (4 * (d - x)) + (d - x) - v * t" + by auto + hence "0 = (4 * (d - x)) * (v\<^sup>2 * t\<^sup>2 / (4 * (d - x)) + (d - x) - v * t)" + by auto + also have "... = v\<^sup>2 * t\<^sup>2 + 4 * (d - x)\<^sup>2 - (4 * (d - x)) * (v * t)" + using assms apply(simp add: distrib_left right_diff_distrib) + apply(subst right_diff_distrib[symmetric])+ + by (simp add: power2_eq_square) + also have "... = (v * t - 2 * (d - x))\<^sup>2" + by (simp only: power2_diff, auto simp: field_simps power2_diff) + finally have "0 = (v * t - 2 * (d - x))\<^sup>2" . + hence "v * t = 2 * (d - x)" + by auto + thus "v = v\<^sup>2 * t / (2 * d - 2 * x)" + apply(subst power2_eq_square, subst mult.assoc) + apply(erule ssubst, subst right_diff_distrib[symmetric]) + using assms by auto +qed + +lemma docking_station: + assumes "d > x\<^sub>0" and "v\<^sub>0 > 0" + shows "PRE (\s. s$1 = x\<^sub>0 \ s$2 = v\<^sub>0) + HP ((3 ::= (\s. 
-(v\<^sub>0^2/(2*(d-x\<^sub>0))))); x\=(*\<^sub>V) K & G) + POST (\s. s$2 = 0 \ s$1 = d)" + apply(clarsimp simp: le_fun_def local_flow.fbox_g_ode[OF local_flow_sq_mtx_linear[of K]]) + unfolding exp_mtx_cnst_acc_vec_mult_eq using assms by (simp add: docking_station_arith) + +no_notation mtx_cnst_acc ("K") + +end \ No newline at end of file diff --git a/thys/Matrices_for_ODEs/MTX_Flows.thy b/thys/Matrices_for_ODEs/MTX_Flows.thy new file mode 100644 --- /dev/null +++ b/thys/Matrices_for_ODEs/MTX_Flows.thy @@ -0,0 +1,291 @@ +(* Title: Affine systems of ODEs + Author: Jonathan Julián Huerta y Munive, 2020 + Maintainer: Jonathan Julián Huerta y Munive +*) + +section \ Affine systems of ODEs \ + +text \Affine systems of ordinary differential equations (ODEs) are those whose vector +fields are linear operators. Broadly speaking, if there are functions $A$ and $B$ such that the +system of ODEs $X'\, t = f\, (X\, t)$ turns into $X'\, t = (A\, t)\cdot (X\, t)+(B\, t)$, then it +is affine. The end goal of this section is to prove that every affine system of ODEs has a unique +solution, and to obtain a characterization of said solution. \ + +theory MTX_Flows + imports SQ_MTX "Hybrid_Systems_VCs.HS_ODEs" + +begin + + +subsection \ Existence and uniqueness for affine systems \ + +definition matrix_continuous_on :: "real set \ (real \ ('a::real_normed_algebra_1)^'n^'m) \ bool" + where "matrix_continuous_on T A = (\t \ T. \\ > 0. \ \ > 0. \\\T. \\ - t\ < \ \ \A \ - A t\\<^sub>o\<^sub>p \ \)" + +lemma continuous_on_matrix_vector_multl: + assumes "matrix_continuous_on T A" + shows "continuous_on T (\t. A t *v s)" +proof(rule continuous_onI, simp add: dist_norm) + fix e t::real assume "0 < e" and "t \ T" + let ?\ = "e/(\(if s = 0 then 1 else s)\)" + have "?\ > 0" + using \0 < e\ by simp + then obtain \ where dHyp: "\ > 0 \ (\\\T. \\ - t\ < \ \ \A \ - A t\\<^sub>o\<^sub>p \ ?\)" + using assms \t \ T\ unfolding dist_norm matrix_continuous_on_def by fastforce + {fix \ assume "\ \ T" and "\\ - t\ < \" + have obs: "?\ * (\s\) = (if s = 0 then 0 else e)" + by auto + have "\A \ *v s - A t *v s\ = \(A \ - A t) *v s\" + by (simp add: matrix_vector_mult_diff_rdistrib) + also have "... \ (\A \ - A t\\<^sub>o\<^sub>p) * (\s\)" + using norm_matrix_le_mult_op_norm by blast + also have "... \ ?\ * (\s\)" + using dHyp \\ \ T\ \\\ - t\ < \\ mult_right_mono norm_ge_zero by blast + finally have "\A \ *v s - A t *v s\ \ e" + by (subst (asm) obs) (metis (mono_tags, hide_lams) \0 < e\ less_eq_real_def order_trans)} + thus "\d>0. \\\T. \\ - t\ < d \ \A \ *v s - A t *v s\ \ e" + using dHyp by blast +qed + +lemma lipschitz_cond_affine: + fixes A :: "real \ 'a::real_normed_algebra_1^'n^'m" and T::"real set" + defines "L \ Sup {\A t\\<^sub>o\<^sub>p |t. t \ T}" + assumes "t \ T" and "bdd_above {\A t\\<^sub>o\<^sub>p |t. t \ T}" + shows "\A t *v x - A t *v y\ \ L * (\x - y\)" +proof- + have obs: "\A t\\<^sub>o\<^sub>p \ Sup {\A t\\<^sub>o\<^sub>p |t. t \ T}" + apply(rule cSup_upper) + using continuous_on_subset assms by (auto simp: dist_norm) + have "\A t *v x - A t *v y\ = \A t *v (x - y)\" + by (simp add: matrix_vector_mult_diff_distrib) + also have "... \ (\A t\\<^sub>o\<^sub>p) * (\x - y\)" + using norm_matrix_le_mult_op_norm by blast + also have "... \ Sup {\A t\\<^sub>o\<^sub>p |t. t \ T} * (\x - y\)" + using obs mult_right_mono norm_ge_zero by blast + finally show "\A t *v x - A t *v y\ \ L * (\x - y\)" + unfolding assms . 
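+  (* Comment added for readability, not part of the original script: the chain above is
+     norm (A t *v (x - y)) <= (operator norm of A t) * norm (x - y) <= L * norm (x - y),
+     with L the supremum of the operator norms over T.  In the autonomous case, where A is
+     constant on the (nonempty) set T, that supremum is simply the operator norm of that
+     one matrix. *)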
+qed + +lemma local_lipschitz_affine: + fixes A :: "real \ 'a::real_normed_algebra_1^'n^'m" + assumes "open T" and "open S" + and Ahyp: "\\ \. \ > 0 \ \ \ T \ cball \ \ \ T \ bdd_above {\A t\\<^sub>o\<^sub>p |t. t \ cball \ \}" + shows "local_lipschitz T S (\t s. A t *v s + B t)" +proof(unfold local_lipschitz_def lipschitz_on_def, clarsimp) + fix s t assume "s \ S" and "t \ T" + then obtain e1 e2 where "cball t e1 \ T" and "cball s e2 \ S" and "min e1 e2 > 0" + using open_cballE[OF _ \open T\] open_cballE[OF _ \open S\] by force + hence obs: "cball t (min e1 e2) \ T" + by auto + let ?L = "Sup {\A \\\<^sub>o\<^sub>p |\. \ \ cball t (min e1 e2)}" + have "\A t\\<^sub>o\<^sub>p \ {\A \\\<^sub>o\<^sub>p |\. \ \ cball t (min e1 e2)}" + using \min e1 e2 > 0\ by auto + moreover have bdd: "bdd_above {\A \\\<^sub>o\<^sub>p |\. \ \ cball t (min e1 e2)}" + by (rule Ahyp, simp only: \min e1 e2 > 0\, simp_all add: \t \ T\ obs) + moreover have "Sup {\A \\\<^sub>o\<^sub>p |\. \ \ cball t (min e1 e2)} \ 0" + apply(rule order.trans[OF op_norm_ge_0[of "A t"]]) + by (rule cSup_upper[OF calculation]) + moreover have "\x\cball s (min e1 e2) \ S. \y\cball s (min e1 e2) \ S. + \\\cball t (min e1 e2) \ T. dist (A \ *v x) (A \ *v y) \ ?L * dist x y" + apply(clarify, simp only: dist_norm, rule lipschitz_cond_affine) + using \min e1 e2 > 0\ bdd by auto + ultimately show "\e>0. \L. \t\cball t e \ T. 0 \ L \ + (\x\cball s e \ S. \y\cball s e \ S. dist (A t *v x) (A t *v y) \ L * dist x y)" + using \min e1 e2 > 0\ by blast +qed + +lemma picard_lindeloef_affine: + fixes A :: "real \ 'a::{banach,real_normed_algebra_1,heine_borel}^'n^'n" + assumes Ahyp: "matrix_continuous_on T A" + and "\\ \. \ \ T \ \ > 0 \ bdd_above {\A t\\<^sub>o\<^sub>p |t. dist \ t \ \}" + and Bhyp: "continuous_on T B" and "open S" + and "t\<^sub>0 \ T" and Thyp: "open T" "is_interval T" + shows "picard_lindeloef (\ t s. A t *v s + B t) T S t\<^sub>0" + apply (unfold_locales, simp_all add: assms, clarsimp) + apply (rule continuous_on_add[OF continuous_on_matrix_vector_multl[OF Ahyp] Bhyp]) + by (rule local_lipschitz_affine) (simp_all add: assms) + +lemma picard_lindeloef_autonomous_affine: + fixes A :: "'a::{banach,real_normed_field,heine_borel}^'n^'n" + shows "picard_lindeloef (\ t s. A *v s + B) UNIV UNIV t\<^sub>0" + using picard_lindeloef_affine[of _ "\t. A" "\t. B"] + unfolding matrix_continuous_on_def by (simp only: diff_self op_norm0, auto) + +lemma picard_lindeloef_autonomous_linear: + fixes A :: "'a::{banach,real_normed_field,heine_borel}^'n^'n" + shows "picard_lindeloef (\ t. (*v) A) UNIV UNIV t\<^sub>0" + using picard_lindeloef_autonomous_affine[of A 0] by force + +lemmas unique_sol_autonomous_affine = picard_lindeloef.unique_solution[OF + picard_lindeloef_autonomous_affine _ _ funcset_UNIV UNIV_I _ _ funcset_UNIV UNIV_I] + +lemmas unique_sol_autonomous_linear = picard_lindeloef.unique_solution[OF + picard_lindeloef_autonomous_linear _ _ funcset_UNIV UNIV_I _ _ funcset_UNIV UNIV_I] + + +subsection \ Flow for affine systems \ + + +subsubsection \ Derivative rules for square matrices \ + +lemma has_derivative_exp_scaleRl[derivative_intros]: + fixes f::"real \ real" (* by Fabian Immler and Johannes Hölzl *) + assumes "D f \ f' at t within T" + shows "D (\t. exp (f t *\<^sub>R A)) \ (\h. f' h *\<^sub>R (exp (f t *\<^sub>R A) * A)) at t within T" +proof - + have "bounded_linear f'" + using assms by auto + then obtain m where obs: "f' = (\h. 
h * m)" + using real_bounded_linear by blast + thus ?thesis + using vector_diff_chain_within[OF _ exp_scaleR_has_vector_derivative_right] + assms obs by (auto simp: has_vector_derivative_def comp_def) +qed + +lemma has_vderiv_on_exp_scaleRl: + assumes "D f = f' on T" + shows "D (\x. exp (f x *\<^sub>R A)) = (\x. f' x *\<^sub>R exp (f x *\<^sub>R A) * A) on T" + using assms unfolding has_vderiv_on_def has_vector_derivative_def apply clarsimp + by (rule has_derivative_exp_scaleRl, auto simp: fun_eq_iff) + +lemma vderiv_on_exp_scaleRlI[poly_derivatives]: + assumes "D f = f' on T" and "g' = (\x. f' x *\<^sub>R exp (f x *\<^sub>R A) * A)" + shows "D (\x. exp (f x *\<^sub>R A)) = g' on T" + using has_vderiv_on_exp_scaleRl assms by simp + +lemma has_derivative_mtx_ith[derivative_intros]: + fixes t::real and T :: "real set" + defines "t\<^sub>0 \ netlimit (at t within T)" + assumes "D A \ (\h. h *\<^sub>R A' t) at t within T" + shows "D (\t. A t $$ i) \ (\h. h *\<^sub>R A' t $$ i) at t within T" + using assms unfolding has_derivative_def apply safe + apply(force simp: bounded_linear_def bounded_linear_axioms_def) + apply(rule_tac F="\\. (A \ - A t\<^sub>0 - (\ - t\<^sub>0) *\<^sub>R A' t) /\<^sub>R (\\ - t\<^sub>0\)" in tendsto_zero_norm_bound) + by (clarsimp, rule mult_left_mono, metis (no_types, lifting) norm_column_le_norm + sq_mtx_minus_ith sq_mtx_scaleR_ith) simp_all + +lemmas has_derivative_mtx_vec_mult[derivative_intros] = + bounded_bilinear.FDERIV[OF bounded_bilinear_sq_mtx_vec_mult] + +lemma vderiv_mtx_vec_mult_intro[poly_derivatives]: + assumes "D u = u' on T" and "D A = A' on T" + and "g = (\t. A t *\<^sub>V u' t + A' t *\<^sub>V u t )" + shows "D (\t. A t *\<^sub>V u t) = g on T" + using assms unfolding has_vderiv_on_def has_vector_derivative_def apply clarsimp + apply(erule_tac x=x in ballE, simp_all)+ + apply(rule derivative_eq_intros(142)) + by (auto simp: fun_eq_iff mtx_vec_scaleR_commute pth_6 scaleR_mtx_vec_assoc) + +lemmas has_vderiv_on_ivl_integral = ivl_integral_has_vderiv_on[OF vderiv_on_continuous_on] + +declare has_vderiv_on_ivl_integral [poly_derivatives] + +lemma has_derivative_mtx_vec_multl[derivative_intros]: + assumes "\ i j. D (\t. (A t) $$ i $ j) \ (\\. \ *\<^sub>R (A' t) $$ i $ j) (at t within T)" + shows "D (\t. A t *\<^sub>V x) \ (\\. \ *\<^sub>R (A' t) *\<^sub>V x) at t within T" + unfolding sq_mtx_vec_mult_sum_cols + apply(rule_tac f'1="\i \. \ *\<^sub>R (x $ i *\<^sub>R \\\ i (A' t))" in derivative_eq_intros(10)) + apply(simp_all add: scaleR_right.sum) + apply(rule_tac g'1="\\. \ *\<^sub>R \\\ i (A' t)" in derivative_eq_intros(4), simp_all add: mult.commute) + using assms unfolding sq_mtx_col_def column_def apply(transfer, simp) + apply(rule has_derivative_vec_lambda) + by (simp add: scaleR_vec_def) + +lemma continuous_on_mtx_vec_multr: "continuous_on S ((*\<^sub>V) A)" + by transfer (simp add: matrix_vector_mult_linear_continuous_on) + +\ \Automatically generated derivative rules from this subsubsection \ + +thm derivative_eq_intros(140,141,142,143) + + +subsubsection \ Existence and uniqueness with square matrices \ + +text \Finally, we can use the @{term exp} operation to characterize the general solutions for affine +systems of ODEs. We show that they satisfy the @{term "local_flow"} locale.\ + +lemma continuous_on_sq_mtx_vec_multl: + fixes A :: "real \ ('n::finite) sq_mtx" + assumes "continuous_on T A" + shows "continuous_on T (\t. A t *\<^sub>V s)" +proof- + have "matrix_continuous_on T (\t. 
to_vec (A t))" + using assms by (force simp: continuous_on_iff dist_norm norm_sq_mtx_def matrix_continuous_on_def) + hence "continuous_on T (\t. to_vec (A t) *v s)" + by (rule continuous_on_matrix_vector_multl) + thus ?thesis + by transfer +qed + +lemmas continuous_on_affine = continuous_on_add[OF continuous_on_sq_mtx_vec_multl] + +lemma local_lipschitz_sq_mtx_affine: + fixes A :: "real \ ('n::finite) sq_mtx" + assumes "continuous_on T A" "open T" "open S" + shows "local_lipschitz T S (\t s. A t *\<^sub>V s + B t)" +proof- + have obs: "\\ \. 0 < \ \ \ \ T \ cball \ \ \ T \ bdd_above {\A t\ |t. t \ cball \ \}" + by (rule bdd_above_norm_cont_comp, rule continuous_on_subset[OF assms(1)], simp_all) + hence "\\ \. 0 < \ \ \ \ T \ cball \ \ \ T \ bdd_above {\to_vec (A t)\\<^sub>o\<^sub>p |t. t \ cball \ \}" + by (simp add: norm_sq_mtx_def) + hence "local_lipschitz T S (\t s. to_vec (A t) *v s + B t)" + using local_lipschitz_affine[OF assms(2,3), of "\t. to_vec (A t)"] by force + thus ?thesis + by transfer +qed + +lemma picard_lindeloef_sq_mtx_affine: + assumes "continuous_on T A" and "continuous_on T B" + and "t\<^sub>0 \ T" "is_interval T" "open T" and "open S" + shows "picard_lindeloef (\t s. A t *\<^sub>V s + B t) T S t\<^sub>0" + apply(unfold_locales, simp_all add: assms, clarsimp) + using continuous_on_affine assms apply blast + by (rule local_lipschitz_sq_mtx_affine, simp_all add: assms) + +lemmas sq_mtx_unique_sol_autonomous_affine = picard_lindeloef.unique_solution[OF + picard_lindeloef_sq_mtx_affine[OF + continuous_on_const + continuous_on_const + UNIV_I is_interval_univ + open_UNIV open_UNIV] + _ _ funcset_UNIV UNIV_I _ _ funcset_UNIV UNIV_I] + +lemma has_vderiv_on_sq_mtx_linear: + "D (\t. exp ((t - t\<^sub>0) *\<^sub>R A) *\<^sub>V s) = (\t. A *\<^sub>V (exp ((t - t\<^sub>0) *\<^sub>R A) *\<^sub>V s)) on {t\<^sub>0--t}" + by (rule poly_derivatives)+ (auto simp: exp_times_scaleR_commute sq_mtx_times_vec_assoc) + +lemma has_vderiv_on_sq_mtx_affine: + fixes t\<^sub>0::real and A :: "('a::finite) sq_mtx" + defines "lSol c t \ exp ((c * (t - t\<^sub>0)) *\<^sub>R A)" + shows "D (\t. lSol 1 t *\<^sub>V s + lSol 1 t *\<^sub>V (\\<^sub>t\<^sub>0\<^sup>t (lSol (-1) \ *\<^sub>V B) \\)) = + (\t. A *\<^sub>V (lSol 1 t *\<^sub>V s + lSol 1 t *\<^sub>V (\\<^sub>t\<^sub>0\<^sup>t (lSol (-1) \ *\<^sub>V B) \\)) + B) on {t\<^sub>0--t}" + unfolding assms apply(simp only: mult.left_neutral mult_minus1) + apply(rule poly_derivatives, (force)?, (force)?, (force)?, (force)?)+ + by (simp add: mtx_vec_mult_add_rdistl sq_mtx_times_vec_assoc[symmetric] + exp_minus_inverse exp_times_scaleR_commute mult_exp_exp scale_left_distrib[symmetric]) + +lemma autonomous_linear_sol_is_exp: + assumes "D X = (\t. A *\<^sub>V X t) on {t\<^sub>0--t}" and "X t\<^sub>0 = s" + shows "X t = exp ((t - t\<^sub>0) *\<^sub>R A) *\<^sub>V s" + apply(rule sq_mtx_unique_sol_autonomous_affine[of X A 0, OF _ \X t\<^sub>0 = s\]) + using assms has_vderiv_on_sq_mtx_linear by force+ + +lemma autonomous_affine_sol_is_exp_plus_int: + assumes "D X = (\t. A *\<^sub>V X t + B) on {t\<^sub>0--t}" and "X t\<^sub>0 = s" + shows "X t = exp ((t - t\<^sub>0) *\<^sub>R A) *\<^sub>V s + exp ((t - t\<^sub>0) *\<^sub>R A) *\<^sub>V (\\<^sub>t\<^sub>0\<^sup>t(exp (- (\ - t\<^sub>0) *\<^sub>R A) *\<^sub>V B)\\)" + apply(rule sq_mtx_unique_sol_autonomous_affine[OF assms]) + using has_vderiv_on_sq_mtx_affine by force+ + +lemma local_flow_sq_mtx_linear: "local_flow ((*\<^sub>V) A) UNIV UNIV (\t s. 
exp (t *\<^sub>R A) *\<^sub>V s)" + unfolding local_flow_def local_flow_axioms_def apply safe + using picard_lindeloef_sq_mtx_affine[of _ "\t. A" "\t. 0"] apply force + using has_vderiv_on_sq_mtx_linear[of 0] by auto + +lemma local_flow_sq_mtx_affine: "local_flow (\s. A *\<^sub>V s + B) UNIV UNIV + (\t s. exp (t *\<^sub>R A) *\<^sub>V s + exp (t *\<^sub>R A) *\<^sub>V (\\<^sub>0\<^sup>t(exp (- \ *\<^sub>R A) *\<^sub>V B)\\))" + unfolding local_flow_def local_flow_axioms_def apply safe + using picard_lindeloef_sq_mtx_affine[of _ "\t. A" "\t. B"] apply force + using has_vderiv_on_sq_mtx_affine[of 0 A] by auto + + +end \ No newline at end of file diff --git a/thys/Matrices_for_ODEs/MTX_Norms.thy b/thys/Matrices_for_ODEs/MTX_Norms.thy new file mode 100644 --- /dev/null +++ b/thys/Matrices_for_ODEs/MTX_Norms.thy @@ -0,0 +1,285 @@ +(* Title: Matrix norms + Author: Jonathan Julián Huerta y Munive, 2020 + Maintainer: Jonathan Julián Huerta y Munive +*) + +section \ Matrix norms \ + +text \ Here, we explore some properties about the operator and the maximum norms for matrices. \ + +theory MTX_Norms + imports MTX_Preliminaries + +begin + + +subsection\ Matrix operator norm \ + +abbreviation op_norm :: "('a::real_normed_algebra_1)^'n^'m \ real" ("(1\_\\<^sub>o\<^sub>p)" [65] 61) + where "\A\\<^sub>o\<^sub>p \ onorm (\x. A *v x)" + +lemma norm_matrix_bound: + fixes A :: "('a::real_normed_algebra_1)^'n^'m" + shows "\x\ = 1 \ \A *v x\ \ \(\ i j. \A $ i $ j\) *v 1\" +proof- + fix x :: "('a, 'n) vec" assume "\x\ = 1" + hence xi_le1:"\i. \x $ i\ \ 1" + by (metis Finite_Cartesian_Product.norm_nth_le) + {fix j::'m + have "\(\i\UNIV. A $ j $ i * x $ i)\ \ (\i\UNIV. \A $ j $ i * x $ i\)" + using norm_sum by blast + also have "... \ (\i\UNIV. (\A $ j $ i\) * (\x $ i\))" + by (simp add: norm_mult_ineq sum_mono) + also have "... \ (\i\UNIV. (\A $ j $ i\) * 1)" + using xi_le1 by (simp add: sum_mono mult_left_le) + finally have "\(\i\UNIV. A $ j $ i * x $ i)\ \ (\i\UNIV. (\A $ j $ i\) * 1)" by simp} + hence "\j. \(A *v x) $ j\ \ ((\ i1 i2. \A $ i1 $ i2\) *v 1) $ j" + unfolding matrix_vector_mult_def by simp + hence "(\j\UNIV. (\(A *v x) $ j\)\<^sup>2) \ (\j\UNIV. (\((\ i1 i2. \A $ i1 $ i2\) *v 1) $ j\)\<^sup>2)" + by (metis (mono_tags, lifting) norm_ge_zero power2_abs power_mono real_norm_def sum_mono) + thus "\A *v x\ \ \(\ i j. \A $ i $ j\) *v 1\" + unfolding norm_vec_def L2_set_def by simp +qed + +lemma onorm_set_proptys: + fixes A :: "('a::real_normed_algebra_1)^'n^'m" + shows "bounded (range (\x. (\A *v x\) / (\x\)))" + and "bdd_above (range (\x. (\A *v x\) / (\x\)))" + and "(range (\x. (\A *v x\) / (\x\))) \ {}" + unfolding bounded_def bdd_above_def image_def dist_real_def + apply(rule_tac x=0 in exI) + by (rule_tac x="\(\ i j. \A $ i $ j\) *v 1\" in exI, clarsimp, + subst mult_norm_matrix_sgn_eq[symmetric], clarsimp, + rule_tac x="sgn _" in norm_matrix_bound, simp add: norm_sgn)+ force + +lemma op_norm_set_proptys: + fixes A :: "('a::real_normed_algebra_1)^'n^'m" + shows "bounded {\A *v x\ | x. \x\ = 1}" + and "bdd_above {\A *v x\ | x. \x\ = 1}" + and "{\A *v x\ | x. \x\ = 1} \ {}" + unfolding bounded_def bdd_above_def apply safe + apply(rule_tac x=0 in exI, rule_tac x="\(\ i j. \A $ i $ j\) *v 1\" in exI) + apply(force simp: norm_matrix_bound dist_real_def) + apply(rule_tac x="\(\ i j. \A $ i $ j\) *v 1\" in exI, force simp: norm_matrix_bound) + using ex_norm_eq_1 by blast + +lemma op_norm_def: "\A\\<^sub>o\<^sub>p = Sup {\A *v x\ | x. 
\x\ = 1}" + apply(rule antisym[OF onorm_le cSup_least[OF op_norm_set_proptys(3)]]) + apply(case_tac "x = 0", simp) + apply(subst mult_norm_matrix_sgn_eq[symmetric], simp) + apply(rule cSup_upper[OF _ op_norm_set_proptys(2)]) + apply(force simp: norm_sgn) + unfolding onorm_def + apply(rule cSup_upper[OF _ onorm_set_proptys(2)]) + by (simp add: image_def, clarsimp) (metis div_by_1) + +lemma norm_matrix_le_op_norm: "\x\ = 1 \ \A *v x\ \ \A\\<^sub>o\<^sub>p" + apply(unfold onorm_def, rule cSup_upper[OF _ onorm_set_proptys(2)]) + unfolding image_def by (clarsimp, rule_tac x=x in exI) simp + +lemma op_norm_ge_0: "0 \ \A\\<^sub>o\<^sub>p" + using ex_norm_eq_1 norm_ge_zero norm_matrix_le_op_norm basic_trans_rules(23) by blast + +lemma norm_sgn_le_op_norm: "\A *v sgn x\ \ \A\\<^sub>o\<^sub>p" + by (cases "x=0", simp_all add: norm_sgn norm_matrix_le_op_norm op_norm_ge_0) + +lemma norm_matrix_le_mult_op_norm: "\A *v x\ \ (\A\\<^sub>o\<^sub>p) * (\x\)" +proof- + have "\A *v x\ = (\A *v sgn x\) * (\x\)" + by(simp add: mult_norm_matrix_sgn_eq) + also have "... \ (\A\\<^sub>o\<^sub>p) * (\x\)" + using norm_sgn_le_op_norm[of A] by (simp add: mult_mono') + finally show ?thesis by simp +qed + +lemma blin_matrix_vector_mult: "bounded_linear ((*v) A)" for A :: "('a::real_normed_algebra_1)^'n^'m" + by (unfold_locales) (auto intro: norm_matrix_le_mult_op_norm simp: + mult.commute matrix_vector_right_distrib vector_scaleR_commute) + +lemma op_norm_eq_0: "(\A\\<^sub>o\<^sub>p = 0) = (A = 0)" for A :: "('a::real_normed_field)^'n^'m" + unfolding onorm_eq_0[OF blin_matrix_vector_mult] using matrix_axis_0[of 1 A] by fastforce + +lemma op_norm0: "\(0::('a::real_normed_field)^'n^'m)\\<^sub>o\<^sub>p = 0" + using op_norm_eq_0[of 0] by simp + +lemma op_norm_triangle: "\A + B\\<^sub>o\<^sub>p \ (\A\\<^sub>o\<^sub>p) + (\B\\<^sub>o\<^sub>p)" + using onorm_triangle[OF blin_matrix_vector_mult[of A] blin_matrix_vector_mult[of B]] + matrix_vector_mult_add_rdistrib[symmetric, of A _ B] by simp + +lemma op_norm_scaleR: "\c *\<^sub>R A\\<^sub>o\<^sub>p = \c\ * (\A\\<^sub>o\<^sub>p)" + unfolding onorm_scaleR[OF blin_matrix_vector_mult, symmetric] scaleR_vector_assoc .. + +lemma op_norm_matrix_matrix_mult_le: "\A ** B\\<^sub>o\<^sub>p \ (\A\\<^sub>o\<^sub>p) * (\B\\<^sub>o\<^sub>p)" +proof(rule onorm_le) + have "0 \ (\A\\<^sub>o\<^sub>p)" + by(rule onorm_pos_le[OF blin_matrix_vector_mult]) + fix x have "\A ** B *v x\ = \A *v (B *v x)\" + by (simp add: matrix_vector_mul_assoc) + also have "... \ (\A\\<^sub>o\<^sub>p) * (\B *v x\)" + by (simp add: norm_matrix_le_mult_op_norm[of _ "B *v x"]) + also have "... \ (\A\\<^sub>o\<^sub>p) * ((\B\\<^sub>o\<^sub>p) * (\x\))" + using norm_matrix_le_mult_op_norm[of B x] \0 \ (\A\\<^sub>o\<^sub>p)\ mult_left_mono by blast + finally show "\A ** B *v x\ \ (\A\\<^sub>o\<^sub>p) * (\B\\<^sub>o\<^sub>p) * (\x\)" + by simp +qed + +lemma norm_matrix_vec_mult_le_transpose: + "\x\ = 1 \ (\A *v x\) \ sqrt (\transpose A ** A\\<^sub>o\<^sub>p) * (\x\)" for A :: "real^'n^'n" +proof- + assume "\x\ = 1" + have "(\A *v x\)\<^sup>2 = (A *v x) \ (A *v x)" + using dot_square_norm[of "(A *v x)"] by simp + also have "... = x \ (transpose A *v (A *v x))" + using vec_mult_inner by blast + also have "... \ (\x\) * (\transpose A *v (A *v x)\)" + using norm_cauchy_schwarz by blast + also have "... 
\ (\transpose A ** A\\<^sub>o\<^sub>p) * (\x\)^2" + apply(subst matrix_vector_mul_assoc) + using norm_matrix_le_mult_op_norm[of "transpose A ** A" x] + by (simp add: \\x\ = 1\) + finally have "((\A *v x\))^2 \ (\transpose A ** A\\<^sub>o\<^sub>p) * (\x\)^2" + by linarith + thus "(\A *v x\) \ sqrt ((\transpose A ** A\\<^sub>o\<^sub>p)) * (\x\)" + by (simp add: \\x\ = 1\ real_le_rsqrt) +qed + +lemma op_norm_le_sum_column: "\A\\<^sub>o\<^sub>p \ (\i\UNIV. \column i A\)" for A :: "real^'n^'m" +proof(unfold op_norm_def, rule cSup_least[OF op_norm_set_proptys(3)], clarsimp) + fix x :: "real^'n" assume x_def:"\x\ = 1" + hence x_hyp:"\i. \x $ i\ \ 1" + by (simp add: norm_bound_component_le_cart) + have "(\A *v x\) = \(\i\UNIV. x $ i *s column i A)\" + by(subst matrix_mult_sum[of A], simp) + also have "... \ (\i\UNIV. \x $ i *s column i A\)" + by (simp add: sum_norm_le) + also have "... = (\i\UNIV. (\x $ i\) * (\column i A\))" + by (simp add: mult_norm_matrix_sgn_eq) + also have "... \ (\i\UNIV. \column i A\)" + using x_hyp by (simp add: mult_left_le_one_le sum_mono) + finally show "\A *v x\ \ (\i\UNIV. \column i A\)" . +qed + +lemma op_norm_le_transpose: "\A\\<^sub>o\<^sub>p \ \transpose A\\<^sub>o\<^sub>p" for A :: "real^'n^'n" +proof- + have obs:"\x. \x\ = 1 \ (\A *v x\) \ sqrt ((\transpose A ** A\\<^sub>o\<^sub>p)) * (\x\)" + using norm_matrix_vec_mult_le_transpose by blast + have "(\A\\<^sub>o\<^sub>p) \ sqrt ((\transpose A ** A\\<^sub>o\<^sub>p))" + using obs apply(unfold op_norm_def) + by (rule cSup_least[OF op_norm_set_proptys(3)]) clarsimp + hence "((\A\\<^sub>o\<^sub>p))\<^sup>2 \ (\transpose A ** A\\<^sub>o\<^sub>p)" + using power_mono[of "(\A\\<^sub>o\<^sub>p)" _ 2] op_norm_ge_0 + by (metis not_le real_less_lsqrt) + also have "... \ (\transpose A\\<^sub>o\<^sub>p) * (\A\\<^sub>o\<^sub>p)" + using op_norm_matrix_matrix_mult_le by blast + finally have "((\A\\<^sub>o\<^sub>p))\<^sup>2 \ (\transpose A\\<^sub>o\<^sub>p) * (\A\\<^sub>o\<^sub>p)" + by linarith + thus "(\A\\<^sub>o\<^sub>p) \ (\transpose A\\<^sub>o\<^sub>p)" + using sq_le_cancel[of "(\A\\<^sub>o\<^sub>p)"] op_norm_ge_0 by metis +qed + + +subsection\ Matrix maximum norm \ + +abbreviation max_norm :: "real^'n^'m \ real" ("(1\_\\<^sub>m\<^sub>a\<^sub>x)" [65] 61) + where "\A\\<^sub>m\<^sub>a\<^sub>x \ Max (abs ` (entries A))" + +lemma max_norm_def: "\A\\<^sub>m\<^sub>a\<^sub>x = Max {\A $ i $ j\|i j. i\UNIV \ j\UNIV}" + by (simp add: image_def, rule arg_cong[of _ _ Max], blast) + +lemma max_norm_set_proptys: "finite {\A $ i $ j\ |i j. i \ UNIV \ j \ UNIV}" (is "finite ?X") +proof- + have "\i. finite {\A $ i $ j\ | j. j \ UNIV}" + using finite_Atleast_Atmost_nat by fastforce + hence "finite (\i\UNIV. {\A $ i $ j\ | j. j \ UNIV})" (is "finite ?Y") + using finite_class.finite_UNIV by blast + also have "?X \ ?Y" + by auto + ultimately show ?thesis + using finite_subset by blast +qed + +lemma max_norm_ge_0: "0 \ \A\\<^sub>m\<^sub>a\<^sub>x" + unfolding max_norm_def + apply(rule order.trans[OF abs_ge_zero[of "A $ _ $ _"] Max_ge]) + using max_norm_set_proptys by auto + +lemma op_norm_le_max_norm: + fixes A :: "real^('n::finite)^('m::finite)" + shows "\A\\<^sub>o\<^sub>p \ real CARD('m) * real CARD('n) * (\A\\<^sub>m\<^sub>a\<^sub>x)" + apply(rule onorm_le_matrix_component) + unfolding max_norm_def by(rule Max_ge[OF max_norm_set_proptys]) force + +lemma sqrt_Sup_power2_eq_Sup_abs: + "finite A \ A \ {} \ sqrt (Sup {(f i)\<^sup>2 |i. i \ A}) = Sup {\f i\ |i. 
i \ A}" +proof(rule sym) + assume assms: "finite A" "A \ {}" + then obtain i where i_def: "i \ A \ Sup {(f i)\<^sup>2|i. i \ A} = (f i)^2" + using cSup_finite_ex[of "{(f i)\<^sup>2|i. i \ A}"] by auto + hence lhs: "sqrt (Sup {(f i)\<^sup>2 |i. i \ A}) = \f i\" + by simp + have "finite {(f i)\<^sup>2|i. i \ A}" + using assms by simp + hence "\j\A. (f j)\<^sup>2 \ (f i)\<^sup>2" + using i_def cSup_upper[of _ "{(f i)\<^sup>2 |i. i \ A}"] by force + hence "\j\A. \f j\ \ \f i\" + using abs_le_square_iff by blast + also have "\f i\ \ {\f i\ |i. i \ A}" + using i_def by auto + ultimately show "Sup {\f i\ |i. i \ A} = sqrt (Sup {(f i)\<^sup>2 |i. i \ A})" + using cSup_mem_eq[of "\f i\" "{\f i\ |i. i \ A}"] lhs by auto +qed + +lemma sqrt_Max_power2_eq_max_abs: + "finite A \ A \ {} \ sqrt (Max {(f i)\<^sup>2|i. i \ A}) = Max {\f i\ |i. i \ A}" + apply(subst cSup_eq_Max[symmetric], simp_all)+ + using sqrt_Sup_power2_eq_Sup_abs . + +lemma op_norm_diag_mat_eq: "\diag_mat f\\<^sub>o\<^sub>p = Max {\f i\ |i. i \ UNIV}" (is "_ = Max ?A") +proof(unfold op_norm_def) + have obs: "\x i. (f i)\<^sup>2 * (x $ i)\<^sup>2 \ Max {(f i)\<^sup>2|i. i \ UNIV} * (x $ i)\<^sup>2" + apply(rule mult_right_mono[OF _ zero_le_power2]) + using le_max_image_of_finite[of "\i. (f i)^2"] by simp + {fix r assume "r \ {\diag_mat f *v x\ |x. \x\ = 1}" + then obtain x where x_def: "\diag_mat f *v x\ = r \ \x\ = 1" + by blast + hence "r\<^sup>2 = (\i\UNIV. (f i)\<^sup>2 * (x $ i)\<^sup>2)" + unfolding norm_vec_def L2_set_def matrix_vector_mul_diag_mat + apply (simp add: power_mult_distrib) + by (metis (no_types, lifting) x_def norm_ge_zero real_sqrt_ge_0_iff real_sqrt_pow2) + also have "... \ (Max {(f i)\<^sup>2|i. i \ UNIV}) * (\i\UNIV. (x $ i)\<^sup>2)" + using obs[of _ x] by (simp add: sum_mono sum_distrib_left) + also have "... = Max {(f i)\<^sup>2|i. i \ UNIV}" + using x_def by (simp add: norm_vec_def L2_set_def) + finally have "r \ sqrt (Max {(f i)\<^sup>2|i. i \ UNIV})" + using x_def real_le_rsqrt by blast + hence "r \ Max ?A" + by (subst (asm) sqrt_Max_power2_eq_max_abs[of UNIV f], simp_all)} + hence 1: "\x\{\diag_mat f *v x\ |x. \x\ = 1}. x \ Max ?A" + unfolding diag_mat_def by blast + obtain i where i_def: "Max ?A = \diag_mat f *v \ i\" + using cMax_finite_ex[of ?A] by force + hence 2: "\x\{\diag_mat f *v x\ |x. \x\ = 1}. Max ?A \ x" + by (metis (mono_tags, lifting) abs_1 mem_Collect_eq norm_axis_eq order_refl real_norm_def) + show "Sup {\diag_mat f *v x\ |x. \x\ = 1} = Max ?A" + by (rule cSup_eq[OF 1 2]) +qed + +lemma op_max_norms_eq_at_diag: "\diag_mat f\\<^sub>o\<^sub>p = \diag_mat f\\<^sub>m\<^sub>a\<^sub>x" +proof(rule antisym) + have "{\f i\ |i. i \ UNIV} \ {\diag_mat f $ i $ j\ |i j. i \ UNIV \ j \ UNIV}" + by (smt Collect_mono diag_mat_vec_nth_simps(1)) + thus "\diag_mat f\\<^sub>o\<^sub>p \ \diag_mat f\\<^sub>m\<^sub>a\<^sub>x" + unfolding op_norm_diag_mat_eq max_norm_def + by (rule Max.subset_imp) (blast, simp only: finite_image_of_finite2) +next + have "Sup {\diag_mat f $ i $ j\ |i j. i \ UNIV \ j \ UNIV} \ Sup {\f i\ |i. 
i \ UNIV}" + apply(rule cSup_least, blast, clarify, case_tac "i = j", simp) + by (rule cSup_upper, blast, simp_all) (rule cSup_upper2, auto) + thus "\diag_mat f\\<^sub>m\<^sub>a\<^sub>x \ \diag_mat f\\<^sub>o\<^sub>p" + unfolding op_norm_diag_mat_eq max_norm_def + apply (subst cSup_eq_Max[symmetric], simp only: finite_image_of_finite2, blast) + by (subst cSup_eq_Max[symmetric], simp, blast) +qed + + +end \ No newline at end of file diff --git a/thys/Matrices_for_ODEs/MTX_Preliminaries.thy b/thys/Matrices_for_ODEs/MTX_Preliminaries.thy new file mode 100644 --- /dev/null +++ b/thys/Matrices_for_ODEs/MTX_Preliminaries.thy @@ -0,0 +1,403 @@ +(* Title: Mathematical Preliminaries + Author: Jonathan Julián Huerta y Munive, 2020 + Maintainer: Jonathan Julián Huerta y Munive +*) + +section \ Mathematical Preliminaries \ + +text \This section adds useful syntax, abbreviations and theorems to the Isabelle distribution. \ + +theory MTX_Preliminaries + imports "Hybrid_Systems_VCs.HS_Preliminaries" + +begin + + +subsection \ Syntax \ + +abbreviation "\ k \ axis k 1" + +syntax + "_ivl_integral" :: "real \ real \ 'a \ pttrn \ bool" ("(3\\<^sub>_\<^sup>_ (_)\/_)" [0, 0, 10] 10) + +translations + "\\<^sub>a\<^sup>b f \x" \ "CONST ivl_integral a b (\x. f)" + +notation matrix_inv ("_\<^sup>-\<^sup>1" [90]) + +abbreviation "entries (A::'a^'n^'m) \ {A $ i $ j | i j. i \ UNIV \ j \ UNIV}" + + +subsection \ Topology and sets \ + +lemmas compact_imp_bdd_above = compact_imp_bounded[THEN bounded_imp_bdd_above] + +lemma comp_cont_image_spec: "continuous_on T f \ compact T \ compact {f t |t. t \ T}" + using compact_continuous_image by (simp add: Setcompr_eq_image) + +lemmas bdd_above_cont_comp_spec = compact_imp_bdd_above[OF comp_cont_image_spec] + +lemmas bdd_above_norm_cont_comp = continuous_on_norm[THEN bdd_above_cont_comp_spec] + +lemma open_cballE: "t\<^sub>0 \ T \ open T \ \e>0. cball t\<^sub>0 e \ T" + using open_contains_cball by blast + +lemma open_ballE: "t\<^sub>0 \ T \ open T \ \e>0. ball t\<^sub>0 e \ T" + using open_contains_ball by blast + +lemma funcset_UNIV: "f \ A \ UNIV" + by auto + +lemma finite_image_of_finite[simp]: + fixes f::"'a::finite \ 'b" + shows "finite {x. \i. x = f i}" + using finite_Atleast_Atmost_nat by force + +lemma finite_image_of_finite2: + fixes f :: "'a::finite \ 'b::finite \ 'c" + shows "finite {f x y |x y. P x y}" +proof- + have "finite (\x. {f x y|y. P x y})" + by simp + moreover have "{f x y|x y. P x y} = (\x. {f x y|y. P x y})" + by auto + ultimately show ?thesis + by simp +qed + + +subsection \ Functions \ + +lemma finite_sum_univ_singleton: "(sum g UNIV) = sum g {i::'a::finite} + sum g (UNIV - {i})" + by (metis add.commute finite_class.finite_UNIV sum.subset_diff top_greatest) + +lemma suminfI: + fixes f :: "nat \ 'a::{t2_space,comm_monoid_add}" + shows "f sums k \ suminf f = k" + unfolding sums_iff by simp + +lemma suminf_eq_sum: + fixes f :: "nat \ ('a::real_normed_vector)" + assumes "\n. n > m \ f n = 0" + shows "(\n. f n) = (\n \ m. f n)" + using assms by (meson atMost_iff finite_atMost not_le suminf_finite) + +lemma suminf_multr: "summable f \ (\n. f n * c) = (\n. f n) * c" for c::"'a::real_normed_algebra" + by (rule bounded_linear.suminf [OF bounded_linear_mult_left, symmetric]) + +lemma sum_if_then_else_simps[simp]: + fixes q :: "('a::semiring_0)" and i :: "'n::finite" + shows "(\j\UNIV. f j * (if j = i then q else 0)) = f i * q" + and "(\j\UNIV. f j * (if i = j then q else 0)) = f i * q" + and "(\j\UNIV. (if i = j then q else 0) * f j) = q * f i" + and "(\j\UNIV. 
(if j = i then q else 0) * f j) = q * f i" + by (auto simp: finite_sum_univ_singleton[of _ i]) + + +subsection \ Suprema \ + +lemma le_max_image_of_finite[simp]: + fixes f::"'a::finite \ 'b::linorder" + shows "(f i) \ Max {x. \i. x = f i}" + by (rule Max.coboundedI, simp_all) (rule_tac x=i in exI, simp) + +lemma cSup_eq: + fixes c::"'a::conditionally_complete_lattice" + assumes "\x \ X. x \ c" and "\x \ X. c \ x" + shows "Sup X = c" + by (metis assms cSup_eq_maximum order_class.order.antisym) + +lemma cSup_mem_eq: + "c \ X \ \x \ X. x \ c \ Sup X = c" for c::"'a::conditionally_complete_lattice" + by (rule cSup_eq, auto) + +lemma cSup_finite_ex: + "finite X \ X \ {} \ \x\X. Sup X = x" for X::"'a::conditionally_complete_linorder set" + by (metis (full_types) bdd_finite(1) cSup_upper finite_Sup_less_iff order_less_le) + +lemma cMax_finite_ex: + "finite X \ X \ {} \ \x\X. Max X = x" for X::"'a::conditionally_complete_linorder set" + apply(subst cSup_eq_Max[symmetric]) + using cSup_finite_ex by auto + +lemma finite_nat_minimal_witness: + fixes P :: "('a::finite) \ nat \ bool" + assumes "\i. \N::nat. \n \ N. P i n" + shows "\N. \i. \n \ N. P i n" +proof- + let "?bound i" = "(LEAST N. \n \ N. P i n)" + let ?N = "Max {?bound i |i. i \ UNIV}" + {fix n::nat and i::'a + assume "n \ ?N" + obtain M where "\n \ M. P i n" + using assms by blast + hence obs: "\ m \ ?bound i. P i m" + using LeastI[of "\N. \n \ N. P i n"] by blast + have "finite {?bound i |i. i \ UNIV}" + by simp + hence "?N \ ?bound i" + using Max_ge by blast + hence "n \ ?bound i" + using \n \ ?N\ by linarith + hence "P i n" + using obs by blast} + thus "\N. \i n. N \ n \ P i n" + by blast +qed + + +subsection \ Real numbers \ + +named_theorems field_power_simps "simplification rules for powers to the nth" + +declare semiring_normalization_rules(18) [field_power_simps] + and semiring_normalization_rules(26) [field_power_simps] + and semiring_normalization_rules(27) [field_power_simps] + and semiring_normalization_rules(28) [field_power_simps] + and semiring_normalization_rules(29) [field_power_simps] + +text \WARNING: Adding @{thm semiring_normalization_rules(27)} to our tactic makes +its combination with simp to loop infinitely in some proofs.\ + +lemma sq_le_cancel: + shows "(a::real) \ 0 \ b \ 0 \ a^2 \ b * a \ a \ b" + and "(a::real) \ 0 \ b \ 0 \ a^2 \ a * b \ a \ b" + apply (metis less_eq_real_def mult.commute mult_le_cancel_left semiring_normalization_rules(29)) + by (metis less_eq_real_def mult_le_cancel_left semiring_normalization_rules(29)) + +lemma frac_diff_eq1: "a \ b \ a / (a - b) - b / (a - b) = 1" for a::real + by (metis (no_types, hide_lams) ab_left_minus add.commute add_left_cancel + diff_divide_distrib diff_minus_eq_add div_self) + +lemma exp_add: "x * y - y * x = 0 \ exp (x + y) = exp x * exp y" + by (rule exp_add_commuting) (simp add: ac_simps) + +lemmas mult_exp_exp = exp_add[symmetric] + + +subsection \ Vectors and matrices \ + +lemma sum_axis[simp]: + fixes q :: "('a::semiring_0)" + shows "(\j\UNIV. f j * axis i q $ j) = f i * q" + and "(\j\UNIV. axis i q $ j * f j) = q * f i" + unfolding axis_def by(auto simp: vec_eq_iff) + +lemma sum_scalar_nth_axis: "sum (\i. 
(x $ i) *s \ i) UNIV = x" for x :: "('a::semiring_1)^'n" + unfolding vec_eq_iff axis_def by simp + +lemma scalar_eq_scaleR[simp]: "c *s x = c *\<^sub>R x" + unfolding vec_eq_iff by simp + +lemma matrix_add_rdistrib: "((B + C) ** A) = (B ** A) + (C ** A)" + by (vector matrix_matrix_mult_def sum.distrib[symmetric] field_simps) + +lemma vec_mult_inner: "(A *v v) \ w = v \ (transpose A *v w)" for A :: "real ^'n^'n" + unfolding matrix_vector_mult_def transpose_def inner_vec_def + apply(simp add: sum_distrib_right sum_distrib_left) + apply(subst sum.swap) + apply(subgoal_tac "\i j. A $ i $ j * v $ j * w $ i = v $ j * (A $ i $ j * w $ i)") + by presburger simp + +lemma uminus_axis_eq[simp]: "- axis i k = axis i (-k)" for k :: "'a::ring" + unfolding axis_def by(simp add: vec_eq_iff) + +lemma norm_axis_eq[simp]: "\axis i k\ = \k\" +proof(simp add: axis_def norm_vec_def L2_set_def) + let "?\\<^sub>K" = "\i j k. if i = j then k else 0" + have "(\j\UNIV. (\(?\\<^sub>K j i k)\)\<^sup>2) = (\j\{i}. (\(?\\<^sub>K j i k)\)\<^sup>2) + (\j\(UNIV-{i}). (\(?\\<^sub>K j i k)\)\<^sup>2)" + using finite_sum_univ_singleton by blast + also have "... = (\k\)\<^sup>2" + by simp + finally show "sqrt (\j\UNIV. (norm (if j = i then k else 0))\<^sup>2) = norm k" + by simp +qed + +lemma matrix_axis_0: + fixes A :: "('a::idom)^'n^'m" + assumes "k \ 0 " and h:"\i. (A *v (axis i k)) = 0" + shows "A = 0" +proof- + {fix i::'n + have "0 = (\j\UNIV. (axis i k) $ j *s column j A)" + using h matrix_mult_sum[of A "axis i k"] by simp + also have "... = k *s column i A" + by (simp add: axis_def vector_scalar_mult_def column_def vec_eq_iff mult.commute) + finally have "k *s column i A = 0" + unfolding axis_def by simp + hence "column i A = 0" + using vector_mul_eq_0 \k \ 0\ by blast} + thus "A = 0" + unfolding column_def vec_eq_iff by simp +qed + +lemma scaleR_norm_sgn_eq: "(\x\) *\<^sub>R sgn x = x" + by (metis divideR_right norm_eq_zero scale_eq_0_iff sgn_div_norm) + +lemma vector_scaleR_commute: "A *v c *\<^sub>R x = c *\<^sub>R (A *v x)" for x :: "('a::real_normed_algebra_1)^'n" + unfolding scaleR_vec_def matrix_vector_mult_def by(auto simp: vec_eq_iff scaleR_right.sum) + +lemma scaleR_vector_assoc: "c *\<^sub>R (A *v x) = (c *\<^sub>R A) *v x" for x :: "('a::real_normed_algebra_1)^'n" + unfolding matrix_vector_mult_def by(auto simp: vec_eq_iff scaleR_right.sum) + +lemma mult_norm_matrix_sgn_eq: + fixes x :: "('a::real_normed_algebra_1)^'n" + shows "(\A *v sgn x\) * (\x\) = \A *v x\" +proof- + have "\A *v x\ = \A *v ((\x\) *\<^sub>R sgn x)\" + by(simp add: scaleR_norm_sgn_eq) + also have "... = (\A *v sgn x\) * (\x\)" + by(simp add: vector_scaleR_commute) + finally show ?thesis .. 
+qed + + +subsection\ Diagonalization \ + +lemma invertibleI: "A ** B = mat 1 \ B ** A = mat 1 \ invertible A" + unfolding invertible_def by auto + +lemma invertibleD[simp]: + assumes "invertible A" + shows "A\<^sup>-\<^sup>1 ** A = mat 1" and "A ** A\<^sup>-\<^sup>1 = mat 1" + using assms unfolding matrix_inv_def invertible_def + by (simp_all add: verit_sko_ex') + +lemma matrix_inv_unique: + assumes "A ** B = mat 1" and "B ** A = mat 1" + shows "A\<^sup>-\<^sup>1 = B" + by (metis assms invertibleD(2) invertibleI matrix_mul_assoc matrix_mul_lid) + +lemma invertible_matrix_inv: "invertible A \ invertible (A\<^sup>-\<^sup>1)" + using invertibleD invertibleI by blast + +lemma matrix_inv_idempotent[simp]: "invertible A \ A\<^sup>-\<^sup>1\<^sup>-\<^sup>1 = A" + using invertibleD matrix_inv_unique by blast + +lemma matrix_inv_matrix_mul: + assumes "invertible A" and "invertible B" + shows "(A ** B)\<^sup>-\<^sup>1 = B\<^sup>-\<^sup>1 ** A\<^sup>-\<^sup>1" +proof(rule matrix_inv_unique) + have "A ** B ** (B\<^sup>-\<^sup>1 ** A\<^sup>-\<^sup>1) = A ** (B ** B\<^sup>-\<^sup>1) ** A\<^sup>-\<^sup>1" + by (simp add: matrix_mul_assoc) + also have "... = mat 1" + using assms by simp + finally show "A ** B ** (B\<^sup>-\<^sup>1 ** A\<^sup>-\<^sup>1) = mat 1" . +next + have "B\<^sup>-\<^sup>1 ** A\<^sup>-\<^sup>1 ** (A ** B) = B\<^sup>-\<^sup>1 ** (A\<^sup>-\<^sup>1 ** A) ** B" + by (simp add: matrix_mul_assoc) + also have "... = mat 1" + using assms by simp + finally show "B\<^sup>-\<^sup>1 ** A\<^sup>-\<^sup>1 ** (A ** B) = mat 1" . +qed + +lemma mat_inverse_simps[simp]: + fixes c :: "'a::division_ring" + assumes "c \ 0" + shows "mat (inverse c) ** mat c = mat 1" + and "mat c ** mat (inverse c) = mat 1" + unfolding matrix_matrix_mult_def mat_def by (auto simp: vec_eq_iff assms) + +lemma matrix_inv_mat[simp]: "c \ 0 \ (mat c)\<^sup>-\<^sup>1 = mat (inverse c)" for c :: "'a::division_ring" + by (simp add: matrix_inv_unique) + +lemma invertible_mat[simp]: "c \ 0 \ invertible (mat c)" for c :: "'a::division_ring" + using invertibleI mat_inverse_simps(1) mat_inverse_simps(2) by blast + +lemma matrix_inv_mat_1: "(mat (1::'a::division_ring))\<^sup>-\<^sup>1 = mat 1" + by simp + +lemma invertible_mat_1: "invertible (mat (1::'a::division_ring))" + by simp + +definition similar_matrix :: "('a::semiring_1)^'m^'m \ ('a::semiring_1)^'n^'n \ bool" (infixr "\" 25) + where "similar_matrix A B \ (\ P. invertible P \ A = P\<^sup>-\<^sup>1 ** B ** P)" + +lemma similar_matrix_refl[simp]: "A \ A" for A :: "'a::division_ring^'n^'n" + by (unfold similar_matrix_def, rule_tac x="mat 1" in exI, simp) + +lemma similar_matrix_simm: "A \ B \ B \ A" for A B :: "('a::semiring_1)^'n^'n" + apply(unfold similar_matrix_def, clarsimp) + apply(rule_tac x="P\<^sup>-\<^sup>1" in exI, simp add: invertible_matrix_inv) + by (metis invertible_def matrix_inv_unique matrix_mul_assoc matrix_mul_lid matrix_mul_rid) + +lemma similar_matrix_trans: "A \ B \ B \ C \ A \ C" for A B C :: "('a::semiring_1)^'n^'n" +proof(unfold similar_matrix_def, clarsimp) + fix P Q + assume "A = P\<^sup>-\<^sup>1 ** (Q\<^sup>-\<^sup>1 ** C ** Q) ** P" and "B = Q\<^sup>-\<^sup>1 ** C ** Q" + let ?R = "Q ** P" + assume inverts: "invertible Q" "invertible P" + hence "?R\<^sup>-\<^sup>1 = P\<^sup>-\<^sup>1 ** Q\<^sup>-\<^sup>1" + by (rule matrix_inv_matrix_mul) + also have "invertible ?R" + using inverts invertible_mult by blast + ultimately show "\R. 
invertible R \ P\<^sup>-\<^sup>1 ** (Q\<^sup>-\<^sup>1 ** C ** Q) ** P = R\<^sup>-\<^sup>1 ** C ** R" + by (metis matrix_mul_assoc) +qed + +lemma mat_vec_nth_simps[simp]: + "i = j \ mat c $ i $ j = c" + "i \ j \ mat c $ i $ j = 0" + by (simp_all add: mat_def) + +definition "diag_mat f = (\ i j. if i = j then f i else 0)" + +lemma diag_mat_vec_nth_simps[simp]: + "i = j \ diag_mat f $ i $ j = f i" + "i \ j \ diag_mat f $ i $ j = 0" + unfolding diag_mat_def by simp_all + +lemma diag_mat_const_eq[simp]: "diag_mat (\i. c) = mat c" + unfolding mat_def diag_mat_def by simp + +lemma matrix_vector_mul_diag_mat: "diag_mat f *v s = (\ i. f i * s$i)" + unfolding diag_mat_def matrix_vector_mult_def by simp + +lemma matrix_vector_mul_diag_axis[simp]: "diag_mat f *v (axis i k) = axis i (f i * k)" + by (simp add: matrix_vector_mul_diag_mat axis_def fun_eq_iff) + +lemma matrix_mul_diag_matl: "diag_mat f ** A = (\ i j. f i * A$i$j)" + unfolding diag_mat_def matrix_matrix_mult_def by simp + +lemma matrix_matrix_mul_diag_matr: "A ** diag_mat f = (\ i j. A$i$j * f j)" + unfolding diag_mat_def matrix_matrix_mult_def apply(clarsimp simp: fun_eq_iff) + subgoal for i j + by (auto simp: finite_sum_univ_singleton[of _ j]) + done + +lemma matrix_mul_diag_diag: "diag_mat f ** diag_mat g = diag_mat (\i. f i * g i)" + unfolding diag_mat_def matrix_matrix_mult_def vec_eq_iff by simp + +lemma compow_matrix_mul_diag_mat_eq: "((**) (diag_mat f) ^^ n) (mat 1) = diag_mat (\i. f i^n)" + apply(induct n, simp_all add: matrix_mul_diag_matl) + by (auto simp: vec_eq_iff diag_mat_def) + +lemma compow_similar_diag_mat_eq: + assumes "invertible P" + and "A = P\<^sup>-\<^sup>1 ** (diag_mat f) ** P" + shows "((**) A ^^ n) (mat 1) = P\<^sup>-\<^sup>1 ** (diag_mat (\i. f i^n)) ** P" +proof(induct n, simp_all add: assms) + fix n::nat + have "P\<^sup>-\<^sup>1 ** diag_mat f ** P ** (P\<^sup>-\<^sup>1 ** diag_mat (\i. f i ^ n) ** P) = + P\<^sup>-\<^sup>1 ** diag_mat f ** diag_mat (\i. f i ^ n) ** P" (is "?lhs = _") + by (metis (no_types, lifting) assms(1) invertibleD(2) matrix_mul_rid matrix_mul_assoc) + also have "... = P\<^sup>-\<^sup>1 ** diag_mat (\i. f i * f i ^ n) ** P" (is "_ = ?rhs") + by (metis (full_types) matrix_mul_assoc matrix_mul_diag_diag) + finally show "?lhs = ?rhs" . +qed + +lemma compow_similar_diag_mat: + assumes "A \ (diag_mat f)" + shows "((**) A ^^ n) (mat 1) \ diag_mat (\i. f i^n)" +proof(unfold similar_matrix_def) + obtain P where "invertible P" and "A = P\<^sup>-\<^sup>1 ** (diag_mat f) ** P" + using assms unfolding similar_matrix_def by blast + thus "\P. invertible P \ ((**) A ^^ n) (mat 1) = P\<^sup>-\<^sup>1 ** diag_mat (\i. 
f i ^ n) ** P" + using compow_similar_diag_mat_eq by blast +qed + +no_notation matrix_inv ("_\<^sup>-\<^sup>1" [90]) + and similar_matrix (infixr "\" 25) + + +end \ No newline at end of file diff --git a/thys/Matrices_for_ODEs/ROOT b/thys/Matrices_for_ODEs/ROOT new file mode 100644 --- /dev/null +++ b/thys/Matrices_for_ODEs/ROOT @@ -0,0 +1,11 @@ +chapter AFP + +session "Matrices_for_ODEs" (AFP) = "HOL-Analysis" + + options [timeout = 1800] + sessions + Hybrid_Systems_VCs + theories + MTX_Examples + document_files + "root.bib" + "root.tex" \ No newline at end of file diff --git a/thys/Matrices_for_ODEs/SQ_MTX.thy b/thys/Matrices_for_ODEs/SQ_MTX.thy new file mode 100644 --- /dev/null +++ b/thys/Matrices_for_ODEs/SQ_MTX.thy @@ -0,0 +1,718 @@ +(* Title: Square Matrices + Author: Jonathan Julián Huerta y Munive, 2020 + Maintainer: Jonathan Julián Huerta y Munive +*) + +section \ Square Matrices \ + +text\ The general solution for affine systems of ODEs involves the exponential function. +Unfortunately, this operation is only available in Isabelle for the type class ``banach''. +Hence, we define a type of square matrices and prove that it is an instance of this class.\ + +theory SQ_MTX + imports MTX_Norms + +begin + +subsection \ Definition \ + +typedef 'm sq_mtx = "UNIV::(real^'m^'m) set" + morphisms to_vec to_mtx by simp + +declare to_mtx_inverse [simp] + and to_vec_inverse [simp] + +setup_lifting type_definition_sq_mtx + +lift_definition sq_mtx_ith :: "'m sq_mtx \ 'm \ (real^'m)" (infixl "$$" 90) is "($)" . + +lift_definition sq_mtx_vec_mult :: "'m sq_mtx \ (real^'m) \ (real^'m)" (infixl "*\<^sub>V" 90) is "(*v)" . + +lift_definition vec_sq_mtx_prod :: "(real^'m) \ 'm sq_mtx \ (real^'m)" is "(v*)" . + +lift_definition sq_mtx_diag :: "(('m::finite) \ real) \ ('m::finite) sq_mtx" (binder "\\\\ " 10) + is diag_mat . + +lift_definition sq_mtx_transpose :: "('m::finite) sq_mtx \ 'm sq_mtx" ("_\<^sup>\") is transpose . + +lift_definition sq_mtx_inv :: "('m::finite) sq_mtx \ 'm sq_mtx" ("_\<^sup>-\<^sup>1" [90]) is matrix_inv . + +lift_definition sq_mtx_row :: "'m \ ('m::finite) sq_mtx \ real^'m" ("\\\") is row . + +lift_definition sq_mtx_col :: "'m \ ('m::finite) sq_mtx \ real^'m" ("\\\") is column . + +lemma to_vec_eq_ith: "(to_vec A) $ i = A $$ i" + by transfer simp + +lemma to_mtx_ith[simp]: + "(to_mtx A) $$ i1 = A $ i1" + "(to_mtx A) $$ i1 $ i2 = A $ i1 $ i2" + by (transfer, simp)+ + +lemma to_mtx_vec_lambda_ith[simp]: "to_mtx (\ i j. x i j) $$ i1 $ i2 = x i1 i2" + by (simp add: sq_mtx_ith_def) + +lemma sq_mtx_eq_iff: + shows "A = B = (\i j. A $$ i $ j = B $$ i $ j)" + and "A = B = (\i. A $$ i = B $$ i)" + by (transfer, simp add: vec_eq_iff)+ + +lemma sq_mtx_diag_simps[simp]: + "i = j \ sq_mtx_diag f $$ i $ j = f i" + "i \ j \ sq_mtx_diag f $$ i $ j = 0" + "sq_mtx_diag f $$ i = axis i (f i)" + unfolding sq_mtx_diag_def by (simp_all add: axis_def vec_eq_iff) + +lemma sq_mtx_diag_vec_mult: "(\\\\ i. f i) *\<^sub>V s = (\ i. f i * s$i)" + by (simp add: matrix_vector_mul_diag_mat sq_mtx_diag.abs_eq sq_mtx_vec_mult.abs_eq) + +lemma sq_mtx_vec_mult_diag_axis: "(\\\\ i. f i) *\<^sub>V (axis i k) = axis i (f i * k)" + unfolding sq_mtx_diag_vec_mult axis_def by auto + +lemma sq_mtx_vec_mult_eq: "m *\<^sub>V x = (\ i. sum (\j. 
(m $$ i $ j) * (x $ j)) UNIV)" + by (transfer, simp add: matrix_vector_mult_def) + +lemma sq_mtx_transpose_transpose[simp]: "(A\<^sup>\)\<^sup>\ = A" + by (transfer, simp) + +lemma transpose_mult_vec_canon_row[simp]: "(A\<^sup>\) *\<^sub>V (\ i) = \\\ i A" + by transfer (simp add: row_def transpose_def axis_def matrix_vector_mult_def) + +lemma row_ith[simp]: "\\\ i A = A $$ i" + by transfer (simp add: row_def) + +lemma mtx_vec_mult_canon: "A *\<^sub>V (\ i) = \\\ i A" + by (transfer, simp add: matrix_vector_mult_basis) + + +subsection \ Ring of square matrices \ + +instantiation sq_mtx :: (finite) ring +begin + +lift_definition plus_sq_mtx :: "'a sq_mtx \ 'a sq_mtx \ 'a sq_mtx" is "(+)" . + +lift_definition zero_sq_mtx :: "'a sq_mtx" is "0" . + +lift_definition uminus_sq_mtx :: "'a sq_mtx \ 'a sq_mtx" is "uminus" . + +lift_definition minus_sq_mtx :: "'a sq_mtx \ 'a sq_mtx \ 'a sq_mtx" is "(-)" . + +lift_definition times_sq_mtx :: "'a sq_mtx \ 'a sq_mtx \ 'a sq_mtx" is "(**)" . + +declare plus_sq_mtx.rep_eq [simp] + and minus_sq_mtx.rep_eq [simp] + +instance apply intro_classes + by(transfer, simp add: algebra_simps matrix_mul_assoc matrix_add_rdistrib matrix_add_ldistrib)+ + +end + +lemma sq_mtx_zero_ith[simp]: "0 $$ i = 0" + by (transfer, simp) + +lemma sq_mtx_zero_nth[simp]: "0 $$ i $ j = 0" + by transfer simp + +lemma sq_mtx_plus_eq: "A + B = to_mtx (\ i j. A$$i$j + B$$i$j)" + by transfer (simp add: vec_eq_iff) + +lemma sq_mtx_plus_ith[simp]:"(A + B) $$ i = A $$ i + B $$ i" + unfolding sq_mtx_plus_eq by (simp add: vec_eq_iff) + +lemma sq_mtx_uminus_eq: "- A = to_mtx (\ i j. - A$$i$j)" + by transfer (simp add: vec_eq_iff) + +lemma sq_mtx_minus_eq: "A - B = to_mtx (\ i j. A$$i$j - B$$i$j)" + by transfer (simp add: vec_eq_iff) + +lemma sq_mtx_minus_ith[simp]:"(A - B) $$ i = A $$ i - B $$ i" + unfolding sq_mtx_minus_eq by (simp add: vec_eq_iff) + +lemma sq_mtx_times_eq: "A * B = to_mtx (\ i j. sum (\k. A$$i$k * B$$k$j) UNIV)" + by transfer (simp add: matrix_matrix_mult_def) + +lemma sq_mtx_plus_diag_diag[simp]: "sq_mtx_diag f + sq_mtx_diag g = (\\\\ i. f i + g i)" + by (subst sq_mtx_eq_iff) (simp add: axis_def) + +lemma sq_mtx_minus_diag_diag[simp]: "sq_mtx_diag f - sq_mtx_diag g = (\\\\ i. f i - g i)" + by (subst sq_mtx_eq_iff) (simp add: axis_def) + +lemma sum_sq_mtx_diag[simp]: "(\n\\\ i. \n\\\ i. f i * g i)" + by (simp add: matrix_mul_diag_diag sq_mtx_diag.abs_eq times_sq_mtx.abs_eq) + +lemma sq_mtx_mult_diagl: "(\\\\ i. f i) * A = to_mtx (\ i j. f i * A $$ i $ j)" + by transfer (simp add: matrix_mul_diag_matl) + +lemma sq_mtx_mult_diagr: "A * (\\\\ i. f i) = to_mtx (\ i j. 
A $$ i $ j * f j)" + by transfer (simp add: matrix_matrix_mul_diag_matr) + +lemma mtx_vec_mult_0l[simp]: "0 *\<^sub>V x = 0" + by (simp add: sq_mtx_vec_mult.abs_eq zero_sq_mtx_def) + +lemma mtx_vec_mult_0r[simp]: "A *\<^sub>V 0 = 0" + by (transfer, simp) + +lemma mtx_vec_mult_add_rdistr: "(A + B) *\<^sub>V x = A *\<^sub>V x + B *\<^sub>V x" + unfolding plus_sq_mtx_def + apply(transfer) + by (simp add: matrix_vector_mult_add_rdistrib) + +lemma mtx_vec_mult_add_rdistl: "A *\<^sub>V (x + y) = A *\<^sub>V x + A *\<^sub>V y" + unfolding plus_sq_mtx_def + apply transfer + by (simp add: matrix_vector_right_distrib) + +lemma mtx_vec_mult_minus_rdistrib: "(A - B) *\<^sub>V x = A *\<^sub>V x - B *\<^sub>V x" + unfolding minus_sq_mtx_def by(transfer, simp add: matrix_vector_mult_diff_rdistrib) + +lemma mtx_vec_mult_minus_ldistrib: "A *\<^sub>V (x - y) = A *\<^sub>V x - A *\<^sub>V y" + by (metis (no_types, lifting) add_diff_cancel diff_add_cancel + matrix_vector_right_distrib sq_mtx_vec_mult.rep_eq) + +lemma sq_mtx_times_vec_assoc: "(A * B) *\<^sub>V x = A *\<^sub>V (B *\<^sub>V x)" + by (transfer, simp add: matrix_vector_mul_assoc) + +lemma sq_mtx_vec_mult_sum_cols: "A *\<^sub>V x = sum (\i. x $ i *\<^sub>R \\\ i A) UNIV" + by(transfer) (simp add: matrix_mult_sum scalar_mult_eq_scaleR) + + +subsection \ Real normed vector space of square matrices \ + +instantiation sq_mtx :: (finite) real_normed_vector +begin + +definition norm_sq_mtx :: "'a sq_mtx \ real" where "\A\ = \to_vec A\\<^sub>o\<^sub>p" + +lift_definition scaleR_sq_mtx :: "real \ 'a sq_mtx \ 'a sq_mtx" is scaleR . + +definition sgn_sq_mtx :: "'a sq_mtx \ 'a sq_mtx" + where "sgn_sq_mtx A = (inverse (\A\)) *\<^sub>R A" + +definition dist_sq_mtx :: "'a sq_mtx \ 'a sq_mtx \ real" + where "dist_sq_mtx A B = \A - B\" + +definition uniformity_sq_mtx :: "('a sq_mtx \ 'a sq_mtx) filter" + where "uniformity_sq_mtx = (INF e\{0<..}. principal {(x, y). dist x y < e})" + +definition open_sq_mtx :: "'a sq_mtx set \ bool" + where "open_sq_mtx U = (\x\U. \\<^sub>F (x', y) in uniformity. x' = x \ y \ U)" + +instance apply intro_classes + unfolding sgn_sq_mtx_def open_sq_mtx_def dist_sq_mtx_def uniformity_sq_mtx_def + prefer 10 + apply(transfer, simp add: norm_sq_mtx_def op_norm_triangle) + prefer 9 + apply(simp_all add: norm_sq_mtx_def zero_sq_mtx_def op_norm_eq_0) + by (transfer, simp add: norm_sq_mtx_def op_norm_scaleR algebra_simps)+ + +end + +lemma sq_mtx_scaleR_eq: "c *\<^sub>R A = to_mtx (\ i j. c *\<^sub>R A $$ i $ j)" + by transfer (simp add: vec_eq_iff) + +lemma scaleR_to_mtx_ith[simp]: "c *\<^sub>R (to_mtx A) $$ i1 $ i2 = c * A $ i1 $ i2" + by transfer (simp add: scaleR_vec_def) + +lemma sq_mtx_scaleR_ith[simp]: "(c *\<^sub>R A) $$ i = (c *\<^sub>R (A $$ i))" + by (unfold scaleR_sq_mtx_def, transfer, simp) + +lemma scaleR_sq_mtx_diag: "c *\<^sub>R sq_mtx_diag f = (\\\\ i. 
c * f i)" + by (subst sq_mtx_eq_iff, simp add: axis_def) + +lemma scaleR_mtx_vec_assoc: "(c *\<^sub>R A) *\<^sub>V x = c *\<^sub>R (A *\<^sub>V x)" + unfolding scaleR_sq_mtx_def sq_mtx_vec_mult_def apply simp + by (simp add: scaleR_matrix_vector_assoc) + +lemma mtx_vec_scaleR_commute: "A *\<^sub>V (c *\<^sub>R x) = c *\<^sub>R (A *\<^sub>V x)" + unfolding scaleR_sq_mtx_def sq_mtx_vec_mult_def apply(simp, transfer) + by (simp add: vector_scaleR_commute) + +lemma mtx_times_scaleR_commute: "A * (c *\<^sub>R B) = c *\<^sub>R (A * B)" for A::"('n::finite) sq_mtx" + unfolding sq_mtx_scaleR_eq sq_mtx_times_eq + apply(simp add: to_mtx_inject) + apply(simp add: vec_eq_iff fun_eq_iff) + by (simp add: semiring_normalization_rules(19) vector_space_over_itself.scale_sum_right) + +lemma le_mtx_norm: "m \ {\A *\<^sub>V x\ |x. \x\ = 1} \ m \ \A\" + using cSup_upper[of _ "{\(to_vec A) *v x\ | x. \x\ = 1}"] + by (simp add: op_norm_set_proptys(2) op_norm_def norm_sq_mtx_def sq_mtx_vec_mult.rep_eq) + +lemma norm_vec_mult_le: "\A *\<^sub>V x\ \ (\A\) * (\x\)" + by (simp add: norm_matrix_le_mult_op_norm norm_sq_mtx_def sq_mtx_vec_mult.rep_eq) + +lemma bounded_bilinear_sq_mtx_vec_mult: "bounded_bilinear (\A s. A *\<^sub>V s)" + apply (rule bounded_bilinear.intro, simp_all add: mtx_vec_mult_add_rdistr + mtx_vec_mult_add_rdistl scaleR_mtx_vec_assoc mtx_vec_scaleR_commute) + by (rule_tac x=1 in exI, auto intro!: norm_vec_mult_le) + +lemma norm_sq_mtx_def2: "\A\ = Sup {\A *\<^sub>V x\ |x. \x\ = 1}" + unfolding norm_sq_mtx_def op_norm_def sq_mtx_vec_mult_def by simp + +lemma norm_sq_mtx_def3: "\A\ = (SUP x. (\A *\<^sub>V x\) / (\x\))" + unfolding norm_sq_mtx_def onorm_def sq_mtx_vec_mult_def by simp + +lemma norm_sq_mtx_diag: "\sq_mtx_diag f\ = Max {\f i\ |i. i \ UNIV}" + unfolding norm_sq_mtx_def apply transfer + by (rule op_norm_diag_mat_eq) + +lemma sq_mtx_norm_le_sum_col: "\A\ \ (\i\UNIV. \\\\ i A\)" + using op_norm_le_sum_column[of "to_vec A"] + apply(simp add: norm_sq_mtx_def) + by(transfer, simp add: op_norm_le_sum_column) + +lemma norm_le_transpose: "\A\ \ \A\<^sup>\\" + unfolding norm_sq_mtx_def by transfer (rule op_norm_le_transpose) + +lemma norm_eq_norm_transpose[simp]: "\A\<^sup>\\ = \A\" + using norm_le_transpose[of A] and norm_le_transpose[of "A\<^sup>\"] by simp + +lemma norm_column_le_norm: "\A $$ i\ \ \A\" + using norm_vec_mult_le[of "A\<^sup>\" "\ i"] by simp + + +subsection \ Real normed algebra of square matrices \ + +instantiation sq_mtx :: (finite) real_normed_algebra_1 +begin + +lift_definition one_sq_mtx :: "'a sq_mtx" is "to_mtx (mat 1)" . + +lemma sq_mtx_one_idty: "1 * A = A" "A * 1 = A" for A :: "'a sq_mtx" + by(transfer, transfer, unfold mat_def matrix_matrix_mult_def, simp add: vec_eq_iff)+ + +lemma sq_mtx_norm_1: "\(1::'a sq_mtx)\ = 1" + unfolding one_sq_mtx_def norm_sq_mtx_def + apply(simp add: op_norm_def) + apply(subst cSup_eq[of _ 1]) + using ex_norm_eq_1 by auto + +lemma sq_mtx_norm_times: "\A * B\ \ (\A\) * (\B\)" for A :: "'a sq_mtx" + unfolding norm_sq_mtx_def times_sq_mtx_def by(simp add: op_norm_matrix_matrix_mult_le) + +instance + apply intro_classes + apply(simp_all add: sq_mtx_one_idty sq_mtx_norm_1 sq_mtx_norm_times) + apply(simp_all add: to_mtx_inject vec_eq_iff one_sq_mtx_def zero_sq_mtx_def mat_def) + by(transfer, simp add: scalar_matrix_assoc matrix_scalar_ac)+ + +end + +lemma sq_mtx_one_ith_simps[simp]: "1 $$ i $ i = 1" "i \ j \ 1 $$ i $ j = 0" + unfolding one_sq_mtx_def mat_def by simp_all + +lemma of_nat_eq_sq_mtx_diag[simp]: "of_nat m = (\\\\ i. 
m)" + by (induct m) (simp, subst sq_mtx_eq_iff, simp add: axis_def)+ + +lemma mtx_vec_mult_1[simp]: "1 *\<^sub>V s = s" + by (auto simp: sq_mtx_vec_mult_def one_sq_mtx_def + mat_def vec_eq_iff matrix_vector_mult_def) + +lemma sq_mtx_diag_one[simp]: "(\\\\ i. 1) = 1" + by (subst sq_mtx_eq_iff, simp add: one_sq_mtx_def mat_def axis_def) + +abbreviation "mtx_invertible A \ invertible (to_vec A)" + +lemma mtx_invertible_def: "mtx_invertible A \ (\A'. A' * A = 1 \ A * A' = 1)" + apply (unfold sq_mtx_inv_def times_sq_mtx_def one_sq_mtx_def invertible_def, clarsimp, safe) + apply(rule_tac x="to_mtx A'" in exI, simp) + by (rule_tac x="to_vec A'" in exI, simp add: to_mtx_inject) + +lemma mtx_invertibleI: + assumes "A * B = 1" and "B * A = 1" + shows "mtx_invertible A" + using assms unfolding mtx_invertible_def by auto + +lemma mtx_invertibleD[simp]: + assumes "mtx_invertible A" + shows "A\<^sup>-\<^sup>1 * A = 1" and "A * A\<^sup>-\<^sup>1 = 1" + apply (unfold sq_mtx_inv_def times_sq_mtx_def one_sq_mtx_def) + using assms by simp_all + +lemma mtx_invertible_inv[simp]: "mtx_invertible A \ mtx_invertible (A\<^sup>-\<^sup>1)" + using mtx_invertibleD mtx_invertibleI by blast + +lemma mtx_invertible_one[simp]: "mtx_invertible 1" + by (simp add: one_sq_mtx.rep_eq) + +lemma sq_mtx_inv_unique: + assumes "A * B = 1" and "B * A = 1" + shows "A\<^sup>-\<^sup>1 = B" + by (metis (no_types, lifting) assms mtx_invertibleD(2) + mtx_invertibleI mult.assoc sq_mtx_one_idty(1)) + +lemma sq_mtx_inv_idempotent[simp]: "mtx_invertible A \ A\<^sup>-\<^sup>1\<^sup>-\<^sup>1 = A" + using mtx_invertibleD sq_mtx_inv_unique by blast + +lemma sq_mtx_inv_mult: + assumes "mtx_invertible A" and "mtx_invertible B" + shows "(A * B)\<^sup>-\<^sup>1 = B\<^sup>-\<^sup>1 * A\<^sup>-\<^sup>1" + by (simp add: assms matrix_inv_matrix_mul sq_mtx_inv_def times_sq_mtx_def) + +lemma sq_mtx_inv_one[simp]: "1\<^sup>-\<^sup>1 = 1" + by (simp add: sq_mtx_inv_unique) + +definition similar_sq_mtx :: "('n::finite) sq_mtx \ 'n sq_mtx \ bool" (infixr "\" 25) + where "(A \ B) \ (\ P. mtx_invertible P \ A = P\<^sup>-\<^sup>1 * B * P)" + +lemma similar_sq_mtx_matrix: "(A \ B) = similar_matrix (to_vec A) (to_vec B)" + apply(unfold similar_matrix_def similar_sq_mtx_def, safe) + apply (metis sq_mtx_inv.rep_eq times_sq_mtx.rep_eq) + by (metis UNIV_I sq_mtx_inv.abs_eq times_sq_mtx.abs_eq to_mtx_inverse to_vec_inverse) + +lemma similar_sq_mtx_refl[simp]: "A \ A" + by (unfold similar_sq_mtx_def, rule_tac x="1" in exI, simp) + +lemma similar_sq_mtx_simm: "A \ B \ B \ A" + apply(unfold similar_sq_mtx_def, clarsimp) + apply(rule_tac x="P\<^sup>-\<^sup>1" in exI, simp add: mult.assoc) + by (metis mtx_invertibleD(2) mult.assoc mult.left_neutral) + +lemma similar_sq_mtx_trans: "A \ B \ B \ C \ A \ C" + unfolding similar_sq_mtx_matrix using similar_matrix_trans by blast + +lemma power_sq_mtx_diag: "(sq_mtx_diag f)^n = (\\\\ i. f i^n)" + by (induct n, simp_all) + +lemma power_similiar_sq_mtx_diag_eq: + assumes "mtx_invertible P" + and "A = P\<^sup>-\<^sup>1 * (sq_mtx_diag f) * P" + shows "A^n = P\<^sup>-\<^sup>1 * (\\\\ i. f i^n) * P" +proof(induct n, simp_all add: assms) + fix n::nat + have "P\<^sup>-\<^sup>1 * sq_mtx_diag f * P * (P\<^sup>-\<^sup>1 * (\\\\ i. f i ^ n) * P) = + P\<^sup>-\<^sup>1 * sq_mtx_diag f * (\\\\ i. f i ^ n) * P" + by (metis (no_types, lifting) assms(1) mtx_invertibleD(2) mult.assoc mult.right_neutral) + also have "... = P\<^sup>-\<^sup>1 * (\\\\ i. 
f i * f i ^ n) * P" + by (simp add: mult.assoc) + finally show "P\<^sup>-\<^sup>1 * sq_mtx_diag f * P * (P\<^sup>-\<^sup>1 * (\\\\ i. f i ^ n) * P) = + P\<^sup>-\<^sup>1 * (\\\\ i. f i * f i ^ n) * P" . +qed + +lemma power_similar_sq_mtx_diag: + assumes "A \ (sq_mtx_diag f)" + shows "A^n \ (\\\\ i. f i^n)" + using assms power_similiar_sq_mtx_diag_eq + unfolding similar_sq_mtx_def by blast + + +subsection \ Banach space of square matrices \ + +lemma Cauchy_cols: + fixes X :: "nat \ ('a::finite) sq_mtx" + assumes "Cauchy X" + shows "Cauchy (\n. \\\ i (X n))" +proof(unfold Cauchy_def dist_norm, clarsimp) + fix \::real assume "\ > 0" + then obtain M where M_def:"\m\M. \n\M. \X m - X n\ < \" + using \Cauchy X\ unfolding Cauchy_def by(simp add: dist_sq_mtx_def) metis + {fix m n assume "m \ M" and "n \ M" + hence "\ > \X m - X n\" + using M_def by blast + moreover have "\X m - X n\ \ \(X m - X n) *\<^sub>V \ i\" + by(rule le_mtx_norm[of _ "X m - X n"], force) + moreover have "\(X m - X n) *\<^sub>V \ i\ = \X m *\<^sub>V \ i - X n *\<^sub>V \ i\" + by (simp add: mtx_vec_mult_minus_rdistrib) + moreover have "... = \\\\ i (X m) - \\\ i (X n)\" + by (simp add: mtx_vec_mult_minus_rdistrib mtx_vec_mult_canon) + ultimately have "\\\\ i (X m) - \\\ i (X n)\ < \" + by linarith} + thus "\M. \m\M. \n\M. \\\\ i (X m) - \\\ i (X n)\ < \" + by blast +qed + +lemma col_convergence: + assumes "\i. (\n. \\\ i (X n)) \ L $ i" + shows "X \ to_mtx (transpose L)" +proof(unfold LIMSEQ_def dist_norm, clarsimp) + let ?L = "to_mtx (transpose L)" + let ?a = "CARD('a)" fix \::real assume "\ > 0" + hence "\ / ?a > 0" by simp + hence "\i. \ N. \n\N. \\\\ i (X n) - L $ i\ < \/?a" + using assms unfolding LIMSEQ_def dist_norm convergent_def by blast + then obtain N where "\i. \n\N. \\\\ i (X n) - L $ i\ < \/?a" + using finite_nat_minimal_witness[of "\ i n. \\\\ i (X n) - L $ i\ < \/?a"] by blast + also have "\i n. (\\\ i (X n) - L $ i) = (\\\ i (X n - ?L))" + unfolding minus_sq_mtx_def by(transfer, simp add: transpose_def vec_eq_iff column_def) + ultimately have N_def:"\i. \n\N. \\\\ i (X n - ?L)\ < \/?a" + by auto + have "\n\N. \X n - ?L\ < \" + proof(rule allI, rule impI) + fix n::nat assume "N \ n" + hence "\ i. \\\\ i (X n - ?L)\ < \/?a" + using N_def by blast + hence "(\i\UNIV. \\\\ i (X n - ?L)\) < (\(i::'a)\UNIV. \/?a)" + using sum_strict_mono[of _ "\i. \\\\ i (X n - ?L)\"] by force + moreover have "\X n - ?L\ \ (\i\UNIV. \\\\ i (X n - ?L)\)" + using sq_mtx_norm_le_sum_col by blast + moreover have "(\(i::'a)\UNIV. \/?a) = \" + by force + ultimately show "\X n - ?L\ < \" + by linarith + qed + thus "\no. \n\no. \X n - ?L\ < \" + by blast +qed + +instance sq_mtx :: (finite) banach +proof(standard) + fix X :: "nat \ 'a sq_mtx" + assume "Cauchy X" + hence "\i. Cauchy (\n. \\\ i (X n))" + using Cauchy_cols by blast + hence obs: "\i. \! L. (\n. \\\ i (X n)) \ L" + using Cauchy_convergent convergent_def LIMSEQ_unique by fastforce + define L where "L = (\ i. lim (\n. \\\ i (X n)))" + hence "\i. (\n. \\\ i (X n)) \ L $ i" + using obs theI_unique[of "\L. (\n. \\\ _ (X n)) \ L" "L $ _"] by (simp add: lim_def) + thus "convergent X" + using col_convergence unfolding convergent_def by blast +qed + +lemma exp_similiar_sq_mtx_diag_eq: + assumes "mtx_invertible P" + and "A = P\<^sup>-\<^sup>1 * (\\\\ i. f i) * P" + shows "exp A = P\<^sup>-\<^sup>1 * exp (\\\\ i. f i) * P" +proof(unfold exp_def power_similiar_sq_mtx_diag_eq[OF assms]) + have "(\n. P\<^sup>-\<^sup>1 * (\\\\ i. f i ^ n) * P /\<^sub>R fact n) = + (\n. 
P\<^sup>-\<^sup>1 * ((\\\\ i. f i ^ n) /\<^sub>R fact n) * P)" + by simp + also have "... = (\n. P\<^sup>-\<^sup>1 * ((\\\\ i. f i ^ n) /\<^sub>R fact n)) * P" + apply(subst suminf_multr[OF bounded_linear.summable[OF bounded_linear_mult_right]]) + unfolding power_sq_mtx_diag[symmetric] by (simp_all add: summable_exp_generic) + also have "... = P\<^sup>-\<^sup>1 * (\n. (\\\\ i. f i ^ n) /\<^sub>R fact n) * P" + apply(subst suminf_mult[of _ "P\<^sup>-\<^sup>1"]) + unfolding power_sq_mtx_diag[symmetric] + by (simp_all add: summable_exp_generic) + finally show "(\n. P\<^sup>-\<^sup>1 * (\\\\ i. f i ^ n) * P /\<^sub>R fact n) = + P\<^sup>-\<^sup>1 * (\n. sq_mtx_diag f ^ n /\<^sub>R fact n) * P" + unfolding power_sq_mtx_diag by simp +qed + +lemma exp_similiar_sq_mtx_diag: + assumes "A \ sq_mtx_diag f" + shows "exp A \ exp (sq_mtx_diag f)" + using assms exp_similiar_sq_mtx_diag_eq + unfolding similar_sq_mtx_def by blast + +lemma suminf_sq_mtx_diag: + assumes "\i. (\n. f n i) sums (suminf (\n. f n i))" + shows "(\n. (\\\\ i. f n i)) = (\\\\ i. \n. f n i)" +proof(rule suminfI, unfold sums_def LIMSEQ_iff, clarsimp simp: norm_sq_mtx_diag) + let ?g = "\n i. \(\nn. f n i)\" + fix r::real assume "r > 0" + have "\i. \no. \n\no. ?g n i < r" + using assms \r > 0\ unfolding sums_def LIMSEQ_iff by clarsimp + then obtain N where key: "\i. \n\N. ?g n i < r" + using finite_nat_minimal_witness[of "\i n. ?g n i < r"] by blast + {fix n::nat + assume "n \ N" + obtain i where i_def: "Max {x. \i. x = ?g n i} = ?g n i" + using cMax_finite_ex[of "{x. \i. x = ?g n i}"] by auto + hence "?g n i < r" + using key \n \ N\ by blast + hence "Max {x. \i. x = ?g n i} < r" + unfolding i_def[symmetric] .} + thus "\N. \n\N. Max {x. \i. x = ?g n i} < r" + by blast +qed + +lemma exp_sq_mtx_diag: "exp (sq_mtx_diag f) = (\\\\ i. exp (f i))" + apply(unfold exp_def, simp add: power_sq_mtx_diag scaleR_sq_mtx_diag) + apply(rule suminf_sq_mtx_diag) + using exp_converges[of "f _"] + unfolding sums_def LIMSEQ_iff exp_def by force + +lemma exp_scaleR_diagonal1: + assumes "mtx_invertible P" and "A = P\<^sup>-\<^sup>1 * (\\\\ i. f i) * P" + shows "exp (t *\<^sub>R A) = P\<^sup>-\<^sup>1 * (\\\\ i. exp (t * f i)) * P" +proof- + have "exp (t *\<^sub>R A) = exp (P\<^sup>-\<^sup>1 * (t *\<^sub>R sq_mtx_diag f) * P)" + using assms by simp + also have "... = P\<^sup>-\<^sup>1 * (\\\\ i. exp (t * f i)) * P" + by (metis assms(1) exp_similiar_sq_mtx_diag_eq exp_sq_mtx_diag scaleR_sq_mtx_diag) + finally show "exp (t *\<^sub>R A) = P\<^sup>-\<^sup>1 * (\\\\ i. exp (t * f i)) * P" . +qed + +lemma exp_scaleR_diagonal2: + assumes "mtx_invertible P" and "A = P * (\\\\ i. f i) * P\<^sup>-\<^sup>1" + shows "exp (t *\<^sub>R A) = P * (\\\\ i. exp (t * f i)) * P\<^sup>-\<^sup>1" + apply(subst sq_mtx_inv_idempotent[OF assms(1), symmetric]) + apply(rule exp_scaleR_diagonal1) + by (simp_all add: assms) + + +subsection \ Examples \ + +definition "mtx A = to_mtx (vector (map vector A))" + +lemma vector_nth_eq: "(vector A) $ i = foldr (\x f n. (f (n + 1))(n := x)) A (\n x. 0) 1 i" + unfolding vector_def by simp + +lemma mtx_ith_eq[simp]: "mtx A $$ i $ j = foldr (\x f n. (f (n + 1))(n := x)) + (map (\l. vec_lambda (foldr (\x f n. (f (n + 1))(n := x)) l (\n x. 0) 1)) A) (\n x. 
0) 1 i $ j" + unfolding mtx_def vector_def by (simp add: vector_nth_eq) + +subsubsection \ 2x2 matrices \ + +lemma mtx2_eq_iff: "(mtx + ([a1, b1] # + [c1, d1] # []) :: 2 sq_mtx) = mtx + ([a2, b2] # + [c2, d2] # []) \ a1 = a2 \ b1 = b2 \ c1 = c2 \ d1 = d2" + apply(simp add: sq_mtx_eq_iff, safe) + using exhaust_2 by force+ + +lemma mtx2_to_mtx: "mtx + ([a, b] # + [c, d] # []) = + to_mtx (\ i j::2. if i=1 \ j=1 then a + else (if i=1 \ j=2 then b + else (if i=2 \ j=1 then c + else d)))" + apply(subst sq_mtx_eq_iff) + using exhaust_2 by force + +abbreviation diag2 :: "real \ real \ 2 sq_mtx" + where "diag2 \\<^sub>1 \\<^sub>2 \ mtx + ([\\<^sub>1, 0] # + [0, \\<^sub>2] # [])" + +lemma diag2_eq: "diag2 (\ 1) (\ 2) = (\\\\ i. \ i)" + apply(simp add: sq_mtx_eq_iff) + using exhaust_2 by (force simp: axis_def) + +lemma one_mtx2: "(1::2 sq_mtx) = diag2 1 1" + apply(subst sq_mtx_eq_iff) + using exhaust_2 by force + +lemma zero_mtx2: "(0::2 sq_mtx) = diag2 0 0" + by (simp add: sq_mtx_eq_iff) + +lemma scaleR_mtx2: "k *\<^sub>R mtx + ([a, b] # + [c, d] # []) = mtx + ([k*a, k*b] # + [k*c, k*d] # [])" + by (simp add: sq_mtx_eq_iff) + +lemma uminus_mtx2: "-mtx + ([a, b] # + [c, d] # []) = (mtx + ([-a, -b] # + [-c, -d] # [])::2 sq_mtx)" + by (simp add: sq_mtx_uminus_eq sq_mtx_eq_iff) + +lemma plus_mtx2: "mtx + ([a1, b1] # + [c1, d1] # []) + mtx + ([a2, b2] # + [c2, d2] # []) = ((mtx + ([a1+a2, b1+b2] # + [c1+c2, d1+d2] # []))::2 sq_mtx)" + by (simp add: sq_mtx_eq_iff) + +lemma minus_mtx2: "mtx + ([a1, b1] # + [c1, d1] # []) - mtx + ([a2, b2] # + [c2, d2] # []) = ((mtx + ([a1-a2, b1-b2] # + [c1-c2, d1-d2] # []))::2 sq_mtx)" + by (simp add: sq_mtx_eq_iff) + +lemma times_mtx2: "mtx + ([a1, b1] # + [c1, d1] # []) * mtx + ([a2, b2] # + [c2, d2] # []) = ((mtx + ([a1*a2+b1*c2, a1*b2+b1*d2] # + [c1*a2+d1*c2, c1*b2+d1*d2] # []))::2 sq_mtx)" + unfolding sq_mtx_times_eq UNIV_2 + by (simp add: sq_mtx_eq_iff) + +subsubsection \ 3x3 matrices \ + +lemma mtx3_to_mtx: "mtx + ([a\<^sub>1\<^sub>1, a\<^sub>1\<^sub>2, a\<^sub>1\<^sub>3] # + [a\<^sub>2\<^sub>1, a\<^sub>2\<^sub>2, a\<^sub>2\<^sub>3] # + [a\<^sub>3\<^sub>1, a\<^sub>3\<^sub>2, a\<^sub>3\<^sub>3] # []) = + to_mtx (\ i j::3. if i=1 \ j=1 then a\<^sub>1\<^sub>1 + else (if i=1 \ j=2 then a\<^sub>1\<^sub>2 + else (if i=1 \ j=3 then a\<^sub>1\<^sub>3 + else (if i=2 \ j=1 then a\<^sub>2\<^sub>1 + else (if i=2 \ j=2 then a\<^sub>2\<^sub>2 + else (if i=2 \ j=3 then a\<^sub>2\<^sub>3 + else (if i=3 \ j=1 then a\<^sub>3\<^sub>1 + else (if i=3 \ j=2 then a\<^sub>3\<^sub>2 + else a\<^sub>3\<^sub>3))))))))" + apply(simp add: sq_mtx_eq_iff) + using exhaust_3 by force + +abbreviation diag3 :: "real \ real \ real \ 3 sq_mtx" + where "diag3 \\<^sub>1 \\<^sub>2 \\<^sub>3 \ mtx + ([\\<^sub>1, 0, 0] # + [0, \\<^sub>2, 0] # + [0, 0, \\<^sub>3] # [])" + +lemma diag3_eq: "diag3 (\ 1) (\ 2) (\ 3) = (\\\\ i. 
\ i)" + apply(simp add: sq_mtx_eq_iff) + using exhaust_3 by (force simp: axis_def) + +lemma one_mtx3: "(1::3 sq_mtx) = diag3 1 1 1" + apply(subst sq_mtx_eq_iff) + using exhaust_3 by force + +lemma zero_mtx3: "(0::3 sq_mtx) = diag3 0 0 0" + by (simp add: sq_mtx_eq_iff) + +lemma scaleR_mtx3: "k *\<^sub>R mtx + ([a\<^sub>1\<^sub>1, a\<^sub>1\<^sub>2, a\<^sub>1\<^sub>3] # + [a\<^sub>2\<^sub>1, a\<^sub>2\<^sub>2, a\<^sub>2\<^sub>3] # + [a\<^sub>3\<^sub>1, a\<^sub>3\<^sub>2, a\<^sub>3\<^sub>3] # []) = mtx + ([k*a\<^sub>1\<^sub>1, k*a\<^sub>1\<^sub>2, k*a\<^sub>1\<^sub>3] # + [k*a\<^sub>2\<^sub>1, k*a\<^sub>2\<^sub>2, k*a\<^sub>2\<^sub>3] # + [k*a\<^sub>3\<^sub>1, k*a\<^sub>3\<^sub>2, k*a\<^sub>3\<^sub>3] # [])" + by (simp add: sq_mtx_eq_iff) + +lemma plus_mtx3: "mtx + ([a\<^sub>1\<^sub>1, a\<^sub>1\<^sub>2, a\<^sub>1\<^sub>3] # + [a\<^sub>2\<^sub>1, a\<^sub>2\<^sub>2, a\<^sub>2\<^sub>3] # + [a\<^sub>3\<^sub>1, a\<^sub>3\<^sub>2, a\<^sub>3\<^sub>3] # []) + mtx + ([b\<^sub>1\<^sub>1, b\<^sub>1\<^sub>2, b\<^sub>1\<^sub>3] # + [b\<^sub>2\<^sub>1, b\<^sub>2\<^sub>2, b\<^sub>2\<^sub>3] # + [b\<^sub>3\<^sub>1, b\<^sub>3\<^sub>2, b\<^sub>3\<^sub>3] # []) = (mtx + ([a\<^sub>1\<^sub>1+b\<^sub>1\<^sub>1, a\<^sub>1\<^sub>2+b\<^sub>1\<^sub>2, a\<^sub>1\<^sub>3+b\<^sub>1\<^sub>3] # + [a\<^sub>2\<^sub>1+b\<^sub>2\<^sub>1, a\<^sub>2\<^sub>2+b\<^sub>2\<^sub>2, a\<^sub>2\<^sub>3+b\<^sub>2\<^sub>3] # + [a\<^sub>3\<^sub>1+b\<^sub>3\<^sub>1, a\<^sub>3\<^sub>2+b\<^sub>3\<^sub>2, a\<^sub>3\<^sub>3+b\<^sub>3\<^sub>3] # [])::3 sq_mtx)" + by (subst sq_mtx_eq_iff) simp + +lemma minus_mtx3: "mtx + ([a\<^sub>1\<^sub>1, a\<^sub>1\<^sub>2, a\<^sub>1\<^sub>3] # + [a\<^sub>2\<^sub>1, a\<^sub>2\<^sub>2, a\<^sub>2\<^sub>3] # + [a\<^sub>3\<^sub>1, a\<^sub>3\<^sub>2, a\<^sub>3\<^sub>3] # []) - mtx + ([b\<^sub>1\<^sub>1, b\<^sub>1\<^sub>2, b\<^sub>1\<^sub>3] # + [b\<^sub>2\<^sub>1, b\<^sub>2\<^sub>2, b\<^sub>2\<^sub>3] # + [b\<^sub>3\<^sub>1, b\<^sub>3\<^sub>2, b\<^sub>3\<^sub>3] # []) = (mtx + ([a\<^sub>1\<^sub>1-b\<^sub>1\<^sub>1, a\<^sub>1\<^sub>2-b\<^sub>1\<^sub>2, a\<^sub>1\<^sub>3-b\<^sub>1\<^sub>3] # + [a\<^sub>2\<^sub>1-b\<^sub>2\<^sub>1, a\<^sub>2\<^sub>2-b\<^sub>2\<^sub>2, a\<^sub>2\<^sub>3-b\<^sub>2\<^sub>3] # + [a\<^sub>3\<^sub>1-b\<^sub>3\<^sub>1, a\<^sub>3\<^sub>2-b\<^sub>3\<^sub>2, a\<^sub>3\<^sub>3-b\<^sub>3\<^sub>3] # [])::3 sq_mtx)" + by (simp add: sq_mtx_eq_iff) + +lemma times_mtx3: "mtx + ([a\<^sub>1\<^sub>1, a\<^sub>1\<^sub>2, a\<^sub>1\<^sub>3] # + [a\<^sub>2\<^sub>1, a\<^sub>2\<^sub>2, a\<^sub>2\<^sub>3] # + [a\<^sub>3\<^sub>1, a\<^sub>3\<^sub>2, a\<^sub>3\<^sub>3] # []) * mtx + ([b\<^sub>1\<^sub>1, b\<^sub>1\<^sub>2, b\<^sub>1\<^sub>3] # + [b\<^sub>2\<^sub>1, b\<^sub>2\<^sub>2, b\<^sub>2\<^sub>3] # + [b\<^sub>3\<^sub>1, b\<^sub>3\<^sub>2, b\<^sub>3\<^sub>3] # []) = (mtx + ([a\<^sub>1\<^sub>1*b\<^sub>1\<^sub>1+a\<^sub>1\<^sub>2*b\<^sub>2\<^sub>1+a\<^sub>1\<^sub>3*b\<^sub>3\<^sub>1, a\<^sub>1\<^sub>1*b\<^sub>1\<^sub>2+a\<^sub>1\<^sub>2*b\<^sub>2\<^sub>2+a\<^sub>1\<^sub>3*b\<^sub>3\<^sub>2, a\<^sub>1\<^sub>1*b\<^sub>1\<^sub>3+a\<^sub>1\<^sub>2*b\<^sub>2\<^sub>3+a\<^sub>1\<^sub>3*b\<^sub>3\<^sub>3] # + [a\<^sub>2\<^sub>1*b\<^sub>1\<^sub>1+a\<^sub>2\<^sub>2*b\<^sub>2\<^sub>1+a\<^sub>2\<^sub>3*b\<^sub>3\<^sub>1, a\<^sub>2\<^sub>1*b\<^sub>1\<^sub>2+a\<^sub>2\<^sub>2*b\<^sub>2\<^sub>2+a\<^sub>2\<^sub>3*b\<^sub>3\<^sub>2, a\<^sub>2\<^sub>1*b\<^sub>1\<^sub>3+a\<^sub>2\<^sub>2*b\<^sub>2\<^sub>3+a\<^sub>2\<^sub>3*b\<^sub>3\<^sub>3] # + 
[a\<^sub>3\<^sub>1*b\<^sub>1\<^sub>1+a\<^sub>3\<^sub>2*b\<^sub>2\<^sub>1+a\<^sub>3\<^sub>3*b\<^sub>3\<^sub>1, a\<^sub>3\<^sub>1*b\<^sub>1\<^sub>2+a\<^sub>3\<^sub>2*b\<^sub>2\<^sub>2+a\<^sub>3\<^sub>3*b\<^sub>3\<^sub>2, a\<^sub>3\<^sub>1*b\<^sub>1\<^sub>3+a\<^sub>3\<^sub>2*b\<^sub>2\<^sub>3+a\<^sub>3\<^sub>3*b\<^sub>3\<^sub>3] # [])::3 sq_mtx)" + unfolding sq_mtx_times_eq + unfolding UNIV_3 by (simp add: sq_mtx_eq_iff) + +end \ No newline at end of file diff --git a/thys/Matrices_for_ODEs/document/root.bib b/thys/Matrices_for_ODEs/document/root.bib new file mode 100644 --- /dev/null +++ b/thys/Matrices_for_ODEs/document/root.bib @@ -0,0 +1,70 @@ +%% This BibTeX bibliography file was created using BibDesk. +%% http://bibdesk.sourceforge.net/ + + +%% Created for Jonathan Julian Huerta y Munive at 2019-06-10 16:26:22 +0100 + + +%% Saved with string encoding Unicode (UTF-8) + +@article{ArmstrongGS16, + Author = {Alasdair Armstrong and Victor B. F. Gomes and Georg Struth}, + Journal = {Formal Aspects of Computing}, + Number = 2, + Pages = {265--293}, + Title = {Building program construction and verification tools from algebraic principles}, + Volume = 28, + Year = 2016} + +@Unpublished{FosterMS19, + author = {Simon Foster and + Jonathan Juli{\'{a}}n Huerta y Munive and + Georg Struth}, + title = {Differential {H}oare Logics and Refinement Calculi for Hybrid Systems + with {I}sabelle/{HOL}}, + journal = {CoRR}, + volume = {abs/1910.13554}, + note = {\href{http://arxiv.org/abs/1909.05618}{arXiv:1909.05618}[cs.LO]}, + year = {2019} +} + +@Unpublished{MuniveS19, + author = {Huerta y Munive, Jonathan Juli\'an and Struth, G.}, + title = {Predicate Transformer Semantics for Hybrid Systems: Verification Components for {I}sabelle/{HOL}}, + note = {\href{https://arxiv.org/abs/arXiv:1909.05618}{arXiv:1909.05618} [cs.LO]}, + year = 2019, +} + +@article{ImmlerH12a, + Author = {Fabian Immler and Johannes H{\"{o}}lzl}, + Journal = {Archive of Formal Proofs}, + Title = {Ordinary Differential Equations}, + Url = {https://www.isa-afp.org/entries/Ordinary_Differential_Equations.shtml}, + Year = {2012}, + Bdsk-Url-1 = {https://www.isa-afp.org/entries/Ordinary_Differential_Equations.shtml}} + +@book{Platzer10, + Author = {Andr\'e Platzer}, + Publisher = {Springer}, + Title = {Logical Analysis of Hybrid Systems}, + Year = 2010} + +@article{afp:hybrid, + Author = {Huerta y Munive, Jonathan Juli\'an}, + Journal = {Archive of Formal Proofs}, + Title = {Verification Components for Hybrid Systems}, + Year = 2019} + +@article{afp:transem, + Author = {Georg Struth}, + Journal = {Archive of Formal Proofs}, + Title = {Transformer Semantics}, + Year = 2018} + +@article{afp:vericomp, + Author = {Victor B. F. Gomes and Georg Struth}, + Journal = {Archive of Formal Proofs}, + Title = {Program Construction and Verification Components Based on {K}leene Algebra}, + Year = 2016} + + diff --git a/thys/Matrices_for_ODEs/document/root.tex b/thys/Matrices_for_ODEs/document/root.tex new file mode 100644 --- /dev/null +++ b/thys/Matrices_for_ODEs/document/root.tex @@ -0,0 +1,76 @@ +\documentclass[11pt,a4paper]{article} +\usepackage{isabelle,isabellesym} + +% further packages required for unusual symbols (see also +% isabellesym.sty), use only when needed + +\usepackage{amssymb} + %for \, \, \, \, \, \, + %\, \, \, \, \, + %\, \, \ + +%\usepackage{eurosym} + %for \ + +%\usepackage[only,bigsqcap]{stmaryrd} + %for \ + +%\usepackage{eufrak} + %for \ ... \, \ ... 
\ (also included in amssymb) + +%\usepackage{textcomp} + %for \, \, \, \, \, + %\ + +% this should be the last package used +\usepackage{pdfsetup} + +% urls in roman style, theory text in math-similar italics +\urlstyle{rm} +\isabellestyle{it} + +% for uniform font size +%\renewcommand{\isastyle}{\isastyleminor} + +\renewcommand{\isasymlonglonglongrightarrow}{$\longrightarrow$} + + +\begin{document} + +\title{Matrices for ODEs} +\author{Jonathan Juli\'an Huerta y Munive} +\maketitle + +\begin{abstract} + Our theories formalise various matrix properties that serve to establish + existence, uniqueness and characterisation of the solution to affine + systems of ordinary differential equations (ODEs). In particular, we + formalise the operator and maximum norm of matrices. Then we use + them to prove that square matrices form a Banach space, and in this + setting, we show an instance of Picard-Lindel\"of's theorem for affine + systems of ODEs. Finally, we apply this formalisation by verifying three + simple hybrid programs. +\end{abstract} + +\tableofcontents + +% sane default for proof documents +\parindent 0pt\parskip 0.5ex + +\section{Introductory Remarks} + +Affine systems of ordinary differential equations (ODEs) are those whose associated vector fields are linear transformations. That is, if there is a matrix-valued function $A:\mathbb{R}\to M_{n\times n}(\mathbb{R})$ and vector function $B:\mathbb{R}\to\mathbb{R}^n$ such that the system of ODEs $x'\, t=f\, (t,x\, t)$ can be rewritten as $x'\, t=A\cdot (x\, t)+B\, t$, then the system is affine. Similarly, the associated linear system of ODEs is $x'\, t=A\cdot (x\, t)$ for matrix-vector multiplication $\cdot$. Our theories formalise affine (hence linear) systems of ordinary differential equations. For this purpose, we extend the ODE libraries of~\cite{ImmlerH12a} and linear algebra in HOL-Analysis. We add to them various results about invertibility of matrices, their diagonalisation, their operator and maximum norms, and properties relating them with vectors. We also define a new type of square matrices and prove that this is a Banach space. Then we obtain results about derivatives of matrix-vector multiplication and use them to prove Picard-Lindel\"of's theorem as formalised in~\cite{afp:hybrid}. The Banach space instance allows us to characterise the general solution to affine systems of ODEs in terms of the matrix-exponential. Finally, we use the components of~\cite{afp:hybrid} to do three simple verification examples in the style of differential dynamic logic~\cite{Platzer10} as showcased in~\cite{ArmstrongGS16,FosterMS19,MuniveS19}. A paper with a detailed overview of the various contributions that this formalisation adds to the verification components will be available soon in ArXiv. 
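+
+For orientation, we also recall the textbook shape of this characterisation in the constant-coefficient case; the formalised statements appear in the theories below and may be phrased somewhat differently. If $A$ and $B$ do not depend on $t$, the solution of $x'\, t=A\cdot (x\, t)+B$ with initial condition $x\, 0=x_0$ is
+\[
+  x\, t = e^{tA}\cdot x_0+\int_0^t e^{(t-s)A}\cdot B\,\mathrm{d}s,
+\]
+which in the linear case $B=0$ reduces to $x\, t=e^{tA}\cdot x_0$.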
+ +% generated text of all theories +\input{session} + +% optional bibliography +\bibliographystyle{abbrv} +\bibliography{root} + +\end{document} + +%%% Local Variables: +%%% mode: latex +%%% TeX-master: t +%%% End: diff --git a/thys/Power_Sum_Polynomials/Power_Sum_Polynomials.thy b/thys/Power_Sum_Polynomials/Power_Sum_Polynomials.thy new file mode 100644 --- /dev/null +++ b/thys/Power_Sum_Polynomials/Power_Sum_Polynomials.thy @@ -0,0 +1,412 @@ +section \Power sum polynomials\ +(* + File: Power_Sum_Polynomials.thy + Author: Manuel Eberl, TU München +*) +theory Power_Sum_Polynomials +imports + "Symmetric_Polynomials.Symmetric_Polynomials" + "HOL-Computational_Algebra.Field_as_Ring" + Power_Sum_Polynomials_Library +begin + +subsection \Definition\ + +text \ + For $n$ indeterminates $X_1,\ldots,X_n$, we define the $k$-th power sum polynomial as + \[p_k(X_1, \ldots, X_n) = X_1^k + \ldots + X_n^k\ .\] +\ +lift_definition powsum_mpoly_aux :: "nat set \ nat \ (nat \\<^sub>0 nat) \\<^sub>0 'a :: {semiring_1,zero_neq_one}" is + "\X k mon. if infinite X \ k = 0 \ mon \ 0 then 0 + else if k = 0 \ mon = 0 then of_nat (card X) + else if finite X \ (\x\X. mon = Poly_Mapping.single x k) then 1 else 0" + by auto + +lemma lookup_powsum_mpoly_aux: + "Poly_Mapping.lookup (powsum_mpoly_aux X k) mon = + (if infinite X \ k = 0 \ mon \ 0 then 0 + else if k = 0 \ mon = 0 then of_nat (card X) + else if finite X \ (\x\X. mon = Poly_Mapping.single x k) then 1 else 0)" + by transfer' simp + +lemma lookup_sym_mpoly_aux_monom_singleton [simp]: + assumes "finite X" "x \ X" "k > 0" + shows "Poly_Mapping.lookup (powsum_mpoly_aux X k) (Poly_Mapping.single x k) = 1" + using assms by (auto simp: lookup_powsum_mpoly_aux) + +lemma lookup_sym_mpoly_aux_monom_singleton': + assumes "finite X" "k > 0" + shows "Poly_Mapping.lookup (powsum_mpoly_aux X k) (Poly_Mapping.single x k) = (if x \ X then 1 else 0)" + using assms by (auto simp: lookup_powsum_mpoly_aux) + +lemma keys_powsum_mpoly_aux: "m \ keys (powsum_mpoly_aux A k) \ keys m \ A" + by transfer' (auto split: if_splits simp: keys_monom_of_set) + + +lift_definition powsum_mpoly :: "nat set \ nat \ 'a :: {semiring_1,zero_neq_one} mpoly" is + "powsum_mpoly_aux" . + +lemma vars_powsum_mpoly_subset: "vars (powsum_mpoly A k) \ A" + using keys_powsum_mpoly_aux by (auto simp: vars_def powsum_mpoly.rep_eq) + +lemma powsum_mpoly_infinite: "\finite A \ powsum_mpoly A k = 0" + by (transfer, transfer) auto + +lemma coeff_powsum_mpoly: + "MPoly_Type.coeff (powsum_mpoly X k) mon = + (if infinite X \ k = 0 \ mon \ 0 then 0 + else if k = 0 \ mon = 0 then of_nat (card X) + else if finite X \ (\x\X. 
mon = Poly_Mapping.single x k) then 1 else 0)" + by transfer' (simp add: lookup_powsum_mpoly_aux) + +lemma coeff_powsum_mpoly_0_right: + "MPoly_Type.coeff (powsum_mpoly X 0) mon = (if mon = 0 then of_nat (card X) else 0)" + by transfer' (auto simp add: lookup_powsum_mpoly_aux) + +lemma coeff_powsum_mpoly_singleton: + assumes "finite X" "k > 0" + shows "MPoly_Type.coeff (powsum_mpoly X k) (Poly_Mapping.single x k) = (if x \ X then 1 else 0)" + using assms by transfer' (simp add: lookup_powsum_mpoly_aux) + +lemma coeff_powsum_mpoly_singleton_eq_1 [simp]: + assumes "finite X" "x \ X" "k > 0" + shows "MPoly_Type.coeff (powsum_mpoly X k) (Poly_Mapping.single x k) = 1" + using assms by (simp add: coeff_powsum_mpoly_singleton) + +lemma coeff_powsum_mpoly_singleton_eq_0 [simp]: + assumes "finite X" "x \ X" "k > 0" + shows "MPoly_Type.coeff (powsum_mpoly X k) (Poly_Mapping.single x k) = 0" + using assms by (simp add: coeff_powsum_mpoly_singleton) + +lemma powsum_mpoly_0 [simp]: "powsum_mpoly X 0 = of_nat (card X)" + by (intro mpoly_eqI ext) (auto simp: coeff_powsum_mpoly_0_right of_nat_mpoly_eq mpoly_coeff_Const) + +lemma powsum_mpoly_empty [simp]: "powsum_mpoly {} k = 0" + by (intro mpoly_eqI) (auto simp: coeff_powsum_mpoly) + +lemma powsum_mpoly_altdef: "powsum_mpoly X k = (\x\X. monom (Poly_Mapping.single x k) 1)" +proof (cases "finite X") + case [simp]: True + show ?thesis + proof (cases "k = 0") + case True + thus ?thesis by auto + next + case False + show ?thesis + proof (intro mpoly_eqI, goal_cases) + case (1 mon) + show ?case using False + by (cases "\x\X. mon = Poly_Mapping.single x k") + (auto simp: coeff_powsum_mpoly coeff_monom when_def) + qed + qed +qed (auto simp: powsum_mpoly_infinite) + +text \ + Power sum polynomials are symmetric: +\ +lemma symmetric_powsum_mpoly [intro]: + assumes "A \ B" + shows "symmetric_mpoly A (powsum_mpoly B k)" + unfolding powsum_mpoly_altdef +proof (rule symmetric_mpoly_symmetric_sum) + fix x \ + assume "x \ B" "\ permutes A" + thus "mpoly_map_vars \ (MPoly_Type.monom (Poly_Mapping.single x k) 1) = + MPoly_Type.monom (Poly_Mapping.single (\ x) k) 1" + using assms by (auto simp: mpoly_map_vars_monom permutes_bij permutep_single + bij_imp_bij_inv permutes_inv_inv) +qed (use assms in \auto simp: permutes_subset\) + +lemma insertion_powsum_mpoly [simp]: "insertion f (powsum_mpoly X k) = (\i\X. f i ^ k)" + unfolding powsum_mpoly_altdef insertion_sum insertion_single by simp + +lemma powsum_mpoly_nz: + assumes "finite X" "X \ {}" "k > 0" + shows "(powsum_mpoly X k :: 'a :: {semiring_1, zero_neq_one} mpoly) \ 0" +proof - + from assms obtain x where "x \ X" by auto + hence "coeff (powsum_mpoly X k) (Poly_Mapping.single x k) = (1 :: 'a)" + using assms by (auto simp: coeff_powsum_mpoly) + thus ?thesis by auto +qed + +lemma powsum_mpoly_eq_0_iff: + assumes "k > 0" + shows "powsum_mpoly X k = 0 \ infinite X \ X = {}" + using assms powsum_mpoly_nz[of X k] by (auto simp: powsum_mpoly_infinite) + + +subsection \The Girard--Newton Theorem\ + +text \ + The following is a nice combinatorial proof of the Girard--Newton Theorem due to + Doron Zeilberger~\cite{zeilberger}. + + The precise statement is this: + + Let $e_k$ denote the $k$-th elementary symmetric polynomial in $X_1,\ldots,X_n$. + This is the sum of all monomials that can be formed by taking the product of $k$ + distinct variables. + + Next, let $p_k = X_1^k + \ldots + X_n^k$ denote that $k$-th symmetric power sum polynomial + in $X_1,\ldots,X_n$. 
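+
+  For instance, for $n = 3$ and $k = 2$ these polynomials are
+  \[e_2 = X_1 X_2 + X_1 X_3 + X_2 X_3 \qquad \mbox{and} \qquad p_2 = X_1^2 + X_2^2 + X_3^2\ .\]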
+ + Then the following equality holds: + \[(-1)^k k e_k + \sum_{i=0}^{k-1} (-1)^i e_i p_{k-i}\] +\ +theorem Girard_Newton: + assumes "finite X" + shows "(-1) ^ k * of_nat k * sym_mpoly X k + + (\i :: "(nat set \ nat) set" + where "\ = {(A, j). A \ X \ card A \ k \ j \ X \ (card A = k \ j \ A)}" + define \1 :: "(nat set \ nat) set" + where "\1 = {A\Pow X. card A < k} \ X" + define \2 :: "(nat set \ nat) set" + where "\2 = (SIGMA A:{A\Pow X. card A = k}. A)" + + have \_split: "\ = \1 \ \2" "\1 \ \2 = {}" + by (auto simp: \_def \1_def \2_def) + have [intro]: "finite \1" "finite \2" + using assms finite_subset[of _ X] by (auto simp: \1_def \2_def intro!: finite_SigmaI) + have [intro]: "finite \" + by (subst \_split) auto + + \ \ + We define a `weight' function \w\ from \\\ to the ring of polynomials as + \[w(A,j) = (-1)^{|A|} x_j^{k-|A|} \prod_{i\in A} x_i\ .\] + \ + define w :: "nat set \ nat \ 'a mpoly" + where "w = (\(A, j). monom (monom_of_set A + sng j (k - card A)) ((-1) ^ card A))" + + \ \The sum of these weights over all of \\\ is precisely the sum that we want to show equals 0:\ + have "?lhs = (\x\\. w x)" + proof - + have "(\x\\. w x) = (\x\\1. w x) + (\x\\2. w x)" + by (subst \_split, subst sum.union_disjoint, use \_split(2) in auto) + + also have "(\x\\1. w x) = (\ix\\1. w x) = (\A | A \ X \ card A < k. \j\X. w (A, j))" + using assms by (subst sum.Sigma) (auto simp: \1_def) + also have "\ = (\A | A \ X \ card A < k. \j\X. + monom (monom_of_set A) ((-1) ^ card A) * monom (sng j (k - card A)) 1)" + unfolding w_def by (intro sum.cong) (auto simp: mult_monom) + also have "\ = (\A | A \ X \ card A < k. monom (monom_of_set A) ((-1) ^ card A) * + powsum_mpoly X (k - card A))" + by (simp add: sum_distrib_left powsum_mpoly_altdef) + also have "\ = (\(i,A) \ (SIGMA i:{.. X \ card A = i}). + monom (monom_of_set A) ((-1) ^ i) * powsum_mpoly X (k - i))" + by (rule sum.reindex_bij_witness[of _ snd "\A. (card A, A)"]) auto + also have "\ = (\iA | A \ X \ card A = i. + monom (monom_of_set A) 1 * monom 0 ((-1) ^ i) * powsum_mpoly X (k - i))" + using assms by (subst sum.Sigma) (auto simp: mult_monom) + also have "\ = (\ix\\2. w x) = (-1) ^ k * of_nat k * sym_mpoly X k" + proof - + have "(\x\\2. w x) = (\(A,j)\\2. monom (monom_of_set A) ((- 1) ^ k))" + by (intro sum.cong) (auto simp: \2_def w_def mpoly_monom_0_eq_Const intro!: sum.cong) + also have "\ = (\A | A \ X \ card A = k. \j\A. monom (monom_of_set A) ((- 1) ^ k))" + using assms finite_subset[of _ X] by (subst sum.Sigma) (auto simp: \2_def) + also have "(\A. monom (monom_of_set A) ((- 1) ^ k) :: 'a mpoly) = + (\A. monom 0 ((-1) ^ k) * monom (monom_of_set A) 1)" + by (auto simp: fun_eq_iff mult_monom) + also have "monom 0 ((-1) ^ k) = (-1) ^ k" + by (auto simp: mpoly_monom_0_eq_Const mpoly_Const_power mpoly_Const_uminus) + also have "(\A | A \ X \ card A = k. \j\A. (- 1) ^ k * monom (monom_of_set A) 1) = + ((-1) ^ k * of_nat k * sym_mpoly X k :: 'a mpoly)" + by (auto simp: sum_distrib_left sum_distrib_right mult_ac sym_mpoly_altdef) + finally show ?thesis . + qed + + finally show ?thesis by (simp add: algebra_simps) + qed + + \ \Next, we show that the weights sum to 0:\ + also have "(\x\\. w x) = 0" + proof - + \ \We define a function \T\ that is a involutory permutation of \\\. + To be more precise, it bijectively maps those elements \(A,j)\ of \\\ with \j \ A\ + to those where \j \ A\ and the other way round. 
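+ Concretely, $T(A, j) = (A \setminus \{j\},\, j)$ when $j \in A$ and $T(A, j) = (A \cup \{j\},\, j)$ otherwise, exactly as in the definition below.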
`Involutory' means that \T\ is its + own inverse function, i.\,e.\ $T(T(x)) = x$.\ + define T :: "nat set \ nat \ nat set \ nat" + where "T = (\(A, j). if j \ A then (A - {j}, j) else (insert j A, j))" + have [simp]: "T (T x) = x" for x + by (auto simp: T_def split: prod.splits) + have [simp]: "T x \ \" if "x \ \" for x + proof - + have [simp]: "n \ n - Suc 0 \ n = 0" for n + by auto + show ?thesis using that assms finite_subset[of _ X] + by (auto simp: T_def \_def split: prod.splits) + qed + have "snd (T x) \ fst (T x) \ snd x \ fst x" if "x \ \" for x + by (auto simp: T_def split: prod.splits) + hence bij: "bij_betw T {x\\. snd x \ fst x} {x\\. snd x \ fst x}" + by (intro bij_betwI[of _ _ _ T]) auto + + \\Crucially, we show that \<^term>\T\ flips the weight of each element:\ + have [simp]: "w (T x) = -w x" if "x \ \" for x + proof - + obtain A j where [simp]: "x = (A, j)" by force + + \ \Since \<^term>\T\ is an involution, we can assume w.\,l.\,o.\,g.\ that \j \ A\:\ + have aux: "w (T (A, j)) = - w (A, j)" if "(A, j) \ \" "j \ A" for j A + proof - + from that have [simp]: "j \ A" "A \ X" and "k > 0" + using finite_subset[OF _ assms, of A] by (auto simp: \_def intro!: Nat.gr0I) + have [simp]: "finite A" + using finite_subset[OF _ assms, of A] by auto + from that have "card A \ k" + by (auto simp: \_def) + + have card: "card A = Suc (card (A - {j}))" + using card.remove[of A j] by auto + hence card_less: "card (A - {j}) < card A" by linarith + + have "w (T (A, j)) = monom (monom_of_set (A - {j}) + sng j (k - card (A - {j}))) + ((- 1) ^ card (A - {j}))" by (simp add: w_def T_def) + also have "(- 1) ^ card (A - {j}) = ((- 1) ^ Suc (Suc (card (A - {j}))) :: 'a)" + by simp + also have "Suc (card (A - {j})) = card A" + using card by simp + also have "k - card (A - {j}) = Suc (k - card A)" + using \k > 0\ \card A \ k\ card_less by (subst card) auto + also have "monom_of_set (A - {j}) + sng j (Suc (k - card A)) = + monom_of_set A + sng j (k - card A)" + by (transfer fixing: A j k) (auto simp: fun_eq_iff) + also have "monom \ ((-1)^ Suc (card A)) = -w (A, j)" + by (simp add: w_def monom_uminus) + finally show ?thesis . + qed + + show ?thesis + proof (cases "j \ A") + case True + with aux[of A j] that show ?thesis by auto + next + case False + hence "snd (T x) \ fst (T x)" + by (auto simp: T_def split: prod.splits) + with aux[of "fst (T x)" "snd (T x)"] that show ?thesis by auto + qed + qed + + text \ + We can now show fairly easily that the sum is equal to zero. + \ + have *: "\ = {x\\. snd x \ fst x} \ {x\\. snd x \ fst x}" + by auto + have "(\x\\. w x) = (\x | x \ \ \ snd x \ fst x. w x) + (\x | x \ \ \ snd x \ fst x. w x)" + using \finite \\ by (subst *, subst sum.union_disjoint) auto + also have "(\x | x \ \ \ snd x \ fst x. w x) = (\x | x \ \ \ snd x \ fst x. w (T x))" + using sum.reindex_bij_betw[OF bij, of w] by simp + also have "\ = -(\x | x \ \ \ snd x \ fst x. w x)" + by (simp add: sum_negf) + finally show "(\x\\. w x) = 0" + by simp + qed + + finally show ?thesis . +qed + +text \ + The following variant of the theorem holds for \k > n\. Note that this is now a + linear recurrence relation with constant coefficients for $p_k$ in terms of + $e_0, \ldots, e_n$. +\ +corollary Girard_Newton': + assumes "finite X" and "k > card X" + shows "(\i\card X. (-1) ^ i * sym_mpoly X i * powsum_mpoly X (k - i)) = + (0 :: 'a :: comm_ring_1 mpoly)" +proof - + have "(0 :: 'a mpoly) = (\i = (\i\card X. 
(- 1) ^ i * sym_mpoly X i * powsum_mpoly X (k - i))" + using assms by (intro sum.mono_neutral_right) auto + finally show ?thesis .. +qed + +text \ + The following variant is the Newton--Girard Theorem solved for $e_k$, giving us + an explicit way to determine $e_k$ from $e_0, \ldots, e_{k-1}$ and $p_1, \ldots, p_k$: +\ +corollary sym_mpoly_recurrence: + assumes k: "k > 0" and "finite X" + shows "(sym_mpoly X k :: 'a :: field_char_0 mpoly) = + -smult (1 / of_nat k) (\i=1..k. (-1) ^ i * sym_mpoly X (k - i) * powsum_mpoly X i)" +proof - + define e p :: "nat \ 'a mpoly" where [simp]: "e = sym_mpoly X" "p = powsum_mpoly X" + have *: "0 = (-1) ^ k * of_nat k * e k + + (\i = smult (1 / of_nat k) (of_nat k) * e k + + smult (1 / of_nat k) (\iii=1..k. (-1) ^ i * e (k-i) * p i)" + by (intro sum.reindex_bij_witness[of _ "\i. k - i" "\i. k - i"]) + (auto simp: minus_one_power_iff) + finally show ?thesis unfolding e_p_def by algebra +qed + +text \ + Analogously, the following is the theorem solved for $p_k$, giving us a + way to determine $p_k$ from $e_0, \ldots, e_k$ and $p_1, \ldots, p_{k-1}$: +\ +corollary powsum_mpoly_recurrence: + assumes k: "k > 0" and X: "finite X" + shows "(powsum_mpoly X k :: 'a :: comm_ring_1 mpoly) = + (-1) ^ (k + 1) * of_nat k * sym_mpoly X k - + (\i=1.. 'a mpoly" where [simp]: "e = sym_mpoly X" "p = powsum_mpoly X" + have *: "0 = (-1) ^ k * of_nat k * e k + + (\ii=1.. + Again, if we assume $k > n$, the above takes a much simpler form and is, in fact, + a linear recurrence with constant coefficients: +\ +lemma powsum_mpoly_recurrence': + assumes k: "k > card X" and X: "finite X" + shows "(powsum_mpoly X k :: 'a :: comm_ring_1 mpoly) = + -(\i=1..card X. (-1) ^ i * sym_mpoly X i * powsum_mpoly X (k - i))" +proof - + define e p :: "nat \ 'a mpoly" where [simp]: "e = sym_mpoly X" "p = powsum_mpoly X" + have "p k = (-1) ^ (k + 1) * of_nat k * e k - (\i=1.. = -(\i=1..i=1..i=1..card X. (-1) ^ i * e i * p (k - i))" + using assms by (intro sum.mono_neutral_right) auto + finally show ?thesis by simp +qed + +end \ No newline at end of file diff --git a/thys/Power_Sum_Polynomials/Power_Sum_Polynomials_Library.thy b/thys/Power_Sum_Polynomials/Power_Sum_Polynomials_Library.thy new file mode 100644 --- /dev/null +++ b/thys/Power_Sum_Polynomials/Power_Sum_Polynomials_Library.thy @@ -0,0 +1,544 @@ +(* + File: Power_Sum_Polynomials_Library.thy + Author: Manuel Eberl, TU München +*) +section \Auxiliary material\ +theory Power_Sum_Polynomials_Library +imports + "Polynomial_Factorization.Fundamental_Theorem_Algebra_Factorized" + "Symmetric_Polynomials.Symmetric_Polynomials" + "HOL-Computational_Algebra.Computational_Algebra" +begin + +subsection \Miscellaneous\ + +lemma atLeastAtMost_nat_numeral: + "atLeastAtMost m (numeral k :: nat) = + (if m \ numeral k then insert (numeral k) (atLeastAtMost m (pred_numeral k)) + else {})" + by (simp add: numeral_eq_Suc atLeastAtMostSuc_conv) + +lemma sum_in_Rats [intro]: "(\x. x \ A \ f x \ \) \ sum f A \ \" + by (induction A rule: infinite_finite_induct) auto + +(* TODO Move *) +lemma (in monoid_mult) prod_list_distinct_conv_prod_set: + "distinct xs \ prod_list (map f xs) = prod f (set xs)" + by (induct xs) simp_all + +lemma (in monoid_mult) interv_prod_list_conv_prod_set_nat: + "prod_list (map f [m..i = 0..< length xs. xs ! i)" + using interv_prod_list_conv_prod_set_nat [of "(!) 
xs" 0 "length xs"] by (simp add: map_nth) + + +lemma gcd_poly_code_aux_reduce: + "gcd_poly_code_aux p 0 = normalize p" + "q \ 0 \ gcd_poly_code_aux p q = gcd_poly_code_aux q (primitive_part (pseudo_mod p q))" + by (subst gcd_poly_code_aux.simps; simp)+ + +lemma coprimeI_primes: + fixes a b :: "'a :: factorial_semiring" + assumes "a \ 0 \ b \ 0" + assumes "\p. prime p \ p dvd a \ p dvd b \ False" + shows "coprime a b" +proof (rule coprimeI) + fix d assume d: "d dvd a" "d dvd b" + with assms(1) have [simp]: "d \ 0" by auto + show "is_unit d" + proof (rule ccontr) + assume "\is_unit d" + then obtain p where p: "prime p" "p dvd d" + using prime_divisor_exists[of d] by auto + from assms(2)[of p] and p and d show False + using dvd_trans by auto + qed +qed + +lemma coprime_pderiv_imp_squarefree: + assumes "coprime p (pderiv p)" + shows "squarefree p" +proof (rule squarefreeI) + fix d assume d: "d ^ 2 dvd p" + then obtain q where q: "p = d ^ 2 * q" + by (elim dvdE) + hence "d dvd p" "d dvd pderiv p" + by (auto simp: pderiv_mult pderiv_power_Suc numeral_2_eq_2) + with assms show "is_unit d" + using not_coprimeI by blast +qed + +lemma squarefree_field_poly_iff: + fixes p :: "'a :: {field_char_0,euclidean_ring_gcd,semiring_gcd_mult_normalize} poly" + assumes [simp]: "p \ 0" + shows "squarefree p \ coprime p (pderiv p)" +proof + assume "squarefree p" + show "coprime p (pderiv p)" + proof (rule coprimeI_primes) + fix d assume d: "d dvd p" "d dvd pderiv p" "prime d" + from d(1) obtain q where q: "p = d * q" + by (elim dvdE) + from d(2) and q have "d dvd q * pderiv d" + by (simp add: pderiv_mult dvd_add_right_iff) + with \prime d\ have "d dvd q \ d dvd pderiv d" + using prime_dvd_mult_iff by blast + thus False + proof + assume "d dvd q" + hence "d ^ 2 dvd p" + by (auto simp: q power2_eq_square) + with \squarefree p\ show False + using d(3) not_prime_unit squarefreeD by blast + next + assume "d dvd pderiv d" + hence "Polynomial.degree d = 0" by simp + moreover have "d \ 0" using d by auto + ultimately show False + using d(3) is_unit_iff_degree not_prime_unit by blast + qed + qed auto +qed (use coprime_pderiv_imp_squarefree[of p] in auto) + +lemma coprime_pderiv_imp_rsquarefree: + assumes "coprime (p :: 'a :: field_char_0 poly) (pderiv p)" + shows "rsquarefree p" + unfolding rsquarefree_roots +proof safe + fix x assume "poly p x = 0" "poly (pderiv p) x = 0" + hence "[:-x, 1:] dvd p" "[:-x, 1:] dvd pderiv p" + by (auto simp: poly_eq_0_iff_dvd) + with assms have "is_unit [:-x, 1:]" + using not_coprimeI by blast + thus False by auto +qed + +lemma poly_of_nat [simp]: "poly (of_nat n) x = of_nat n" + by (induction n) auto + +lemma poly_of_int [simp]: "poly (of_int n) x = of_int n" + by (cases n) auto + +lemma order_eq_0_iff: "p \ 0 \ order x p = 0 \ poly p x \ 0" + by (auto simp: order_root) + +lemma order_pos_iff: "p \ 0 \ order x p > 0 \ poly p x = 0" + by (auto simp: order_root) + +lemma order_prod: + assumes "\x. x \ A \ f x \ 0" + shows "order x (\y\A. f y) = (\y\A. 
order x (f y))" + using assms by (induction A rule: infinite_finite_induct) (auto simp: order_mult) + +lemma order_prod_mset: + assumes "0 \# A" + shows "order x (prod_mset A) = sum_mset (image_mset (order x) A)" + using assms by (induction A) (auto simp: order_mult) + +lemma order_prod_list: + assumes "0 \ set xs" + shows "order x (prod_list xs) = sum_list (map (order x) xs)" + using assms by (induction xs) (auto simp: order_mult) + +lemma order_power: "p \ 0 \ order x (p ^ n) = n * order x p" + by (induction n) (auto simp: order_mult) + + +lemma smult_0_right [simp]: "MPoly_Type.smult p 0 = 0" + by (transfer, transfer) auto + +lemma mult_smult_right [simp]: + fixes c :: "'a :: comm_semiring_0" + shows "p * MPoly_Type.smult c q = MPoly_Type.smult c (p * q)" + by (simp add: smult_conv_mult mult_ac) + +lemma mapping_single_eq_iff [simp]: + "Poly_Mapping.single a b = Poly_Mapping.single c d \ b = 0 \ d = 0 \ a = c \ b = d" + by transfer (unfold fun_eq_iff when_def, metis) + +lemma monom_of_set_plus_monom_of_set: + assumes "A \ B = {}" "finite A" "finite B" + shows "monom_of_set A + monom_of_set B = monom_of_set (A \ B)" + using assms by transfer (auto simp: fun_eq_iff) + +lemma mpoly_monom_0_eq_Const: "monom 0 c = Const c" + by (intro mpoly_eqI) (auto simp: coeff_monom when_def mpoly_coeff_Const) + +lemma mpoly_Const_0 [simp]: "Const 0 = 0" + by (intro mpoly_eqI) (auto simp: mpoly_coeff_Const mpoly_coeff_0) + +lemma mpoly_Const_1 [simp]: "Const 1 = 1" + by (intro mpoly_eqI) (auto simp: mpoly_coeff_Const mpoly_coeff_1) + +lemma mpoly_Const_uminus: "Const (-a) = -Const a" + by (intro mpoly_eqI) (auto simp: mpoly_coeff_Const) + +lemma mpoly_Const_add: "Const (a + b) = Const a + Const b" + by (intro mpoly_eqI) (auto simp: mpoly_coeff_Const) + +lemma mpoly_Const_mult: "Const (a * b) = Const a * Const b" + unfolding mpoly_monom_0_eq_Const [symmetric] mult_monom by simp + +lemma mpoly_Const_power: "Const (a ^ n) = Const a ^ n" + by (induction n) (auto simp: mpoly_Const_mult) + +lemma of_nat_mpoly_eq: "of_nat n = Const (of_nat n)" +proof (induction n) + case 0 + have "0 = (Const 0 :: 'a mpoly)" + by (intro mpoly_eqI) (auto simp: mpoly_coeff_Const) + thus ?case + by simp +next + case (Suc n) + have "1 + Const (of_nat n) = Const (1 + of_nat n)" + by (intro mpoly_eqI) (auto simp: mpoly_coeff_Const mpoly_coeff_1) + thus ?case + using Suc by auto +qed + +lemma insertion_of_nat [simp]: "insertion f (of_nat n) = of_nat n" + by (simp add: of_nat_mpoly_eq) + +lemma insertion_monom_of_set [simp]: + "insertion f (monom (monom_of_set X) c) = c * (\i\X. f i)" +proof (cases "finite X") + case [simp]: True + have "insertion f (monom (monom_of_set X) c) = c * (\a. f a ^ (if a \ X then 1 else 0))" + by (auto simp: lookup_monom_of_set) + also have "(\a. f a ^ (if a \ X then 1 else 0)) = (\i\X. f i ^ (if i \ X then 1 else 0))" + by (intro Prod_any.expand_superset) auto + also have "\ = (\i\X. f i)" + by (intro prod.cong) auto + finally show ?thesis . +qed (auto simp: lookup_monom_of_set) + + +(* TODO: Move! Version in AFP is too weak! *) +lemma symmetric_mpoly_symmetric_sum: + assumes "\\. \ permutes A \ g \ permutes X" + assumes "\x \. x \ X \ \ permutes A \ mpoly_map_vars \ (f x) = f (g \ x)" + shows "symmetric_mpoly A (\x\X. f x)" + unfolding symmetric_mpoly_def +proof safe + fix \ assume \: "\ permutes A" + have "mpoly_map_vars \ (sum f X) = (\x\X. mpoly_map_vars \ (f x))" + by simp + also have "\ = (\x\X. f (g \ x))" + by (intro sum.cong assms \ refl) + also have "\ = (\x\g \`X. 
f x)" + using assms(1)[OF \] by (subst sum.reindex) (auto simp: permutes_inj_on) + also have "g \ ` X = X" + using assms(1)[OF \] by (simp add: permutes_image) + finally show "mpoly_map_vars \ (sum f X) = sum f X" . +qed + +lemma sym_mpoly_0 [simp]: + assumes "finite A" + shows "sym_mpoly A 0 = 1" + using assms by (transfer, transfer) (auto simp: fun_eq_iff when_def) + +lemma sym_mpoly_eq_0 [simp]: + assumes "k > card A" + shows "sym_mpoly A k = 0" +proof (transfer fixing: A k, transfer fixing: A k, intro ext) + fix mon + have "\(finite A \ (\Y\A. card Y = k \ mon = monom_of_set Y))" + proof safe + fix Y assume Y: "finite A" "Y \ A" "k = card Y" "mon = monom_of_set Y" + hence "card Y \ card A" by (intro card_mono) auto + with Y and assms show False by simp + qed + thus "(if finite A \ (\Y\A. card Y = k \ mon = monom_of_set Y) then 1 else 0) = 0" + by auto +qed + +lemma coeff_sym_mpoly_monom_of_set_eq_0: + assumes "finite X" "Y \ X" "card Y \ k" + shows "MPoly_Type.coeff (sym_mpoly X k) (monom_of_set Y) = 0" + using assms finite_subset[of _ X] by (auto simp: coeff_sym_mpoly) + +lemma coeff_sym_mpoly_monom_of_set_eq_0': + assumes "finite X" "\Y \ X" "finite Y" + shows "MPoly_Type.coeff (sym_mpoly X k) (monom_of_set Y) = 0" + using assms finite_subset[of _ X] by (auto simp: coeff_sym_mpoly) + + +subsection \The set of roots of a univariate polynomial\ + +lift_definition poly_roots :: "'a :: idom poly \ 'a multiset" is + "\p x. if p = 0 then 0 else order x p" +proof - + fix p :: "'a poly" + show "(\x. if p = 0 then 0 else order x p) \ multiset" + by (cases "p = 0") (auto simp: multiset_def order_pos_iff poly_roots_finite) +qed + +lemma poly_roots_0 [simp]: "poly_roots 0 = {#}" + by transfer auto + +lemma poly_roots_1 [simp]: "poly_roots 1 = {#}" + by transfer auto + +lemma count_poly_roots [simp]: + assumes "p \ 0" + shows "count (poly_roots p) x = order x p" + using assms by transfer auto + +lemma in_poly_roots_iff [simp]: "p \ 0 \ x \# poly_roots p \ poly p x = 0" + by (subst count_greater_zero_iff [symmetric], subst count_poly_roots) (auto simp: order_pos_iff) + +lemma set_mset_poly_roots: "p \ 0 \ set_mset (poly_roots p) = {x. poly p x = 0}" + using in_poly_roots_iff[of p] by blast + +lemma count_poly_roots': "count (poly_roots p) x = (if p = 0 then 0 else order x p)" + by transfer' auto + +lemma poly_roots_const [simp]: "poly_roots [:c:] = {#}" + by (intro multiset_eqI) (auto simp: count_poly_roots' order_eq_0_iff) + +lemma poly_roots_linear [simp]: "poly_roots [:-x, 1:] = {#x#}" + by (intro multiset_eqI) (auto simp: count_poly_roots' order_eq_0_iff) + +lemma poly_roots_monom [simp]: "c \ 0 \ poly_roots (Polynomial.monom c n) = replicate_mset n 0" + by (intro multiset_eqI) (auto simp: count_poly_roots' order_eq_0_iff poly_monom) + +lemma poly_roots_smult [simp]: "c \ 0 \ poly_roots (Polynomial.smult c p) = poly_roots p" + by (intro multiset_eqI) (auto simp: count_poly_roots' order_smult) + +lemma poly_roots_mult: "p \ 0 \ q \ 0 \ poly_roots (p * q) = poly_roots p + poly_roots q" + by (intro multiset_eqI) (auto simp: count_poly_roots' order_mult) + +lemma poly_roots_prod: + assumes "\x. x \ A \ f x \ 0" + shows "poly_roots (prod f A) = (\x\A. 
poly_roots (f x))" + using assms by (induction A rule: infinite_finite_induct) (auto simp: poly_roots_mult) + +lemma poly_roots_prod_mset: + assumes "0 \# A" + shows "poly_roots (prod_mset A) = sum_mset (image_mset poly_roots A)" + using assms by (induction A) (auto simp: poly_roots_mult) + +lemma poly_roots_prod_list: + assumes "0 \ set xs" + shows "poly_roots (prod_list xs) = sum_list (map poly_roots xs)" + using assms by (induction xs) (auto simp: poly_roots_mult) + +lemma poly_roots_power: "p \ 0 \ poly_roots (p ^ n) = repeat_mset n (poly_roots p)" + by (induction n) (auto simp: poly_roots_mult) + +lemma rsquarefree_poly_roots_eq: + assumes "rsquarefree p" + shows "poly_roots p = mset_set {x. poly p x = 0}" +proof (rule multiset_eqI) + fix x :: 'a + from assms show "count (poly_roots p) x = count (mset_set {x. poly p x = 0}) x" + by (cases "poly p x = 0") (auto simp: poly_roots_finite order_eq_0_iff rsquarefree_def) +qed + +lemma rsquarefree_imp_distinct_roots: + assumes "rsquarefree p" and "mset xs = poly_roots p" + shows "distinct xs" +proof (cases "p = 0") + case [simp]: False + have *: "mset xs = mset_set {x. poly p x = 0}" + using assms by (simp add: rsquarefree_poly_roots_eq) + hence "set_mset (mset xs) = set_mset (mset_set {x. poly p x = 0})" + by (simp only: ) + hence [simp]: "set xs = {x. poly p x = 0}" + by (simp add: poly_roots_finite) + from * show ?thesis + by (subst distinct_count_atmost_1) (auto simp: poly_roots_finite) +qed (use assms in auto) + +lemma poly_roots_factorization: + fixes p c A + assumes [simp]: "c \ 0" + defines "p \ Polynomial.smult c (prod_mset (image_mset (\x. [:-x, 1:]) A))" + shows "poly_roots p = A" +proof - + have "poly_roots p = poly_roots (\x\#A. [:-x, 1:])" + by (auto simp: p_def) + also have "\ = A" + by (subst poly_roots_prod_mset) (auto simp: image_mset.compositionality o_def) + finally show ?thesis . +qed + +lemma fundamental_theorem_algebra_factorized': + fixes p :: "complex poly" + shows "p = Polynomial.smult (Polynomial.lead_coeff p) + (prod_mset (image_mset (\x. [:-x, 1:]) (poly_roots p)))" +proof (cases "p = 0") + case [simp]: False + obtain xs where + xs: "Polynomial.smult (Polynomial.lead_coeff p) (\x\xs. [:-x, 1:]) = p" + "length xs = Polynomial.degree p" + using fundamental_theorem_algebra_factorized[of p] by auto + define A where "A = mset xs" + + note xs(1) + also have "(\x\xs. [:-x, 1:]) = prod_mset (image_mset (\x. [:-x, 1:]) A)" + unfolding A_def by (induction xs) auto + finally have *: "Polynomial.smult (Polynomial.lead_coeff p) (\x\#A. [:- x, 1:]) = p" . + also have "A = poly_roots p" + using poly_roots_factorization[of "Polynomial.lead_coeff p" A] + by (subst * [symmetric]) auto + finally show ?thesis .. +qed auto + +lemma poly_roots_eq_imp_eq: + fixes p q :: "complex poly" + assumes "Polynomial.lead_coeff p = Polynomial.lead_coeff q" + assumes "poly_roots p = poly_roots q" + shows "p = q" +proof (cases "p = 0 \ q = 0") + case False + hence [simp]: "p \ 0" "q \ 0" + by auto + have "p = Polynomial.smult (Polynomial.lead_coeff p) + (prod_mset (image_mset (\x. [:-x, 1:]) (poly_roots p)))" + by (rule fundamental_theorem_algebra_factorized') + also have "\ = Polynomial.smult (Polynomial.lead_coeff q) + (prod_mset (image_mset (\x. [:-x, 1:]) (poly_roots q)))" + by (simp add: assms) + also have "\ = q" + by (rule fundamental_theorem_algebra_factorized' [symmetric]) + finally show ?thesis . +qed (use assms in auto) + +lemma Sum_any_zeroI': "(\x. P x \ f x = 0) \ Sum_any (\x. 
f x when P x) = 0" + by (auto simp: Sum_any.expand_set) + +(* TODO: This was not really needed here, but it is important nonetheless. + It should go in the Symmetric_Polynomials entry. *) +lemma sym_mpoly_insert: + assumes "finite X" "x \ X" + shows "(sym_mpoly (insert x X) (Suc k) :: 'a :: semiring_1 mpoly) = + monom (monom_of_set {x}) 1 * sym_mpoly X k + sym_mpoly X (Suc k)" (is "?lhs = ?A + ?B") +proof (rule mpoly_eqI) + fix mon + show "coeff ?lhs mon = coeff (?A + ?B) mon" + proof (cases "\i. lookup mon i \ 1 \ (i \ insert x X \ lookup mon i = 0)") + case False + then obtain i where i: "lookup mon i > 1 \ i \ insert x X \ lookup mon i > 0" + by (auto simp: not_le) + + have "coeff ?A mon = prod_fun (coeff (monom (monom_of_set {x}) 1)) + (coeff (sym_mpoly X k)) mon" + by (simp add: coeff_mpoly_times) + also have "\ = (\l. \q. coeff (monom (monom_of_set {x}) 1) l * coeff (sym_mpoly X k) q + when mon = l + q)" + unfolding prod_fun_def + by (intro Sum_any.cong, subst Sum_any_right_distrib, force) + (auto simp: Sum_any_right_distrib when_def intro!: Sum_any.cong) + also have "\ = 0" + proof (rule Sum_any_zeroI, rule Sum_any_zeroI') + fix ma mb assume *: "mon = ma + mb" + show "coeff (monom (monom_of_set {x}) (1::'a)) ma * coeff (sym_mpoly X k) mb = 0" + proof (cases "i = x") + case [simp]: True + show ?thesis + proof (cases "lookup mb i > 0") + case True + hence "coeff (sym_mpoly X k) mb = 0" using \x \ X\ + by (auto simp: coeff_sym_mpoly lookup_monom_of_set split: if_splits) + thus ?thesis + using mult_not_zero by blast + next + case False + hence "coeff (monom (monom_of_set {x}) 1) ma = 0" + using i by (auto simp: coeff_monom when_def * lookup_add) + thus ?thesis + using mult_not_zero by blast + qed + next + case [simp]: False + show ?thesis + proof (cases "lookup ma i > 0") + case False + hence "lookup mb i = lookup mon i" + using * by (auto simp: lookup_add) + hence "coeff (sym_mpoly X k) mb = 0" using i + by (auto simp: coeff_sym_mpoly lookup_monom_of_set split: if_splits) + thus ?thesis + using mult_not_zero by blast + next + case True + hence "coeff (monom (monom_of_set {x}) 1) ma = 0" + using i by (auto simp: coeff_monom when_def * lookup_add) + thus ?thesis + using mult_not_zero by blast + qed + qed + qed + finally have "coeff ?A mon = 0" . + moreover from False have "coeff ?lhs mon = 0" + by (subst coeff_sym_mpoly) (auto simp: lookup_monom_of_set split: if_splits) + moreover from False have "coeff (sym_mpoly X (Suc k)) mon = 0" + by (subst coeff_sym_mpoly) (auto simp: lookup_monom_of_set split: if_splits) + ultimately show ?thesis + by auto + next + case True + define A where "A = keys mon" + have A: "A \ insert x X" + using True by (auto simp: A_def) + have [simp]: "mon = monom_of_set A" + unfolding A_def using True by transfer (force simp: fun_eq_iff le_Suc_eq) + have "finite A" + using finite_subset A assms by blast + show ?thesis + proof (cases "x \ A") + case False + have "coeff ?A mon = prod_fun (coeff (monom (monom_of_set {x}) 1)) + (coeff (sym_mpoly X k)) (monom_of_set A)" + by (simp add: coeff_mpoly_times) + also have "\ = (\l. \q. 
coeff (monom (monom_of_set {x}) 1) l * coeff (sym_mpoly X k) q + when monom_of_set A = l + q)" + unfolding prod_fun_def + by (intro Sum_any.cong, subst Sum_any_right_distrib, force) + (auto simp: Sum_any_right_distrib when_def intro!: Sum_any.cong) + also have "\ = 0" + proof (rule Sum_any_zeroI, rule Sum_any_zeroI') + fix ma mb assume *: "monom_of_set A = ma + mb" + hence "keys ma \ A" + using \finite A\ by transfer (auto simp: fun_eq_iff split: if_splits) + thus "coeff (monom (monom_of_set {x}) (1::'a)) ma * coeff (sym_mpoly X k) mb = 0" + using \x \ A\ by (auto simp: coeff_monom when_def) + qed + finally show ?thesis + using False A assms finite_subset[of _ "insert x X"] finite_subset[of _ X] + by (auto simp: coeff_sym_mpoly) + next + case True + have "mon = monom_of_set {x} + monom_of_set (A - {x})" + using \x \ A\ \finite A\ by (auto simp: monom_of_set_plus_monom_of_set) + also have "coeff ?A \ = coeff (sym_mpoly X k) (monom_of_set (A - {x}))" + by (subst coeff_monom_mult) auto + also have "\ = (if card A = Suc k then 1 else 0)" + proof (cases "card A = Suc k") + case True + thus ?thesis + using assms \finite A\ \x \ A\ A + by (subst coeff_sym_mpoly_monom_of_set) auto + next + case False + thus ?thesis + using assms \x \ A\ A \finite A\ card_Suc_Diff1[of A x] + by (subst coeff_sym_mpoly_monom_of_set_eq_0) auto + qed + moreover have "coeff ?B (monom_of_set A) = 0" + using assms \x \ A\ \finite A\ + by (subst coeff_sym_mpoly_monom_of_set_eq_0') auto + moreover have "coeff ?lhs (monom_of_set A) = (if card A = Suc k then 1 else 0)" + using assms A \finite A\ finite_subset[of _ "insert x X"] by (auto simp: coeff_sym_mpoly) + ultimately show ?thesis by simp + qed + qed +qed + + +end \ No newline at end of file diff --git a/thys/Power_Sum_Polynomials/Power_Sum_Puzzle.thy b/thys/Power_Sum_Polynomials/Power_Sum_Puzzle.thy new file mode 100644 --- /dev/null +++ b/thys/Power_Sum_Polynomials/Power_Sum_Puzzle.thy @@ -0,0 +1,476 @@ +(* + File: Power_Sum_Puzzle.thy + Author: Manuel Eberl, TU München +*) +section \Power sum puzzles\ +theory Power_Sum_Puzzle +imports + Power_Sum_Polynomials + "Polynomial_Factorization.Rational_Root_Test" +begin + +subsection \General setting and results\ + +text \ + We now consider the following situation: Given unknown complex numbers $x_1,\ldots,x_n$, + define $p_k = x_1^k + \ldots + x_n^k$. Also, define $e_k := e_k(x_1,\ldots,x_n)$ where + $e_k(X_1,\ldots,X_n)$ is the $k$-th elementary symmetric polynomial. + + What is the relationship between the sequences $e_k$ and $p_k$; in particular, + how can we determine one from the other? +\ +locale power_sum_puzzle = + fixes x :: "nat \ complex" + fixes n :: nat +begin + +text \ + We first introduce the notation $p_k := x_1 ^ k + \ldots + x_n ^ k$: +\ +definition p where "p k = (\i + Similarly, we introduce the notation $e_k = e_k(x_1,\ldots, x_n)$ where + $e_k(X_1,\ldots,X_n)$ is the $k$-th elementary symmetric polynomial (i.\,e. the sum of + all monomials that can be formed by taking the product of exactly $k$ distinct variables). +\ +definition e where "e k = (\Y | Y \ {.. card Y = k. prod x Y)" + +lemma e_altdef: "e k = insertion x (sym_mpoly {.. + It is clear that $e_k$ vanishes for $k > n$. 
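+ (To illustrate the definition: for $n = 3$ one has $e_0 = 1$, $e_1 = x_0 + x_1 + x_2$, $e_2 = x_0 x_1 + x_0 x_2 + x_1 x_2$ and $e_3 = x_0 x_1 x_2$, while $e_k = 0$ for every $k \geq 4$.)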
+\ +lemma e_eq_0 [simp]: "k > n \ e k = 0" + by (simp add: e_altdef) + +lemma e_0 [simp]: "e 0 = 1" + by (simp add: e_altdef) + + +text \ + The recurrences we got from the Girard--Newton Theorem earlier now directly give us + analogous recurrences for $e_k$ and $p_k$: +\ +lemma e_recurrence: + assumes k: "k > 0" + shows "e k = -(\i=1..k. (- 1) ^ i * e (k - i) * p i) / of_nat k" + using assms unfolding e_altdef p_altdef + by (subst sym_mpoly_recurrence) + (auto simp: insertion_sum insertion_add insertion_mult insertion_power insertion_sym_mpoly) + +lemma p_recurrence: + assumes k: "k > 0" + shows "p k = -of_nat k * (-1) ^ k * e k - (\i=1.. n" + shows "p k = -(\i=1..n. (-1) ^ i * e i * p (k - i))" + using assms unfolding e_altdef p_altdef + by (subst powsum_mpoly_recurrence') + (auto simp: insertion_sum insertion_add insertion_mult insertion_diff + insertion_power insertion_sym_mpoly) + + +text \ + It is clear from this recurrence that if $p_1$ to $p_n$ are rational, then so are the $e_k$: +\ +lemma e_in_Rats: + assumes "\k. k \ {1..n} \ p k \ \" + shows "e k \ \" +proof (cases "k \ n") + case True + thus ?thesis + proof (induction k rule: less_induct) + case (less k) + show ?case + proof (cases "k = 0") + case False + thus ?thesis using assms less + by (subst e_recurrence) (auto intro!: Rats_divide) + qed auto + qed +qed auto + +text \ + Analogously, if $p_1$ to $p_n$ are rational, then so are all the other $p_k$: +\ +lemma p_in_Rats: + assumes "\k. k \ {1..n} \ p k \ \" + shows "p k \ \" +proof (induction k rule: less_induct) + case (less k) + consider "k = 0" | "k \ {1..n}" | "k > n" + by force + thus ?case + proof cases + assume "k > n" + thus ?thesis + using less assms by (subst p_recurrence'') (auto intro!: sum_in_Rats Rats_mult e_in_Rats) + qed (use assms in auto) +qed + + +text \ + Next, we define the unique monic polynomial that has $x_1, \ldots, x_n$ as its roots + (respecting multiplicity): +\ +definition Q :: "complex poly" where "Q = (\ii. [:-x i, 1:]"] + by (simp add: Q_def degree_prod_eq_sum_degree) + +text \ + By Vieta's Theorem, we then have: + \[Q(X) = \sum_{k=0}^n (-1)^{n-k} e_{n-k} X^k\] + In other words: The above allows us to determine the $x_1, \ldots, x_n$ explicitly. + They are, in fact, precisely the roots of the above polynomial (respecting multiplicity). + Since this polynomial depends only on the $e_k$, which are in turn determined by + $p_1, \ldots, p_n$, this means that these are the \<^emph>\only\ solutions of this puzzle + (up to permutation of the $x_i$). +\ +lemma coeff_Q: "Polynomial.coeff Q k = (if k > n then 0 else (-1) ^ (n - k) * e (n - k))" +proof (cases "k \ n") + case True + thus ?thesis + using coeff_poly_from_roots[of "{..k\n. Polynomial.monom ((-1) ^ (n - k) * e (n - k)) k)" + by (subst poly_as_sum_of_monoms [symmetric]) (simp add: coeff_Q) + +text \ + The following theorem again shows that $x_1, \ldots, x_n$ are precisely the roots + of \<^term>\Q\, respecting multiplicity. +\ +theorem mset_x_eq_poly_roots_Q: "{#x i. i \# mset_set {..i = {#x i. i \# mset_set {..Existence of solutions\ + +text \ + So far, we have assumed a solution to the puzzle and then shown the properties that this + solution must fulfil. However, we have not yet shown that there \<^emph>\is\ a solution. + We will do that now. + + Let $n$ be a natural number and $f_k$ some sequence of complex numbers. We will show that + there are $x_1, \ldots, x_n$ so that $x_1 ^ k + \ldots + x_n ^ k = f_k$ for any $1\leq k\leq n$. 
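+ In outline, the construction below proceeds in two steps: we first recover candidate elementary symmetric functions $e'_k$ from the prescribed values $f_k$ via the Girard--Newton recurrence, and we then take $x_1, \ldots, x_n$ to be the roots of the monic polynomial $\sum_{k=0}^{n} (-1)^{n-k} e'_{n-k} X^k$ built from them.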
+\ +locale power_sum_puzzle_existence = + fixes f :: "nat \ complex" and n :: nat +begin + +text \ + First, we define a sequence of numbers \e'\ analogously to the sequence \e\ before, + except that we replace all occurrences of the power sum $p_k$ with $f_k$ (recall that in the end + we want $p_k = f_k$). +\ +fun e' :: "nat \ complex" + where "e' k = (if k = 0 then 1 else if k > n then 0 + else -(\i=1..k. (-1) ^ i * e' (k - i) * f i) / of_nat k)" + +lemmas [simp del] = e'.simps + +lemma e'_0 [simp]: "e' 0 = 1" + by (simp add: e'.simps) + +lemma e'_eq_0 [simp]: "k > n \ e' k = 0" + by (auto simp: e'.simps) + +text \ + Just as before, we can show the following recurrence for \f\ in terms of \e'\: +\ +lemma f_recurrence: + assumes k: "k > 0" "k \ n" + shows "f k = -of_nat k * (-1) ^ k * e' k - (\i=1..i=1..k. (- 1) ^ i * e' (k - i) * f i)" + using assms by (subst e'.simps) (simp add: field_simps) + hence "(-1)^k * (-of_nat k * e' k) = (-1)^k * (\i=1..k. (- 1) ^ i * e' (k - i) * f i)" + by simp + also have "\ = f k + (-1) ^ k * (\i=1..i=1..i=1.. = (\i=1..i. k - i" "\i. k - i"]) auto + finally show ?thesis + by (simp add: algebra_simps) +qed + +text \ + We now define a polynomial whose roots will be precisely the solution $x_1, \ldots, x_n$ to our + problem. +\ +lift_definition Q' :: "complex poly" is "\k. if k > n then 0 else (-1) ^ (n - k) * e' (n - k)" + using eventually_gt_at_top[of n] unfolding cofinite_eq_sequentially + by eventually_elim auto + +lemma coeff_Q': "Polynomial.coeff Q' k = (if k > n then 0 else (-1) ^ (n - k) * e' (n - k))" + by transfer auto + +lemma lead_coeff_Q': "Polynomial.coeff Q' n = 1" + by (simp add: coeff_Q') + +lemma degree_Q' [simp]: "Polynomial.degree Q' = n" +proof (rule antisym) + show "Polynomial.degree Q' \ n" + by (rule le_degree) (auto simp: coeff_Q') + show "Polynomial.degree Q' \ n" + by (rule degree_le) (auto simp: coeff_Q') +qed + +text \ + Since the complex numbers are algebraically closed, this polynomial splits into + linear factors: +\ +definition Root :: "nat \ complex" + where "Root = (SOME Root. Q' = (\iir\rs. [:-r, 1:]) = Q'" "length rs = n" + using fundamental_theorem_algebra_factorized[of Q'] lead_coeff_Q' by auto + have "Q' = (\r\rs. [:-r, 1:])" + by (simp add: rs) + also have "\ = (\r=0..Root. Q' = (\i + We can therefore now use the results from before for these $x_1, \ldots, x_n$. +\ +sublocale power_sum_puzzle Root n . + +text \ + Vieta's theorem gives us an expression for the coefficients of \Q'\ in terms of + $e_k(x_1,\ldots,x_n)$. This shows that our \e'\ is indeed exactly the same as \e\. +\ +lemma e'_eq_e: "e' k = e k" +proof (cases "k \ n") + case True + from True have "e' k = (-1) ^ k * poly.coeff Q' (n - k)" + by (simp add: coeff_Q') + also have "Q' = (\x (n - k) = e k" + using True coeff_poly_from_roots[of "{.. + It then follows by a simple induction that $p_k = f_k$ for $1\leq k\leq n$, as intended: +\ +lemma p_eq_f: + assumes "k > 0" "k \ n" + shows "p k = f k" + using assms +proof (induction k rule: less_induct) + case (less k) + thus "p k = f k" + using p_recurrence[of k] f_recurrence[of k] less by (simp add: e'_eq_e) +qed + +end + +text \ + Here is a more condensed form of the above existence theorem: +\ +theorem power_sum_puzzle_has_solution: + fixes f :: "nat \ complex" + shows "\Root. \k\{1..n}. (\ik\{1..n}. (\iA specific puzzle\ + +text \ + We now look at one particular instance of this puzzle, which was given as an exercise in + \<^emph>\Abstract Algebra\ by Dummit and Foote (Exercise 23 in Section 14.6)~\cite{dummit}. 
+ + Suppose we know that + $x + y + z = 1$, $x^2 + y^2 + z^2 = 2$, and $x^3 + y^3 + z^3 = 3$. Then what is + $x^5+y^5+z^5$? What about any arbitrary $x^n+y^n+z^n$? +\ +locale power_sum_puzzle_example = + fixes x y z :: complex + assumes xyz: "x + y + z = 1" + "x^2 + y^2 + z^2 = 2" + "x^3 + y^3 + z^3 = 3" +begin + +text \ + We reuse the results we have shown in the general case before. +\ +definition f where "f n = [x,y,z] ! n" + +sublocale power_sum_puzzle f 3 . + +text \ + We can simplify \<^term>\p\ a bit more now. +\ +lemma p_altdef': "p k = x ^ k + y ^ k + z ^ k" + unfolding p_def f_def by (simp add: eval_nat_numeral) + +lemma p_base [simp]: "p (Suc 0) = 1" "p 2 = 2" "p 3 = 3" + using xyz by (simp_all add: p_altdef') + +text \ + We can easily compute all the non-zero values of \<^term>\e\ recursively: +\ +lemma e_Suc_0 [simp]: "e (Suc 0) = 1" + by (subst e_recurrence; simp) + +lemma e_2 [simp]: "e 2 = -1/2" + by (subst e_recurrence; simp add: atLeastAtMost_nat_numeral) + +lemma e_3 [simp]: "e 3 = 1/6" + by (subst e_recurrence; simp add: atLeastAtMost_nat_numeral) + +text \ + Plugging in all the values, the recurrence relation for \<^term>\p\ now looks like this: +\ +lemma p_recurrence''': "k > 3 \ p k = p (k-3) / 6 + p (k-2) / 2 + p (k-1)" + using p_recurrence''[of k] by (simp add: atLeastAtMost_nat_numeral) + +text \ + Also note again that all $p_k$ are rational: +\ +lemma p_in_Rats': "p k \ \" +proof - + have *: "{1..3} = {1, 2, (3::nat)}" + by auto + also have "\k\\. p k \ \" + by auto + finally show ?thesis + using p_in_Rats[of k] by simp +qed + +text \ + The above recurrence has the characteristic polynomial $X^3 - X^2 - \frac{1}{2} X - \frac{1}{6}$ + (which is exactly our \<^term>\Q\), so we know that can now specify $x$, $y$, and $z$ + more precisely: They are the roots of that polynomial (in unspecified order). +\ + +lemma xyz_eq: "{#x, y, z#} = poly_roots [:-1/6, -1/2, -1, 1:]" +proof - + have "image_mset f (mset_set {..<3}) = poly_roots Q" + using mset_x_eq_poly_roots_Q . + also have "image_mset f (mset_set {..<3}) = {#x, y, z#}" + by (simp add: numeral_3_eq_3 lessThan_Suc f_def Multiset.union_ac) + also have "Q = [:-1/6, -1/2, -1, 1:]" + by (simp add: Q_altdef atMost_nat_numeral Polynomial.monom_altdef + power3_eq_cube power2_eq_square) + finally show ?thesis . +qed + +text \ + Using the rational root test, we can easily show that $x$, $y$, and $z$ are irrational. +\ +lemma xyz_irrational: "set_mset (poly_roots [:-1/6, -1/2, -1, 1::complex:]) \ \ = {}" +proof - + define p :: "rat poly" where "p = [:-1/6, -1/2, -1, 1:]" + have "rational_root_test p = None" + unfolding p_def by code_simp + hence "\(\x::rat. poly p x = 0)" + by (rule rational_root_test) + hence "\(\x\\. poly (map_poly of_rat p) x = (0 :: complex))" + by (auto simp: Rats_def) + also have "map_poly of_rat p = [:-1/6, -1/2, -1, 1 :: complex:]" + by (simp add: p_def of_rat_minus of_rat_divide) + finally show ?thesis + by auto +qed + + +text \ + This polynomial is \<^emph>\squarefree\, so these three roots are, in fact, unique (so that there are + indeed $3! = 6$ possible permutations). 
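+ (As a quick hand check of the recurrence above: $p_4 = p_1/6 + p_2/2 + p_3 = \frac{1}{6} + 1 + 3 = \frac{25}{6}$ and $p_5 = p_2/6 + p_3/2 + p_4 = \frac{1}{3} + \frac{3}{2} + \frac{25}{6} = 6$; both values are also proved formally below.)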
+\ +lemma rsquarefree: "rsquarefree [:-1/6, -1/2, -1, 1 :: complex:]" + by (rule coprime_pderiv_imp_rsquarefree) + (auto simp: pderiv_pCons coprime_iff_gcd_eq_1 gcd_poly_code gcd_poly_code_def content_def + primitive_part_def gcd_poly_code_aux_reduce pseudo_mod_def pseudo_divmod_def + Let_def Polynomial.monom_altdef normalize_poly_def) + +lemma distinct_xyz: "distinct [x, y, z]" + by (rule rsquarefree_imp_distinct_roots[OF rsquarefree]) (simp_all add: xyz_eq) + + +text \ + While these roots \<^emph>\can\ be written more explicitly in radical form, they are not very pleasant + to look at. We therefore only compute a few values of \p\ just for fun: +\ +lemma "p 4 = 25 / 6" and "p 5 = 6" and "p 10 = 15539 / 432" + by (simp_all add: p_recurrence''') + +text \ + Lastly, let us (informally) examine the asymptotics of this problem. + + Two of the roots have a norm of roughly $\beta \approx 0.341$, while the remaining root + \\\ is roughly 1.431. Consequently, $x^n + y^n + z^n$ is asymptotically equivalent to $\alpha^n$, + with the error being bounded by $2\cdot \beta^n$ and therefore goes to 0 very quickly. + + For $p(10) = \frac{15539}{432} \approx 35.97$, for instance, this approximation is correct + up to 6 decimals (a relative error of about 0.0001\,\%). +\ + +end + + +text \ + To really emphasise that the above puzzle has a solution and the locale is not `vacuous', + here is an interpretation of the locale using the existence theorem from before: +\ +notepad +begin + define f :: "nat \ complex" where "f = (\k. [1,2,3] ! (k - 1))" + obtain Root :: "nat \ complex" where Root: "\k. k \ {1..3} \ (\i<3. Root i ^ k) = f k" + using power_sum_puzzle_has_solution[of 3 f] by metis + define x y z where "x = Root 0" "y = Root 1" "z = Root 2" + have "x + y + z = 1" and "x^2 + y^2 + z^2 = 2" and "x^3 + y^3 + z^3 = 3" + using Root[of 1] Root[of 2] Root[of 3] by (simp_all add: eval_nat_numeral x_y_z_def f_def) + then interpret power_sum_puzzle_example x y z + by unfold_locales + have "p 5 = 6" + by (simp add: p_recurrence''') +end + +end \ No newline at end of file diff --git a/thys/Power_Sum_Polynomials/ROOT b/thys/Power_Sum_Polynomials/ROOT new file mode 100644 --- /dev/null +++ b/thys/Power_Sum_Polynomials/ROOT @@ -0,0 +1,13 @@ +chapter AFP + +session Power_Sum_Polynomials (AFP) = Symmetric_Polynomials + + options [timeout = 600] + sessions + "HOL-Computational_Algebra" + Polynomial_Factorization + theories + Power_Sum_Puzzle + document_files + "root.tex" + "root.bib" + diff --git a/thys/Power_Sum_Polynomials/document/root.bib b/thys/Power_Sum_Polynomials/document/root.bib new file mode 100644 --- /dev/null +++ b/thys/Power_Sum_Polynomials/document/root.bib @@ -0,0 +1,20 @@ +@article{zeilberger, + title = "A combinatorial proof of {N}ewton's identities", + journal = "Discrete Mathematics", + volume = "49", + number = "3", + pages = "319", + year = "1984", + issn = "0012-365X", + doi = "https://doi.org/10.1016/0012-365X(84)90171-7", + url = "http://www.sciencedirect.com/science/article/pii/0012365X84901717", + author = "Doron Zeilberger" +} + +@book{dummit, + title={Abstract Algebra}, + author={Dummit, D. S. and Foote, R. 
M.}, + isbn={9780471433347}, + year={2003}, + publisher={Wiley} +} diff --git a/thys/Power_Sum_Polynomials/document/root.tex b/thys/Power_Sum_Polynomials/document/root.tex new file mode 100644 --- /dev/null +++ b/thys/Power_Sum_Polynomials/document/root.tex @@ -0,0 +1,38 @@ +\documentclass[11pt,a4paper]{article} +\usepackage{isabelle,isabellesym} +\usepackage{amsfonts, amsmath, amssymb} + +% this should be the last package used +\usepackage{pdfsetup} + +% urls in roman style, theory text in math-similar italics +\urlstyle{rm} +\isabellestyle{it} + +\begin{document} + +\title{Power Sum Polynomials\\ and the Girard--Newton Theorem} +\author{Manuel Eberl} +\maketitle + +\begin{abstract} +This article provides a formalisation of the symmetric multivariate polynomials known as \emph{power sum polynomials}. These are of the form $p_n(X_1,\ldots, X_k) = X_1 ^ n + \ldots + X_k ^ n$. A formal proof of the Girard--Newton Theorem is also given. This theorem relates the power sum polynomials to the elementary symmetric polynomials $s_k$ in the form of a recurrence relation $(-1)^k k s_k = \sum_{i=0}^{k-1} (-1)^i s_i p_{k-i}$\ . + +As an application, this is then used to solve a generalised form of a puzzle given as an exercise in Dummit and Foote's \emph{Abstract Algebra}: For $k$ complex unknowns $x_1, \ldots, x_k$, define $p_j := x_1^j + \ldots + x_k^j$. Then for each vector $a\in\mathbb{C}^k$, show that there is exactly one solution to the system $p_1 = a_1, \ldots, p_k = a_k$ up to permutation of the $x_i$ and determine the value of $p_i$ for $i>k$. +\end{abstract} + +\tableofcontents +\newpage +\parindent 0pt\parskip 0.5ex + +\input{session} + +\bibliographystyle{abbrv} +\bibliography{root} + +\end{document} + +%%% Local Variables: +%%% mode: latex +%%% TeX-master: t +%%% End: diff --git a/thys/ROOTS b/thys/ROOTS --- a/thys/ROOTS +++ b/thys/ROOTS @@ -1,532 +1,542 @@ ADS_Functor AODV Attack_Trees Auto2_HOL Auto2_Imperative_HOL AVL-Trees AWN Abortable_Linearizable_Modules Abs_Int_ITP2012 Abstract-Hoare-Logics Abstract-Rewriting Abstract_Completeness Abstract_Soundness Adaptive_State_Counting Affine_Arithmetic Aggregation_Algebras Akra_Bazzi Algebraic_Numbers Algebraic_VCs Allen_Calculus Amortized_Complexity AnselmGod Applicative_Lifting Approximation_Algorithms Architectural_Design_Patterns Aristotles_Assertoric_Syllogistic Arith_Prog_Rel_Primes ArrowImpossibilityGS AutoFocus-Stream Automatic_Refinement AxiomaticCategoryTheory BDD BNF_Operations +Banach_Steinhaus Bell_Numbers_Spivey Berlekamp_Zassenhaus Bernoulli Bertrands_Postulate Bicategory BinarySearchTree Binding_Syntax_Theory Binomial-Heaps Binomial-Queues BNF_CC Bondy Boolean_Expression_Checkers Bounded_Deducibility_Security Buchi_Complementation Budan_Fourier Buffons_Needle Buildings BytecodeLogicJmlTypes C2KA_DistributedSystems CAVA_Automata CAVA_LTL_Modelchecker CCS CISC-Kernel CRDT CYK CakeML CakeML_Codegen Call_Arity Card_Equiv_Relations Card_Multisets Card_Number_Partitions Card_Partitions Cartan_FP Case_Labeling Catalan_Numbers Category Category2 Category3 Cauchy Cayley_Hamilton Certification_Monads Chord_Segments Circus Clean ClockSynchInst Closest_Pair_Points CofGroups Coinductive Coinductive_Languages Collections Comparison_Sort_Lower_Bound Compiling-Exceptions-Correctly Completeness Complete_Non_Orders Complex_Geometry Complx ComponentDependencies ConcurrentGC ConcurrentIMP Concurrent_Ref_Alg Concurrent_Revisions Consensus_Refined Constructive_Cryptography Constructor_Funs Containers CoreC++ Core_DOM Count_Complex_Roots CryptHOL 
CryptoBasedCompositionalProperties DFS_Framework DPT-SAT-Solver DataRefinementIBP Datatype_Order_Generator Decl_Sem_Fun_PL Decreasing-Diagrams Decreasing-Diagrams-II Deep_Learning Density_Compiler Dependent_SIFUM_Refinement Dependent_SIFUM_Type_Systems Depth-First-Search Derangements Deriving Descartes_Sign_Rule Dict_Construction Differential_Dynamic_Logic Differential_Game_Logic Dijkstra_Shortest_Path Diophantine_Eqns_Lin_Hom Dirichlet_L Dirichlet_Series Discrete_Summation DiscretePricing DiskPaxos DynamicArchitectures Dynamic_Tables E_Transcendental Echelon_Form EdmondsKarp_Maxflow Efficient-Mergesort Elliptic_Curves_Group_Law Encodability_Process_Calculi Epistemic_Logic Ergodic_Theory Error_Function Euler_MacLaurin Euler_Partition Example-Submission Factored_Transition_System_Bounding Farkas FFT FLP FOL-Fitting FOL_Harrison FOL_Seq_Calc1 Falling_Factorial_Sum FeatherweightJava Featherweight_OCL Fermat3_4 FileRefinement FinFun Finger-Trees Finite_Automata_HF First_Order_Terms First_Welfare_Theorem Fishburn_Impossibility Fisher_Yates Flow_Networks Floyd_Warshall Flyspeck-Tame FocusStreamsCaseStudies +Forcing Formal_SSA Formula_Derivatives Fourier Free-Boolean-Algebra Free-Groups FunWithFunctions FunWithTilings Functional-Automata Functional_Ordered_Resolution_Prover Furstenberg_Topology GPU_Kernel_PL Gabow_SCC Game_Based_Crypto Gauss-Jordan-Elim-Fun Gauss_Jordan Gauss_Sums +Gaussian_Integers GenClock General-Triangle Generalized_Counting_Sort Generic_Deriving Generic_Join GewirthPGCProof Girth_Chromatic GoedelGod Goodstein_Lambda GraphMarkingIBP Graph_Saturation Graph_Theory Green Groebner_Bases Groebner_Macaulay Gromov_Hyperbolicity Group-Ring-Module HOL-CSP HOLCF-Prelude HRB-Slicing Heard_Of Hello_World HereditarilyFinite Hermite Hidden_Markov_Models Higher_Order_Terms Hoare_Time HotelKeyCards Huffman Hybrid_Logic Hybrid_Multi_Lane_Spatial_Logic Hybrid_Systems_VCs HyperCTL IEEE_Floating_Point IMAP-CRDT IMO2019 IMP2 IMP2_Binary_Heap IP_Addresses Imperative_Insertion_Sort Impossible_Geometry Incompleteness Incredible_Proof_Machine Inductive_Confidentiality InfPathElimination InformationFlowSlicing InformationFlowSlicing_Inter Integration Interval_Arithmetic_Word32 Iptables_Semantics +Irrational_Series_Erdos_Straus Irrationality_J_Hancl Isabelle_C Isabelle_Meta_Model Jacobson_Basic_Algebra Jinja JinjaThreads JiveDataStoreModel Jordan_Hoelder Jordan_Normal_Form KAD KAT_and_DRA KBPs KD_Tree Key_Agreement_Strong_Adversaries Kleene_Algebra Knuth_Bendix_Order Knot_Theory +Knuth_Bendix_Order Knuth_Morris_Pratt Koenigsberg_Friendship Kruskal Kuratowski_Closure_Complement LLL_Basis_Reduction LLL_Factorization LOFT LTL LTL_to_DRA LTL_to_GBA LTL_Master_Theorem +LTL_Normal_Form Lam-ml-Normalization LambdaAuth LambdaMu Lambda_Free_KBOs Lambda_Free_RPOs +Lambert_W Landau_Symbols Laplace_Transform Latin_Square LatticeProperties Lambda_Free_EPO Launchbury Lazy-Lists-II Lazy_Case Lehmer Lifting_Definition_Option LightweightJava LinearQuantifierElim Linear_Inequalities Linear_Programming Linear_Recurrences Liouville_Numbers List-Index List-Infinite List_Interleaving List_Inversions List_Update LocalLexing Localization_Ring Locally-Nameless-Sigma Lowe_Ontological_Argument Lower_Semicontinuous Lp Lucas_Theorem MFMC_Countable MSO_Regex_Equivalence Markov_Models Marriage Mason_Stothers +Matrices_for_ODEs Matrix Matrix_Tensor Matroids Max-Card-Matching Median_Of_Medians_Selection Menger Mersenne_Primes MFODL_Monitor_Optimized MFOTL_Monitor MiniML Minimal_SSA Minkowskis_Theorem Minsky_Machines 
Modal_Logics_for_NTS Modular_Assembly_Kit_Security Monad_Memo_DP Monad_Normalisation MonoBoolTranAlgebra MonoidalCategory Monomorphic_Monad MuchAdoAboutTwo Multirelations Multi_Party_Computation Myhill-Nerode Name_Carrying_Type_Inference Nat-Interval-Logic Native_Word Nested_Multisets_Ordinals Network_Security_Policy_Verification Neumann_Morgenstern_Utility No_FTL_observers Nominal2 Noninterference_CSP Noninterference_Concurrent_Composition Noninterference_Generic_Unwinding Noninterference_Inductive_Unwinding Noninterference_Ipurge_Unwinding Noninterference_Sequential_Composition NormByEval Nullstellensatz Octonions Open_Induction OpSets Optics Optimal_BST Orbit_Stabiliser Order_Lattice_Props Ordered_Resolution_Prover Ordinal Ordinals_and_Cardinals Ordinary_Differential_Equations PCF PLM Pell POPLmark-deBruijn PSemigroupsConvolution Pairing_Heap Paraconsistency Parity_Game Partial_Function_MR Partial_Order_Reduction Password_Authentication_Protocol Perfect-Number-Thm Perron_Frobenius Pi_Calculus Pi_Transcendental Planarity_Certificates Polynomial_Factorization Polynomial_Interpolation Polynomials Poincare_Bendixson Poincare_Disc Pop_Refinement Posix-Lexing Possibilistic_Noninterference +Power_Sum_Polynomials Pratt_Certificate Presburger-Automata Prim_Dijkstra_Simple Prime_Distribution_Elementary Prime_Harmonic_Series Prime_Number_Theorem Priority_Queue_Braun Priority_Search_Trees Probabilistic_Noninterference Probabilistic_Prime_Tests Probabilistic_System_Zoo Probabilistic_Timed_Automata Probabilistic_While Projective_Geometry Program-Conflict-Analysis Promela Proof_Strategy_Language PropResPI Propositional_Proof_Systems Prpu_Maxflow PseudoHoops Psi_Calculi Ptolemys_Theorem QHLProver QR_Decomposition Quantales Quaternions Quick_Sort_Cost RIPEMD-160-SPARK ROBDD RSAPSS Ramsey-Infinite Random_BSTs Randomised_BSTs Random_Graph_Subgraph_Threshold Randomised_Social_Choice Rank_Nullity_Theorem Real_Impl +Recursion-Addition Recursion-Theory-I Refine_Imperative_HOL Refine_Monadic RefinementReactive Regex_Equivalence Regular-Sets Regular_Algebras Relation_Algebra Relational-Incorrectness-Logic Rep_Fin_Groups Residuated_Lattices Resolution_FOL Rewriting_Z Ribbon_Proofs Robbins-Conjecture Root_Balanced_Tree Routing Roy_Floyd_Warshall SATSolverVerification SDS_Impossibility SIFPL SIFUM_Type_Systems SPARCv8 Safe_OCL Saturation_Framework Secondary_Sylow Security_Protocol_Refinement Selection_Heap_Sort SenSocialChoice Separata Separation_Algebra Separation_Logic_Imperative_HOL SequentInvertibility Shivers-CFA ShortestPath Show Sigma_Commit_Crypto Signature_Groebner Simpl Simple_Firewall Simplex Skew_Heap Skip_Lists Slicing Sliding_Window_Algorithm Smooth_Manifolds Sort_Encodings Source_Coding_Theorem Special_Function_Bounds Splay_Tree Sqrt_Babylonian Stable_Matching Statecharts Stellar_Quorums Stern_Brocot Stewart_Apollonius Stirling_Formula Stochastic_Matrices Stone_Algebras Stone_Kleene_Relation_Algebras Stone_Relation_Algebras Store_Buffer_Reduction Stream-Fusion Stream_Fusion_Code Strong_Security Sturm_Sequences Sturm_Tarski Stuttering_Equivalence Subresultants Subset_Boolean_Algebras SumSquares SuperCalc Surprise_Paradox Symmetric_Polynomials Szpilrajn TESL_Language TLA Tail_Recursive_Functions Tarskis_Geometry Taylor_Models Timed_Automata Topology TortoiseHare Transcendence_Series_Hancl_Rucki Transformer_Semantics Transition_Systems_and_Automata Transitive-Closure Transitive-Closure-II Treaps Tree-Automata Tree_Decomposition Triangle Trie Twelvefold_Way Tycon Types_Tableaus_and_Goedels_God 
Universal_Turing_Machine UPF UPF_Firewall UpDown_Scheme UTP Valuation VectorSpace VeriComp Verified-Prover VerifyThis2018 VerifyThis2019 Vickrey_Clarke_Groves VolpanoSmith WHATandWHERE_Security WebAssembly Weight_Balanced_Trees Well_Quasi_Orders Winding_Number_Eval WOOT_Strong_Eventual_Consistency Word_Lib WorkerWrapper XML Zeta_Function Zeta_3_Irrational ZFC_in_HOL pGCL diff --git a/thys/Recursion-Addition/ROOT b/thys/Recursion-Addition/ROOT new file mode 100644 --- /dev/null +++ b/thys/Recursion-Addition/ROOT @@ -0,0 +1,10 @@ +chapter AFP + +session "Recursion-Addition" (AFP) = ZF + + options [timeout = 300] + theories + recursion + + document_files + "root.tex" + diff --git a/thys/Recursion-Addition/document/root.tex b/thys/Recursion-Addition/document/root.tex new file mode 100644 --- /dev/null +++ b/thys/Recursion-Addition/document/root.tex @@ -0,0 +1,36 @@ +\documentclass[11pt,a4paper]{article} +\usepackage{isabelle,isabellesym} + +% this should be the last package used +\usepackage{pdfsetup} + +% urls in roman style, theory text in math-similar italics +\urlstyle{rm} +\isabellestyle{it} + + +\begin{document} + +\title{Recursion Theorem} +\author{Georgy Dunaev} +\maketitle + +\begin{abstract} + This document contains a proof of the recursion theorem. + This is a mechanization of the proof of the recursion theorem from + the text \textit{Introduction to Set Theory}, by Karel Hrbacek + and Thomas Jech. This implementation may be used as the basis for + a model of Peano Arithmetic in ZF\@. While recursion and the natural + numbers are already available in ZF, this clean development + is much easier to follow. +\end{abstract} + +\tableofcontents + +% include generated text of all theories +\input{session} + +\bibliographystyle{abbrv} +\bibliography{root} + +\end{document} diff --git a/thys/Recursion-Addition/recursion.thy b/thys/Recursion-Addition/recursion.thy new file mode 100644 --- /dev/null +++ b/thys/Recursion-Addition/recursion.thy @@ -0,0 +1,1525 @@ +(* Title: Recursion theorem + Author: Georgy Dunaev , 2020 + Maintainer: Georgy Dunaev +*) +section "Recursion Submission" + +text \Recursion Theorem is proved in the following document. +It also contains the addition on natural numbers. +The development is done in the context of Zermelo-Fraenkel set theory.\ + +theory recursion + imports ZF +begin + +section \Basic Set Theory\ +text \Useful lemmas about sets, functions and natural numbers\ +lemma pisubsig : \Pi(A,P)\Pow(Sigma(A,P))\ +proof + fix x + assume \x \ Pi(A,P)\ + hence \x \ {f\Pow(Sigma(A,P)). A\domain(f) & function(f)}\ + by (unfold Pi_def) + thus \x \ Pow(Sigma(A, P))\ + by (rule CollectD1) +qed + +lemma apparg: + fixes f A B + assumes T0:\f:A\B\ + assumes T1:\f ` a = b\ + assumes T2:\a \ A\ + shows \\a, b\ \ f\ +proof(rule iffD2[OF func.apply_iff], rule T0) + show T:\a \ A \ f ` a = b\ + by (rule conjI[OF T2 T1]) +qed + +theorem nat_induct_bound : + assumes H0:\P(0)\ + assumes H1:\!!x. x\nat \ P(x) \ P(succ(x))\ + shows \\n\nat. P(n)\ +proof(rule ballI) + fix n + assume H2:\n\nat\ + show \P(n)\ + proof(rule nat_induct[of n]) + from H2 show \n\nat\ by assumption + next + show \P(0)\ by (rule H0) + next + fix x + assume H3:\x\nat\ + assume H4:\P(x)\ + show \P(succ(x))\ by (rule H1[OF H3 H4]) + qed +qed + +theorem nat_Tr : \\n\nat. 
m\n \ m\nat\ +proof(rule nat_induct_bound) + show \m \ 0 \ m \ nat\ by auto +next + fix x + assume H0:\x \ nat\ + assume H1:\m \ x \ m \ nat\ + show \m \ succ(x) \ m \ nat\ + proof(rule impI) + assume H2:\m\succ(x)\ + show \m \ nat\ + proof(rule succE[OF H2]) + assume H3:\m = x\ + from H0 and H3 show \m \ nat\ + by auto + next + assume H4:\m \ x\ + show \m \ nat\ + by(rule mp[OF H1 H4]) + qed + qed +qed + +(* Natural numbers are linearly ordered. *) +theorem zeroleq : \\n\nat. 0\n \ 0=n\ +proof(rule ballI) + fix n + assume H1:\n\nat\ + show \0\n\0=n\ + proof(rule nat_induct[of n]) + from H1 show \n \ nat\ by assumption + next + show \0 \ 0 \ 0 = 0\ by (rule disjI2, rule refl) + next + fix x + assume H2:\x\nat\ + assume H3:\ 0 \ x \ 0 = x\ + show \0 \ succ(x) \ 0 = succ(x)\ + proof(rule disjE[OF H3]) + assume H4:\0\x\ + show \0 \ succ(x) \ 0 = succ(x)\ + proof(rule disjI1) + show \0 \ succ(x)\ + by (rule succI2[OF H4]) + qed + next + assume H4:\0=x\ + show \0 \ succ(x) \ 0 = succ(x)\ + proof(rule disjI1) + have q:\x \ succ(x)\ by auto + from q and H4 show \0 \ succ(x)\ by auto + qed + qed + qed +qed + +theorem JH2_1ii : \m\succ(n) \ m\n\m=n\ + by auto + +theorem nat_transitive:\\n\nat. \k. \m. k \ m \ m \ n \ k \ n\ +proof(rule nat_induct_bound) + show \\k. \m. k \ m \ m \ 0 \ k \ 0\ + proof(rule allI, rule allI, rule impI) + fix k m + assume H:\k \ m \ m \ 0\ + then have H:\m \ 0\ by auto + then show \k \ 0\ by auto + qed +next + fix n + assume H0:\n \ nat\ + assume H1:\\k. + \m. + k \ m \ m \ n \ + k \ n\ + show \\k. \m. + k \ m \ + m \ succ(n) \ + k \ succ(n)\ + proof(rule allI, rule allI, rule impI) + fix k m + assume H4:\k \ m \ m \ succ(n)\ + hence H4':\m \ succ(n)\ by (rule conjunct2) + hence H4'':\m\n \ m=n\ by (rule succE, auto) + from H4 have Q:\k \ m\ by (rule conjunct1) + have H1S:\\m. k \ m \ m \ n \ k \ n\ + by (rule spec[OF H1]) + have H1S:\k \ m \ m \ n \ k \ n\ + by (rule spec[OF H1S]) + show \k \ succ(n)\ + proof(rule disjE[OF H4'']) + assume L:\m\n\ + from Q and L have QL:\k \ m \ m \ n\ by auto + have G:\k \ n\ by (rule mp [OF H1S QL]) + show \k \ succ(n)\ + by (rule succI2[OF G]) + next + assume L:\m=n\ + from Q have F:\k \ succ(m)\ by auto + from L and Q show \k \ succ(n)\ by auto + qed + qed +qed + +theorem nat_xninx : \\n\nat. \(n\n)\ +proof(rule nat_induct_bound) + show \0\0\ + by auto +next + fix x + assume H0:\x\nat\ + assume H1:\x\x\ + show \succ(x) \ succ(x)\ + proof(rule contrapos[OF H1]) + assume Q:\succ(x) \ succ(x)\ + have D:\succ(x)\x \ succ(x)=x\ + by (rule JH2_1ii[OF Q]) + show \x\x\ + proof(rule disjE[OF D]) + assume Y1:\succ(x)\x\ + have U:\x\succ(x)\ by (rule succI1) + have T:\x \ succ(x) \ succ(x) \ x \ x \ x\ + by (rule spec[OF spec[OF bspec[OF nat_transitive H0]]]) + have R:\x \ succ(x) \ succ(x) \ x\ + by (rule conjI[OF U Y1]) + show \x\x\ + by (rule mp[OF T R]) + next + assume Y1:\succ(x)=x\ + show \x\x\ + by (rule subst[OF Y1], rule Q) + qed + qed +qed + +theorem nat_asym : \\n\nat. \m. \(n\m \ m\n)\ +proof(rule ballI, rule allI) + fix n m + assume H0:\n \ nat\ + have Q:\\(n\n)\ + by(rule bspec[OF nat_xninx H0]) + show \\ (n \ m \ m \ n)\ + proof(rule contrapos[OF Q]) + assume W:\(n \ m \ m \ n)\ + show \n\n\ + by (rule mp[OF spec[OF spec[OF bspec[OF nat_transitive H0]]] W]) + qed +qed + +theorem zerolesucc :\\n\nat. 
0 \ succ(n)\ +proof(rule nat_induct_bound) + show \0\1\ + by auto +next + fix x + assume H0:\x\nat\ + assume H1:\0\succ(x)\ + show \0\succ(succ(x))\ + proof + assume J:\0 \ succ(x)\ + show \0 = succ(x)\ + by(rule notE[OF J H1]) + qed +qed + +theorem succ_le : \\n\nat. succ(m)\succ(n) \ m\n\ +proof(rule nat_induct_bound) + show \ succ(m) \ 1 \ m \ 0\ + by blast +next + fix x + assume H0:\x \ nat\ + assume H1:\succ(m) \ succ(x) \ m \ x\ + show \ succ(m) \ + succ(succ(x)) \ + m \ succ(x)\ + proof(rule impI) + assume J0:\succ(m) \ succ(succ(x))\ + show \m \ succ(x)\ + proof(rule succE[OF J0]) + assume R:\succ(m) = succ(x)\ + hence R:\m=x\ by (rule upair.succ_inject) + from R and succI1 show \m \ succ(x)\ by auto + next + assume R:\succ(m) \ succ(x)\ + have R:\m\x\ by (rule mp[OF H1 R]) + then show \m \ succ(x)\ by auto + qed + qed +qed + +theorem succ_le2 : \\n\nat. \m. succ(m)\succ(n) \ m\n\ +proof + fix n + assume H:\n\nat\ + show \\m. succ(m) \ succ(n) \ m \ n\ + proof + fix m + from succ_le and H show \succ(m) \ succ(n) \ m \ n\ by auto + qed +qed + +theorem le_succ : \\n\nat. m\n \ succ(m)\succ(n)\ +proof(rule nat_induct_bound) + show \m \ 0 \ succ(m) \ 1\ + by auto +next + fix x + assume H0:\x\nat\ + assume H1:\m \ x \ succ(m) \ succ(x)\ + show \m \ succ(x) \ + succ(m) \ succ(succ(x))\ + proof(rule impI) + assume HR1:\m\succ(x)\ + show \succ(m) \ succ(succ(x))\ + proof(rule succE[OF HR1]) + assume Q:\m = x\ + from Q show \succ(m) \ succ(succ(x))\ + by auto + next + assume Q:\m \ x\ + have Q:\succ(m) \ succ(x)\ + by (rule mp[OF H1 Q]) + from Q show \succ(m) \ succ(succ(x))\ + by (rule succI2) + qed + qed +qed + +theorem nat_linord:\\n\nat. \m\nat. m\n\m=n\n\m\ +proof(rule ballI) + fix n + assume H1:\n\nat\ + show \\m\nat. m \ n \ m = n \ n \ m\ + proof(rule nat_induct[of n]) + from H1 show \n\nat\ by assumption + next + show \\m\nat. m \ 0 \ m = 0 \ 0 \ m\ + proof + fix m + assume J:\m\nat\ + show \ m \ 0 \ m = 0 \ 0 \ m\ + proof(rule disjI2) + have Q:\0\m\0=m\ by (rule bspec[OF zeroleq J]) + show \m = 0 \ 0 \ m\ + by (rule disjE[OF Q], auto) + qed + qed + next + fix x + assume K:\x\nat\ + assume M:\\m\nat. m \ x \ m = x \ x \ m\ + show \\m\nat. 
+ m \ succ(x) \ + m = succ(x) \ + succ(x) \ m\ + proof(rule nat_induct_bound) + show \0 \ succ(x) \ 0 = succ(x) \ succ(x) \ 0\ + proof(rule disjI1) + show \0 \ succ(x)\ + by (rule bspec[OF zerolesucc K]) + qed + next + fix y + assume H0:\y \ nat\ + assume H1:\y \ succ(x) \ y = succ(x) \ succ(x) \ y\ + show \succ(y) \ succ(x) \ + succ(y) = succ(x) \ + succ(x) \ succ(y)\ + proof(rule disjE[OF H1]) + assume W:\y\succ(x)\ + show \succ(y) \ succ(x) \ + succ(y) = succ(x) \ + succ(x) \ succ(y)\ + proof(rule succE[OF W]) + assume G:\y=x\ + show \succ(y) \ succ(x) \ + succ(y) = succ(x) \ + succ(x) \ succ(y)\ + by (rule disjI2, rule disjI1, rule subst[OF G], rule refl) + next + assume G:\y \ x\ + have R:\succ(y) \ succ(x)\ + by (rule mp[OF bspec[OF le_succ K] G]) + show \succ(y) \ succ(x) \ + succ(y) = succ(x) \ + succ(x) \ succ(y)\ + by(rule disjI1, rule R) + qed + next + assume W:\y = succ(x) \ succ(x) \ y\ + show \succ(y) \ succ(x) \ + succ(y) = succ(x) \ + succ(x) \ succ(y)\ + proof(rule disjE[OF W]) + assume W:\y=succ(x)\ + show \succ(y) \ succ(x) \ + succ(y) = succ(x) \ + succ(x) \ succ(y)\ + by (rule disjI2, rule disjI2, rule subst[OF W], rule succI1) + next + assume W:\succ(x)\y\ + show \succ(y) \ succ(x) \ + succ(y) = succ(x) \ + succ(x) \ succ(y)\ + by (rule disjI2, rule disjI2, rule succI2[OF W]) + qed + qed + qed + qed +qed + +lemma tgb: + assumes knat: \k\nat\ + assumes D: \t \ k \ A\ + shows \t \ Pow(nat \ A)\ +proof - + from D + have q:\t\{t\Pow(Sigma(k,%_.A)). k\domain(t) & function(t)}\ + by(unfold Pi_def) + have J:\t \ Pow(k \ A)\ + by (rule CollectD1[OF q]) + have G:\k \ A \ nat \ A\ + proof(rule func.Sigma_mono) + from knat + show \k\nat\ + by (rule QUniv.naturals_subset_nat) + next + show \\x. x \ k \ A \ A\ + by auto + qed + show \t \ Pow(nat \ A)\ + by (rule subsetD, rule func.Pow_mono[OF G], rule J) +qed + +section \Compatible set\ +text \Union of compatible set of functions is a function.\ + +definition compat :: \[i,i]\o\ + where "compat(f1,f2) == \x.\y1.\y2.\x,y1\ \ f1 \ \x,y2\ \ f2 \ y1=y2" + +lemma compatI [intro]: + assumes H:\\x y1 y2.\\x,y1\ \ f1; \x,y2\ \ f2\\y1=y2\ + shows \compat(f1,f2)\ +proof(unfold compat_def) + show \\x y1 y2. \x, y1\ \ f1 \ \x, y2\ \ f2 \ y1 = y2\ + proof(rule allI | rule impI)+ + fix x y1 y2 + assume K:\\x, y1\ \ f1 \ \x, y2\ \ f2\ + have K1:\\x, y1\ \ f1\ by (rule conjunct1[OF K]) + have K2:\\x, y2\ \ f2\ by (rule conjunct2[OF K]) + show \y1 = y2\ by (rule H[OF K1 K2]) + qed +qed + +lemma compatD: + assumes H: \compat(f1,f2)\ + shows \\x y1 y2.\\x,y1\ \ f1; \x,y2\ \ f2\\y1=y2\ +proof - + fix x y1 y2 + assume Q1:\\x, y1\ \ f1\ + assume Q2:\\x, y2\ \ f2\ + from H have H:\\x y1 y2. \x, y1\ \ f1 \ \x, y2\ \ f2 \ y1 = y2\ + by (unfold compat_def) + show \y1=y2\ + proof(rule mp[OF spec[OF spec[OF spec[OF H]]]]) + show \\x, y1\ \ f1 \ \x, y2\ \ f2\ + by(rule conjI[OF Q1 Q2]) + qed +qed + +lemma compatE: + assumes H: \compat(f1,f2)\ + and W:\(\x y1 y2.\\x,y1\ \ f1; \x,y2\ \ f2\\y1=y2) \ E\ +shows \E\ + by (rule W, rule compatD[OF H], assumption+) + + +definition compatset :: \i\o\ + where "compatset(S) == \f1\S.\f2\S. compat(f1,f2)" + +lemma compatsetI [intro] : + assumes 1:\\f1 f2. \f1\S;f2\S\ \ compat(f1,f2)\ + shows \compatset(S)\ + by (unfold compatset_def, rule ballI, rule ballI, rule 1, assumption+) + +lemma compatsetD: + assumes H: \compatset(S)\ + shows \\f1 f2.\f1\S; f2\S\\compat(f1,f2)\ +proof - + fix f1 f2 + assume H1:\f1\S\ + assume H2:\f2\S\ + from H have H:\\f1\S.\f2\S. 
compat(f1,f2)\ + by (unfold compatset_def) + show \compat(f1,f2)\ + by (rule bspec[OF bspec[OF H H1] H2]) +qed + +lemma compatsetE: + assumes H: \compatset(S)\ + and W:\(\f1 f2.\f1\S; f2\S\\compat(f1,f2)) \ E\ +shows \E\ + by (rule W, rule compatsetD[OF H], assumption+) + +theorem upairI1 : \a \ {a, b}\ +proof + assume \a \ {b}\ + show \a = a\ by (rule refl) +qed + +theorem upairI2 : \b \ {a, b}\ +proof + assume H:\b \ {b}\ + have Y:\b \ {b}\ by (rule upair.singletonI) + show \b = a\ by (rule notE[OF H Y]) +qed + +theorem sinup : \{x} \ \x, xa\\ +proof (unfold Pair_def) + show \{x} \ {{x, x}, {x, xa}}\ + proof (rule IFOL.subst) + show \{x} \ {{x},{x,xa}}\ + by (rule upairI1) + next + show \{{x}, {x, xa}} = {{x, x}, {x, xa}}\ + by blast + qed +qed + +theorem compatsetunionfun : + fixes S + assumes H0:\compatset(S)\ + shows \function(\S)\ +proof(unfold function_def) + show \ \x y1. \x, y1\ \ \S \ + (\y2. \x, y2\ \ \S \ y1 = y2)\ + proof(rule allI, rule allI, rule impI, rule allI, rule impI) + fix x y1 y2 + assume F1:\\x, y1\ \ \S\ + assume F2:\\x, y2\ \ \S\ + show \y1=y2\ + proof(rule UnionE[OF F1], rule UnionE[OF F2]) + fix f1 f2 + assume J1:\\x, y1\ \ f1\ + assume J2:\\x, y2\ \ f2\ + assume K1:\f1 \ S\ + assume K2:\f2 \ S\ + have R:\compat(f1,f2)\ + by (rule compatsetD[OF H0 K1 K2]) + show \y1=y2\ + by(rule compatD[OF R J1 J2]) + qed + qed +qed + +theorem mkel : + assumes 1:\A\ + assumes 2:\A\B\ + shows \B\ + by (rule 2, rule 1) + +theorem valofunion : + fixes S + assumes H0:\compatset(S)\ + assumes W:\f\S\ + assumes Q:\f:A\B\ + assumes T:\a\A\ + assumes P:\f ` a = v\ + shows N:\(\S)`a = v\ +proof - + have K:\\a, v\ \ f\ + by (rule apparg[OF Q P T]) + show N:\(\S)`a = v\ + proof(rule function_apply_equality) + show \function(\S)\ + by(rule compatsetunionfun[OF H0]) + next + show \\a, v\ \ \S\ + by(rule UnionI[OF W K ]) + qed +qed + +section "Partial computation" + +definition satpc :: \[i,i,i] \ o \ + where \satpc(t,\,g) == \n \ \ . t`succ(n) = g ` \ + +text \$m$-step computation based on $a$ and $g$\ +definition partcomp :: \[i,i,i,i,i]\o\ + where \partcomp(A,t,m,a,g) == (t:succ(m)\A) \ (t`0=a) \ satpc(t,m,g)\ + +lemma partcompI [intro]: + assumes H1:\(t:succ(m)\A)\ + assumes H2:\(t`0=a)\ + assumes H3:\satpc(t,m,g)\ + shows \partcomp(A,t,m,a,g)\ +proof (unfold partcomp_def, auto) + show \t \ succ(m) \ A\ by (rule H1) + show \(t`0=a)\ by (rule H2) + show \satpc(t,m,g)\ by (rule H3) +qed + +lemma partcompD1: \partcomp(A,t,m,a,g) \ t \ succ(m) \ A\ + by (unfold partcomp_def, auto) + +lemma partcompD2: \partcomp(A,t,m,a,g) \ (t`0=a)\ + by (unfold partcomp_def, auto) + +lemma partcompD3: \partcomp(A,t,m,a,g) \ satpc(t,m,g)\ + by (unfold partcomp_def, auto) + +lemma partcompE [elim] : + assumes 1:\partcomp(A,t,m,a,g)\ + and 2:\\(t:succ(m)\A) ; (t`0=a) ; satpc(t,m,g)\ \ E\ + shows \E\ + by (rule 2, rule partcompD1[OF 1], rule partcompD2[OF 1], rule partcompD3[OF 1]) + +text \If we add ordered pair in the middle of partial computation then +it will not change.\ +lemma addmiddle: +(* fixes t m a g*) + assumes mnat:\m\nat\ + assumes F:\partcomp(A,t,m,a,g)\ + assumes xinm:\x\m\ + shows \cons(\succ(x), g ` \t ` x, x\\, t) = t\ +proof(rule partcompE[OF F]) + assume F1:\t \ succ(m) \ A\ + assume F2:\t ` 0 = a\ + assume F3:\satpc(t, m, g)\ + from F3 + have W:\\n\m. 
t ` succ(n) = g ` \t ` n, n\\ + by (unfold satpc_def) + have U:\t ` succ(x) = g ` \t ` x, x\\ + by (rule bspec[OF W xinm]) + have E:\\succ(x), (g ` \t ` x, x\)\ \ t\ + proof(rule apparg[OF F1 U]) + show \succ(x) \ succ(m)\ + by(rule mp[OF bspec[OF le_succ mnat] xinm]) + qed + show ?thesis + by (rule equalities.cons_absorb[OF E]) +qed + + +section \Set of functions \ +text \It is denoted as $F$ on page 48 in "Introduction to Set Theory".\ +definition pcs :: \[i,i,i]\i\ + where \pcs(A,a,g) == {t\Pow(nat*A). \m\nat. partcomp(A,t,m,a,g)}\ + +lemma pcs_uniq : + assumes F1:\m1\nat\ + assumes F2:\m2\nat\ + assumes H1: \partcomp(A,f1,m1,a,g)\ + assumes H2: \partcomp(A,f2,m2,a,g)\ + shows \\n\nat. n\succ(m1) \ n\succ(m2) \ f1`n = f2`n\ +proof(rule partcompE[OF H1], rule partcompE[OF H2]) + assume H11:\f1 \ succ(m1) \ A\ + assume H12:\f1 ` 0 = a \ + assume H13:\satpc(f1, m1, g)\ + assume H21:\f2 \ succ(m2) \ A\ + assume H22:\f2 ` 0 = a\ + assume H23:\satpc(f2, m2, g)\ + show \\n\nat. n\succ(m1) \ n\succ(m2) \ f1`n = f2`n\ +proof(rule nat_induct_bound) + from H12 and H22 + show \0\succ(m1) \ 0\succ(m2) \ f1 ` 0 = f2 ` 0\ + by auto +next + fix x + assume J0:\x\nat\ + assume J1:\x \ succ(m1) \ x \ succ(m2) \ f1 ` x = f2 ` x\ + from H13 have G1:\\n \ m1 . f1`succ(n) = g ` \ + by (unfold satpc_def, auto) + from H23 have G2:\\n \ m2 . f2`succ(n) = g ` \ + by (unfold satpc_def, auto) + show \succ(x) \ succ(m1) \ succ(x) \ succ(m2) \ + f1 ` succ(x) = f2 ` succ(x)\ + proof + assume K:\succ(x) \ succ(m1) \ succ(x) \ succ(m2)\ + from K have K1:\succ(x) \ succ(m1)\ by auto + from K have K2:\succ(x) \ succ(m2)\ by auto + have K1':\x \ m1\ by (rule mp[OF bspec[OF succ_le F1] K1]) + have K2':\x \ m2\ by (rule mp[OF bspec[OF succ_le F2] K2]) + have U1:\x\succ(m1)\ + by (rule Nat.succ_in_naturalD[OF K1 Nat.nat_succI[OF F1]]) + have U2:\x\succ(m2)\ + by (rule Nat.succ_in_naturalD[OF K2 Nat.nat_succI[OF F2]]) + have Y1:\f1`succ(x) = g ` \ + by (rule bspec[OF G1 K1']) + have Y2:\f2`succ(x) = g ` \ + by (rule bspec[OF G2 K2']) + have \f1 ` x = f2 ` x\ + by(rule mp[OF J1 conjI[OF U1 U2]]) + then have Y:\g ` = g ` \ by auto + from Y1 and Y2 and Y + show \f1 ` succ(x) = f2 ` succ(x)\ + by auto + qed +qed +qed + +lemma domainsubsetfunc : + assumes Q:\f1\f2\ + shows \domain(f1)\domain(f2)\ +proof + fix x + assume H:\x \ domain(f1)\ + show \x \ domain(f2)\ + proof(rule domainE[OF H]) + fix y + assume W:\\x, y\ \ f1\ + have \\x, y\ \ f2\ + by(rule subsetD[OF Q W]) + then show \x \ domain(f2)\ + by(rule domainI) + qed +qed + +lemma natdomfunc: + assumes 1:\q\A\ + assumes J0:\f1 \ Pow(nat \ A)\ + assumes U:\m1 \ domain(f1)\ + shows \m1\nat\ +proof - + from J0 have J0 : \f1 \ nat \ A\ + by auto + have J0:\domain(f1) \ domain(nat \ A)\ + by(rule func.domain_mono[OF J0]) + have F:\m1 \ domain(nat \ A)\ + by(rule subsetD[OF J0 U]) + have R:\domain(nat \ A) = nat\ + by (rule equalities.domain_of_prod[OF 1]) + show \m1 \ nat\ + by(rule subst[OF R], rule F) +qed + +lemma pcs_lem : + assumes 1:\q\A\ + shows \compatset(pcs(A, a, g))\ +proof (*(rule compatsetI)*) + fix f1 f2 + assume H1:\f1 \ pcs(A, a, g)\ + then have H1':\f1 \ {t\Pow(nat*A). \m\nat. partcomp(A,t,m,a,g)}\ by (unfold pcs_def) + hence H1'A:\f1 \ Pow(nat*A)\ by auto + hence H1'A:\f1 \ (nat*A)\ by auto + assume H2:\f2 \ pcs(A, a, g)\ + then have H2':\f2 \ {t\Pow(nat*A). \m\nat. 
partcomp(A,t,m,a,g)}\ by (unfold pcs_def) + show \compat(f1, f2)\ + proof(rule compatI) + fix x y1 y2 + assume P1:\\x, y1\ \ f1\ + assume P2:\\x, y2\ \ f2\ + show \y1 = y2\ + proof(rule CollectE[OF H1'], rule CollectE[OF H2']) + assume J0:\f1 \ Pow(nat \ A)\ + assume J1:\f2 \ Pow(nat \ A)\ + assume J2:\\m\nat. partcomp(A, f1, m, a, g)\ + assume J3:\\m\nat. partcomp(A, f2, m, a, g)\ + show \y1 = y2\ + proof(rule bexE[OF J2], rule bexE[OF J3]) + fix m1 m2 + assume K1:\partcomp(A, f1, m1, a, g)\ + assume K2:\partcomp(A, f2, m2, a, g)\ + hence K2':\(f2:succ(m2)\A) \ (f2`0=a) \ satpc(f2,m2,g)\ + by (unfold partcomp_def) + from K1 have K1'A:\(f1:succ(m1)\A)\ by (rule partcompD1) + from K2' have K2'A:\(f2:succ(m2)\A)\ by auto + from K1'A have K1'AD:\domain(f1) = succ(m1)\ + by(rule domain_of_fun) + from K2'A have K2'AD:\domain(f2) = succ(m2)\ + by(rule domain_of_fun) + have L1:\f1`x=y1\ + by (rule func.apply_equality[OF P1], rule K1'A) + have L2:\f2`x=y2\ + by(rule func.apply_equality[OF P2], rule K2'A) + have m1nat:\m1\nat\ + proof(rule natdomfunc[OF 1 J0]) + show \m1 \ domain(f1)\ + by (rule ssubst[OF K1'AD], auto) + qed + have m2nat:\m2\nat\ + proof(rule natdomfunc[OF 1 J1]) + show \m2 \ domain(f2)\ + by (rule ssubst[OF K2'AD], auto) + qed + have G1:\\x, y1\ \ (nat*A)\ + by(rule subsetD[OF H1'A P1]) + have KK:\x\nat\ + by(rule SigmaE[OF G1], auto) + (*x is in the domain of f1 i.e. succ(m1) +so we can have both x \ ?m1.2 \ x \ ?m2.2 +how to prove that m1 \ nat ? from J0 ! f1 is a subset of nat \ A*) + have W:\f1`x=f2`x\ + proof(rule mp[OF bspec[OF pcs_uniq KK] ]) + show \m1 \ nat\ + by (rule m1nat) + next + show \m2 \ nat\ + by (rule m2nat) + next + show \partcomp(A, f1, m1, a, g)\ + by (rule K1) + next + show \partcomp(A, f2, m2, a, g)\ + by (rule K2) + next + (* P1:\\x, y1\ \ f1\ + K1'A:\(f1:succ(m1)\A)\ + *) + have U1:\x \ succ(m1)\ + by (rule func.domain_type[OF P1 K1'A]) + have U2:\x \ succ(m2)\ + by (rule func.domain_type[OF P2 K2'A]) + show \x \ succ(m1) \ x \ succ(m2)\ + by (rule conjI[OF U1 U2]) + qed + from L1 and W and L2 + show \y1 = y2\ by auto + qed + qed + qed +qed + +theorem fuissu : \f \ X -> Y \ f \ X\Y\ +proof + fix w + assume H1 : \f \ X -> Y\ + then have J1:\f \ {q\Pow(Sigma(X,\_.Y)). X\domain(q) & function(q)}\ + by (unfold Pi_def) + then have J2:\f \ Pow(Sigma(X,\_.Y))\ + by auto + then have J3:\f \ Sigma(X,\_.Y)\ + by auto + assume H2 : \w \ f\ + from J3 and H2 have \w\Sigma(X,\_.Y)\ + by auto + then have J4:\w \ (\x\X. (\y\Y. {\x,y\}))\ + by auto + show \w \ X*Y\ + proof (rule UN_E[OF J4]) + fix x + assume V1:\x \ X\ + assume V2:\w \ (\y\Y. {\x, y\})\ + show \w \ X \ Y\ + proof (rule UN_E[OF V2]) + fix y + assume V3:\y \ Y\ + assume V4:\w \ {\x, y\}\ + then have V4:\w = \x, y\\ + by auto + have v5:\\x, y\ \ Sigma(X,\_.Y)\ + proof(rule SigmaI) + show \x \ X\ by (rule V1) + next + show \y \ Y\ by (rule V3) + qed + then have V5:\\x, y\ \ X*Y\ + by auto + from V4 and V5 show \w \ X \ Y\ by auto + qed + qed +qed + +theorem recuniq : + fixes f + assumes H0:\f \ nat -> A \ f ` 0 = a \ satpc(f, nat, g)\ + fixes t + assumes H1:\t \ nat -> A \ t ` 0 = a \ satpc(t, nat, g)\ + fixes x + shows \f=t\ +proof - + from H0 have H02:\\n \ nat. f`succ(n) = g ` <(f`n), n>\ by (unfold satpc_def, auto) + from H0 have H01:\f ` 0 = a\ by auto + from H0 have H00:\f \ nat -> A\ by auto + from H1 have H12:\\n \ nat. 
t`succ(n) = g ` <(t`n), n>\ by (unfold satpc_def, auto) + from H1 have H11:\t ` 0 = a\ by auto + from H1 have H10:\t \ nat -> A\ by auto + show \f=t\ + proof (rule fun_extension[OF H00 H10]) + fix x + assume K: \x \ nat\ + show \(f ` x) = (t ` x)\ + proof(rule nat_induct[of x]) + show \x \ nat\ by (rule K) + next + from H01 and H11 show \f ` 0 = t ` 0\ + by auto + next + fix x + assume A:\x\nat\ + assume B:\f`x = t`x\ + show \f ` succ(x) = t ` succ(x)\ + proof - + from H02 and A have H02':\f`succ(x) = g ` <(f`x), x>\ + by (rule bspec) + from H12 and A have H12':\t`succ(x) = g ` <(t`x), x>\ + by (rule bspec) + from B and H12' have H12'':\t`succ(x) = g ` <(f`x), x>\ by auto + from H12'' and H02' show \f ` succ(x) = t ` succ(x)\ by auto + qed + qed + qed +qed + +section \Lemmas for recursion theorem\ + +locale recthm = + fixes A :: "i" + and a :: "i" + and g :: "i" + assumes hyp1 : \a \ A\ + and hyp2 : \g : ((A*nat)\A)\ +begin + +lemma l3:\function(\pcs(A, a, g))\ + by (rule compatsetunionfun, rule pcs_lem, rule hyp1) + +lemma l1 : \\pcs(A, a, g) \ nat \ A\ +proof + fix x + assume H:\x \ \pcs(A, a, g)\ + hence H:\x \ \{t\Pow(nat*A). \m\nat. partcomp(A,t,m,a,g)}\ + by (unfold pcs_def) + show \x \ nat \ A\ + proof(rule UnionE[OF H]) + fix B + assume J1:\x\B\ + assume J2:\B \ {t \ Pow(nat \ A) . + \m\nat. partcomp(A, t, m, a, g)}\ + hence J2:\B \ Pow(nat \ A)\ by auto + hence J2:\B \ nat \ A\ by auto + from J1 and J2 show \x \ nat \ A\ + by auto + qed +qed + +lemma le1: + assumes H:\x\1\ + shows \x=0\ +proof + show \x \ 0\ + proof + fix z + assume J:\z\x\ + show \z\0\ + proof(rule succE[OF H]) + assume J:\x\0\ + show \z\0\ + by (rule notE[OF not_mem_empty J]) + next + assume K:\x=0\ + from J and K show \z\0\ + by auto + qed + qed +next + show \0 \ x\ by auto +qed + +lemma lsinglfun : \function({\0, a\})\ +proof(unfold function_def) + show \ \x y. \x, y\ \ {\0, a\} \ + (\y'. \x, y'\ \ {\0, a\} \ + y = y')\ + proof(rule allI,rule allI,rule impI,rule allI,rule impI) + fix x y y' + assume H0:\\x, y\ \ {\0, a\}\ + assume H1:\\x, y'\ \ {\0, a\}\ + show \y = y'\ + proof(rule upair.singletonE[OF H0],rule upair.singletonE[OF H1]) + assume H0:\\x, y\ = \0, a\\ + assume H1:\\x, y'\ = \0, a\\ + from H0 and H1 have H:\\x, y\ = \x, y'\\ by auto + then show \y = y'\ by auto + qed + qed +qed + +lemma singlsatpc:\satpc({\0, a\}, 0, g)\ +proof(unfold satpc_def) + show \\n\0. {\0, a\} ` succ(n) = + g ` \{\0, a\} ` n, n\\ + by auto +qed + +lemma zerostep : + shows \partcomp(A, {\0, a\}, 0, a, g)\ +proof(unfold partcomp_def) + show \{\0, a\} \ 1 -> A \ {\0, a\} ` 0 = a \ satpc({\0, a\}, 0, g)\ + proof + show \{\0, a\} \ 1 -> A\ + proof (unfold Pi_def) + show \{\0, a\} \ {f \ Pow(1 \ A) . 
1 \ domain(f) \ function(f)}\ + proof + show \{\0, a\} \ Pow(1 \ A)\ + proof(rule PowI, rule equalities.singleton_subsetI) + show \\0, a\ \ 1 \ A\ + proof + show \0 \ 1\ by auto + next + show \a \ A\ by (rule hyp1) + qed + qed + next + show \1 \ domain({\0, a\}) \ function({\0, a\})\ + proof + show \1 \ domain({\0, a\})\ + proof + fix x + assume W:\x\1\ + from W have W:\x=0\ by (rule le1) + have Y:\0\domain({\0, a\})\ + by auto + from W and Y + show \x\domain({\0, a\})\ + by auto + qed + next + show \function({\0, a\})\ + by (rule lsinglfun) + qed + qed + qed + show \{\0, a\} ` 0 = a \ satpc({\0, a\}, 0, g)\ + proof + show \{\0, a\} ` 0 = a\ + by (rule func.singleton_apply) + next + show \satpc({\0, a\}, 0, g)\ + by (rule singlsatpc) + qed + qed +qed + +lemma zainupcs : \\0, a\ \ \pcs(A, a, g)\ +proof + show \\0, a\ \ {\0, a\}\ + by auto +next + (* {\0, a\} is a 0-step computation *) + show \{\0, a\} \ pcs(A, a, g)\ + proof(unfold pcs_def) + show \{\0, a\} \ {t \ Pow(nat \ A) . \m\nat. partcomp(A, t, m, a, g)}\ + proof + show \{\0, a\} \ Pow(nat \ A)\ + proof(rule PowI, rule equalities.singleton_subsetI) + show \\0, a\ \ nat \ A\ + proof + show \0 \ nat\ by auto + next + show \a \ A\ by (rule hyp1) + qed + qed + next + show \\m\nat. partcomp(A, {\0, a\}, m, a, g)\ + proof + show \partcomp(A, {\0, a\}, 0, a, g)\ + by (rule zerostep) + next + show \0 \ nat\ by auto + qed + qed + qed +qed + +lemma l2': \0 \ domain(\pcs(A, a, g))\ +proof + show \\0, a\ \ \pcs(A, a, g)\ + by (rule zainupcs) +qed + +text \Push an ordered pair to the end of partial computation t +and obtain another partial computation.\ +lemma shortlem : + assumes mnat:\m\nat\ + assumes F:\partcomp(A,t,m,a,g)\ + shows \partcomp(A,cons(\succ(m), g ` \, t),succ(m),a,g)\ +proof(rule partcompE[OF F]) + assume F1:\t \ succ(m) \ A\ + assume F2:\t ` 0 = a\ + assume F3:\satpc(t, m, g)\ + show ?thesis (*\partcomp(A,cons(\succ(m), g ` \, t),succ(m),a,g)\ *) + proof + have ljk:\cons(\succ(m), g ` \t ` m, m\\, t) \ (cons(succ(m),succ(m)) \ A)\ + proof(rule func.fun_extend3[OF F1]) + show \succ(m) \ succ(m)\ + by (rule upair.mem_not_refl) + have tmA:\t ` m \ A\ + by (rule func.apply_funtype[OF F1], auto) + show \g ` \t ` m, m\ \ A\ + by(rule func.apply_funtype[OF hyp2], auto, rule tmA, rule mnat) + qed + have \cons(\succ(m), g ` \t ` m, m\\, t) \ (cons(succ(m),succ(m)) \ A)\ + by (rule ljk) + then have \cons(\cons(m, m), g ` \t ` m, m\\, t) \ cons(cons(m, m), cons(m, m)) \ A\ + by (unfold succ_def) + then show \cons(\succ(m), g ` \t ` m, m\\, t) \ succ(succ(m)) \ A\ + by (unfold succ_def, assumption) + show \cons(\succ(m), g ` \t ` m, m\\, t) ` 0 = a\ + proof(rule trans, rule func.fun_extend_apply[OF F1]) + show \succ(m) \ succ(m)\ by (rule upair.mem_not_refl) + show \(if 0 = succ(m) then g ` \t ` m, m\ else t ` 0) = a\ + by(rule trans, rule upair.if_not_P, auto, rule F2) + qed + show \satpc(cons(\succ(m), g ` \t ` m, m\\, t), succ(m), g)\ + proof(unfold satpc_def, rule ballI) + fix n + assume Q:\n \ succ(m)\ + show \cons(\succ(m), g ` \t ` m, m\\, t) ` succ(n) += g ` \cons(\succ(m), g ` \t ` m, m\\, t) ` n, n\\ + proof(rule trans, rule func.fun_extend_apply[OF F1], rule upair.mem_not_refl) + show \(if succ(n) = succ(m) then g ` \t ` m, m\ else t ` succ(n)) = + g ` \cons(\succ(m), g ` \t ` m, m\\, t) ` n, n\\ + proof(rule upair.succE[OF Q]) + assume Y:\n=m\ + show \(if succ(n) = succ(m) then g ` \t ` m, m\ else t ` succ(n)) = + g ` \cons(\succ(m), g ` \t ` m, m\\, t) ` n, n\\ + proof(rule trans, rule upair.if_P) + from Y show \succ(n) = 
succ(m)\ by auto + next + have L1:\t ` m = cons(\succ(m), g ` \t ` m, m\\, t) ` n\ + proof(rule sym, rule trans, rule func.fun_extend_apply[OF F1], rule upair.mem_not_refl) + show \ (if n = succ(m) then g ` \t ` m, m\ else t ` n) = t ` m\ + proof(rule trans, rule upair.if_not_P) + from Y show \t ` n = t ` m\ by auto + show \n \ succ(m)\ + proof(rule not_sym) + show \succ(m) \ n\ + by(rule subst, rule sym, rule Y, rule upair.succ_neq_self) + qed + qed + qed + from Y + have L2:\m = n\ + by auto + have L:\ \t ` m, m\ = \cons(\succ(m), g ` \t ` m, m\\, t) ` n, n\\ + by(rule subst_context2[OF L1 L2]) + show \ g ` \t ` m, m\ = g ` \cons(\succ(m), g ` \t ` m, m\\, t) ` n, n\\ + by(rule subst_context[OF L]) + qed + next + assume Y:\n \ m\ + show \(if succ(n) = succ(m) then g ` \t ` m, m\ else t ` succ(n)) = + g ` \cons(\succ(m), g ` \t ` m, m\\, t) ` n, n\\ + proof(rule trans, rule upair.if_not_P) + show \succ(n) \ succ(m)\ + by(rule contrapos, rule upair.mem_imp_not_eq, rule Y, rule upair.succ_inject, assumption) + next + have X:\cons(\succ(m), g ` \t ` m, m\\, t) ` n = t ` n\ + proof(rule trans, rule func.fun_extend_apply[OF F1], rule upair.mem_not_refl) + show \(if n = succ(m) then g ` \t ` m, m\ else t ` n) = t ` n\ + proof(rule upair.if_not_P) + show \n \ succ(m)\ + proof(rule contrapos) + assume q:"n=succ(m)" + from q and Y have M:\succ(m)\m\ + by auto + show \m\m\ + by(rule Nat.succ_in_naturalD[OF M mnat]) + next + show \m \ m\ by (rule upair.mem_not_refl) + qed + qed + qed + from F3 + have W:\\n\m. t ` succ(n) = g ` \t ` n, n\\ + by (unfold satpc_def) + have U:\t ` succ(n) = g ` \t ` n, n\\ + by (rule bspec[OF W Y]) + show \t ` succ(n) = g ` \cons(\succ(m), g ` \t ` m, m\\, t) ` n, n\\ + by (rule trans, rule U, rule sym, rule subst_context[OF X]) + qed + qed + qed + qed + qed +qed + +lemma l2:\nat \ domain(\pcs(A, a, g))\ +proof + fix x + assume G:\x\nat\ + show \x \ domain(\pcs(A, a, g))\ + proof(rule nat_induct[of x]) + show \x\nat\ by (rule G) + next + fix x + assume Q1:\x\nat\ + assume Q2:\x\domain(\pcs(A, a, g))\ + show \succ(x)\domain(\pcs(A, a, g))\ + proof(rule domainE[OF Q2]) + fix y + assume W1:\\x, y\ \ (\pcs(A, a, g))\ + show \succ(x)\domain(\pcs(A, a, g))\ + proof(rule UnionE[OF W1]) + fix t + assume E1:\\x, y\ \ t\ + assume E2:\t \ pcs(A, a, g)\ + hence E2:\t\{t\Pow(nat*A). \m \ nat. partcomp(A,t,m,a,g)}\ + by(unfold pcs_def) + have E21:\t\Pow(nat*A)\ + by(rule CollectD1[OF E2]) + have E22m:\\m\nat. partcomp(A,t,m,a,g)\ + by(rule CollectD2[OF E2]) + show \succ(x)\domain(\pcs(A, a, g))\ + proof(rule bexE[OF E22m]) + fix m + assume mnat:\m\nat\ + assume E22P:\partcomp(A,t,m,a,g)\ + hence E22:\((t:succ(m)\A) \ (t`0=a)) \ satpc(t,m,g)\ + by(unfold partcomp_def, auto) + hence E223:\satpc(t,m,g)\ by auto + hence E223:\\n \ m . 
t`succ(n) = g ` \ + by(unfold satpc_def, auto) + from E22 have E221:\(t:succ(m)\A)\ + by auto + from E221 have domt:\domain(t) = succ(m)\ + by (rule func.domain_of_fun) + from E1 have xind:\x \ domain(t)\ + by (rule equalities.domainI) + from xind and domt have xinsm:\x \ succ(m)\ + by auto + show \succ(x)\domain(\pcs(A, a, g))\ + proof + (*proof(rule exE[OF E22])*) + show \ \succ(x), g ` \ \ (\pcs(A, a, g))\ (*?*) + proof + (*t\{\succ(x), g ` \}*) + show \cons(\succ(x), g ` \, t) \ pcs(A, a, g)\ + proof(unfold pcs_def, rule CollectI) + from E21 + have L1:\t \ nat \ A\ + by auto + from Q1 have J1:\succ(x)\nat\ + by auto(*Nat.nat_succI*) + have txA: \t ` x \ A\ + by (rule func.apply_type[OF E221 xinsm]) + from txA and Q1 have txx:\\t ` x, x\ \ A \ nat\ + by auto + have secp: \g ` \t ` x, x\ \ A\ + by(rule func.apply_type[OF hyp2 txx]) + from J1 and secp + have L2:\\succ(x),g ` \t ` x, x\\ \ nat \ A\ + by auto + show \ cons(\succ(x),g ` \t ` x, x\\,t) \ Pow(nat \ A)\ + proof(rule PowI) + show \ cons(\succ(x), g ` \t ` x, x\\, t) \ nat \ A\ + proof + show \\succ(x), g ` \t ` x, x\\ \ nat \ A \ t \ nat \ A\ + by (rule conjI[OF L2 L1]) + qed + qed + next + show \\m \ nat. partcomp(A, cons(\succ(x), g ` \t ` x, x\\, t), m, a, g)\ + proof(rule succE[OF xinsm]) + assume xeqm:\x=m\ + show \\m \ nat. partcomp(A, cons(\succ(x), g ` \t ` x, x\\, t), m, a, g)\ + proof + show \partcomp(A, cons(\succ(x), g ` \t ` x, x\\, t), succ(x), a, g)\ + proof(rule shortlem[OF Q1]) + show \partcomp(A, t, x, a, g)\ + proof(rule subst[of m x], rule sym, rule xeqm) + show \partcomp(A, t, m, a, g)\ + by (rule E22P) + qed + qed + next + from Q1 show \succ(x) \ nat\ by auto + qed + next + assume xinm:\x\m\ + have lmm:\cons(\succ(x), g ` \t ` x, x\\, t) = t\ + by (rule addmiddle[OF mnat E22P xinm]) + show \\m\nat. partcomp(A, cons(\succ(x), g ` \t ` x, x\\, t), m, a, g)\ + by(rule subst[of t], rule sym, rule lmm, rule E22m) + qed + qed + next + show \\succ(x), g ` \t ` x, x\\ \ cons(\succ(x), g ` \t ` x, x\\, t)\ + by auto + qed + qed + qed + qed + qed + next + show \0 \ domain(\pcs(A, a, g))\ + by (rule l2') + qed +qed + +lemma useful : \\m\nat. \t. partcomp(A,t,m,a,g)\ +proof(rule nat_induct_bound) + show \\t. partcomp(A, t, 0, a, g)\ + proof + show \partcomp(A, {\0, a\}, 0, a, g)\ + by (rule zerostep) + qed +next + fix m + assume mnat:\m\nat\ + assume G:\\t. partcomp(A,t,m,a,g)\ + show \\t. partcomp(A,t,succ(m),a,g)\ + proof(rule exE[OF G]) + fix t + assume G:\partcomp(A,t,m,a,g)\ + show \\t. partcomp(A,t,succ(m),a,g)\ + proof + show \partcomp(A,cons(\succ(m), g ` \, t),succ(m),a,g)\ + by(rule shortlem[OF mnat G]) + qed + qed +qed + +lemma l4 : \(\pcs(A,a,g)) \ nat -> A\ +proof(unfold Pi_def) + show \ \pcs(A, a, g) \ {f \ Pow(nat \ A) . nat \ domain(f) \ function(f)}\ + proof + show \\pcs(A, a, g) \ Pow(nat \ A)\ + proof + show \\pcs(A, a, g) \ nat \ A\ + by (rule l1) + qed + next + show \nat \ domain(\pcs(A, a, g)) \ function(\pcs(A, a, g))\ + proof + show \nat \ domain(\pcs(A, a, g))\ + by (rule l2) + next + show \function(\pcs(A, a, g))\ + by (rule l3) + qed + qed +qed + +lemma l5: \(\pcs(A, a, g)) ` 0 = a\ +proof(rule func.function_apply_equality) + show \function(\pcs(A, a, g))\ + by (rule l3) +next + show \\0, a\ \ \pcs(A, a, g)\ + by (rule zainupcs) +qed + +lemma ballE2: + assumes \\x\AA. P(x)\ + assumes \x\AA\ + assumes \P(x) ==> Q\ + shows Q + by (rule assms(3), rule bspec, rule assms(1), rule assms(2)) + +text \ Recall that + \satpc(t,\,g) == \n \ \ . 
t`succ(n) = g ` \ + \partcomp(A,t,m,a,g) == (t:succ(m)\A) \ (t`0=a) \ satpc(t,m,g)\ + \pcs(A,a,g) == {t\Pow(nat*A). \m. partcomp(A,t,m,a,g)}\ +\ + +lemma l6new: \satpc(\pcs(A, a, g), nat, g)\ +proof (unfold satpc_def, rule ballI) + fix n + assume nnat:\n\nat\ + hence snnat:\succ(n)\nat\ by auto + (* l2:\nat \ domain(\pcs(A, a, g))\ *) + show \(\pcs(A, a, g)) ` succ(n) = g ` \(\pcs(A, a, g)) ` n, n\\ + proof(rule ballE2[OF useful snnat], erule exE) + fix t + assume Y:\partcomp(A, t, succ(n), a, g)\ + show \(\pcs(A, a, g)) ` succ(n) = g ` \(\pcs(A, a, g)) ` n, n\\ + proof(rule partcompE[OF Y]) + assume Y1:\t \ succ(succ(n)) \ A\ + assume Y2:\t ` 0 = a\ + assume Y3:\satpc(t, succ(n), g)\ + hence Y3:\\x \ succ(n) . t`succ(x) = g ` \ + by (unfold satpc_def) + hence Y3:\t`succ(n) = g ` \ + by (rule bspec, auto) + have e1:\(\pcs(A, a, g)) ` succ(n) = t ` succ(n)\ + proof(rule valofunion, rule pcs_lem, rule hyp1) + show \t \ pcs(A, a, g)\ + proof(unfold pcs_def, rule CollectI) + show \t \ Pow(nat \ A)\ + proof(rule tgb) + show \t \ succ(succ(n)) \ A\ by (rule Y1) + next + from snnat + show \succ(succ(n)) \ nat\ by auto + qed + next + show \\m\nat. partcomp(A, t, m, a, g)\ + by(rule bexI, rule Y, rule snnat) + qed + next + show \t \ succ(succ(n)) \ A\ by (rule Y1) + next + show \succ(n) \ succ(succ(n))\ by auto + next + show \t ` succ(n) = t ` succ(n)\ by (rule refl) + qed + have e2:\(\pcs(A, a, g)) ` n = t ` n\ + proof(rule valofunion, rule pcs_lem, rule hyp1) + show \t \ pcs(A, a, g)\ + proof(unfold pcs_def, rule CollectI) + show \t \ Pow(nat \ A)\ + proof(rule tgb) + show \t \ succ(succ(n)) \ A\ by (rule Y1) + next + from snnat + show \succ(succ(n)) \ nat\ by auto + qed + next + show \\m\nat. partcomp(A, t, m, a, g)\ + by(rule bexI, rule Y, rule snnat) + qed + next + show \t \ succ(succ(n)) \ A\ by (rule Y1) + next + show \n \ succ(succ(n))\ by auto + next + show \t ` n = t ` n\ by (rule refl) + qed + have e3:\g ` \(\pcs(A, a, g)) ` n, n\ = g ` \t ` n, n\\ + by (rule subst[OF e2], rule refl) + show \(\pcs(A, a, g)) ` succ(n) = g ` \(\pcs(A, a, g)) ` n, n\\ + by (rule trans, rule e1,rule trans, rule Y3, rule sym, rule e3) + qed + qed +qed + +section "Recursion theorem" + +theorem recursionthm: + shows \\!f. ((f \ (nat\A)) \ ((f`0) = a) \ satpc(f,nat,g))\ +(* where \satpc(t,\,g) == \n \ \ . t`succ(n) = g ` \ *) +proof + show \\f. f \ nat -> A \ f ` 0 = a \ satpc(f, nat, g)\ + proof + show \(\pcs(A,a,g)) \ nat -> A \ (\pcs(A,a,g)) ` 0 = a \ satpc(\pcs(A,a,g), nat, g)\ + proof + show \\pcs(A, a, g) \ nat -> A\ + by (rule l4) + next + show \(\pcs(A, a, g)) ` 0 = a \ satpc(\pcs(A, a, g), nat, g)\ + proof + show \(\pcs(A, a, g)) ` 0 = a\ + by (rule l5) + next + show \satpc(\pcs(A, a, g), nat, g)\ + by (rule l6new) + qed + qed + qed +next + show \\f y. f \ nat -> A \ + f ` 0 = a \ + satpc(f, nat, g) \ + y \ nat -> A \ + y ` 0 = a \ + satpc(y, nat, g) \ + f = y\ + by (rule recuniq) +qed + +end + +section "Lemmas for addition" + +text \ +Let's define function t(x) = (a+x). +Firstly we need to define a function \g:nat \ nat \ nat\, such that +\g`\t`n, n\ = t`succ(n) = a + (n + 1) = (a + n) + 1 = (t`n) + 1\ +So \g`\a, b\ = a + 1\ and \g(p) = succ(pr1(p))\ +and \satpc(t,\,g) \ \n \ \ . t`succ(n) = succ(t`n)\. +\ + +definition addg :: \i\ + where addg_def : \addg == \x\(nat*nat). succ(fst(x))\ + +lemma addgfun: \function(addg)\ + by (unfold addg_def, rule func.function_lam) + +lemma addgsubpow : \addg \ Pow((nat \ nat) \ nat)\ +proof (unfold addg_def, rule subsetD) + show \(\x\nat \ nat. 
succ(fst(x))) \ nat \ nat \ nat\ + proof(rule func.lam_type) + fix x + assume \x\nat \ nat\ + hence \fst(x)\nat\ by auto + thus \succ(fst(x)) \ nat\ by auto + qed +next + show \nat \ nat \ nat \ Pow((nat \ nat) \ nat)\ + by (rule pisubsig) +qed + +lemma addgdom : \nat \ nat \ domain(addg)\ +proof(unfold addg_def) + have e:\domain(\x\nat \ nat. succ(fst(x))) = nat \ nat\ + by (rule domain_lam) (* "domain(Lambda(A,b)) = A" *) + show \nat \ nat \ + domain(\x\nat \ nat. succ(fst(x)))\ + by (rule subst, rule sym, rule e, auto) +qed + +lemma plussucc: + assumes F:\f \ (nat\nat)\ + assumes H:\satpc(f,nat,addg)\ + shows \\n \ nat . f`succ(n) = succ(f`n)\ +proof + fix n + assume J:\n\nat\ + from H + have H:\\n \ nat . f`succ(n) = (\x\(nat*nat). succ(fst(x)))` \ + by (unfold satpc_def, unfold addg_def) + have H:\f`succ(n) = (\x\(nat*nat). succ(fst(x)))` \ + by (rule bspec[OF H J]) + have Q:\(\x\(nat*nat). succ(fst(x)))` = succ(fst())\ + proof(rule func.beta) + show \\f ` n, n\ \ nat \ nat\ + proof + show \f ` n \ nat\ + by (rule func.apply_funtype[OF F J]) + show \n \ nat\ + by (rule J) + qed + qed + have HQ:\f`succ(n) = succ(fst())\ + by (rule trans[OF H Q]) + have K:\fst() = f`n\ + by auto + hence K:\succ(fst()) = succ(f`n)\ + by (rule subst_context) + show \f`succ(n) = succ(f`n)\ + by (rule trans[OF HQ K]) +qed + +section "Definition of addition" + +text \Theorem that addition of natural numbers exists +and unique in some sense. Due to theorem 'plussucc' the term + \satpc(f,nat,addg)\ + can be replaced here with + \\n \ nat . f`succ(n) = succ(f`n)\.\ +theorem addition: + assumes \a\nat\ + shows + \\!f. ((f \ (nat\nat)) \ ((f`0) = a) \ satpc(f,nat,addg))\ +proof(rule recthm.recursionthm, unfold recthm_def) + show \a \ nat \ addg \ nat \ nat \ nat\ + proof + show \a\nat\ by (rule assms(1)) + next + show \addg \ nat \ nat \ nat\ + proof(unfold Pi_def, rule CollectI) + show \addg \ Pow((nat \ nat) \ nat)\ + by (rule addgsubpow) + next + have A2: \nat \ nat \ domain(addg)\ + by(rule addgdom) + have A3: \function(addg)\ + by (rule addgfun) + show \nat \ nat \ domain(addg) \ function(addg)\ + by(rule conjI[OF A2 A3]) + qed + qed +qed + +end diff --git a/web/entries/Banach_Steinhaus.html b/web/entries/Banach_Steinhaus.html new file mode 100644 --- /dev/null +++ b/web/entries/Banach_Steinhaus.html @@ -0,0 +1,186 @@ + + + + +Banach-Steinhaus Theorem - Archive of Formal Proofs + + + + + + + + + + + + + + + + + + + + + + + + +
Banach-Steinhaus Theorem
Title:Banach-Steinhaus Theorem
+ Authors: + + Dominique Unruh and + Jose Manuel Rodriguez Caballero +
Submission date:2020-05-02
Abstract: +We formalize in Isabelle/HOL a result +due to S. Banach and H. Steinhaus known as +the Banach-Steinhaus theorem or Uniform boundedness principle: a +pointwise-bounded family of continuous linear operators from a Banach +space to a normed space is uniformly bounded. Our approach is an +adaptation to Isabelle/HOL of a proof due to A. Sokal.
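For orientation only, the principle can be written as the following implication for a family $(T_i)_{i\in I}$ of continuous linear operators from a Banach space $X$ to a normed space $Y$ (this rendering is a standard textbook formulation, not text taken from the entry itself):
\[\Bigl(\forall x\in X.\ \sup_{i\in I}\lVert T_i\,x\rVert < \infty\Bigr) \;\Longrightarrow\; \sup_{i\in I}\lVert T_i\rVert < \infty.\]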
BibTeX: +
@article{Banach_Steinhaus-AFP,
+  author  = {Dominique Unruh and Jose Manuel Rodriguez Caballero},
+  title   = {Banach-Steinhaus Theorem},
+  journal = {Archive of Formal Proofs},
+  month   = may,
+  year    = 2020,
+  note    = {\url{http://isa-afp.org/entries/Banach_Steinhaus.html},
+            Formal proof development},
+  ISSN    = {2150-914x},
+}
+
License:BSD License
\ No newline at end of file
diff --git a/web/entries/Bernoulli.html b/web/entries/Bernoulli.html
--- a/web/entries/Bernoulli.html
+++ b/web/entries/Bernoulli.html
@@ -1,220 +1,220 @@
Bernoulli Numbers - Archive of Formal Proofs

Bernoulli Numbers
Title: Bernoulli Numbers
Authors: Lukas Bulwahn (lukas /dot/ bulwahn /at/ gmail /dot/ com) and Manuel Eberl
Submission date: 2017-01-24
Abstract:

Bernoulli numbers were first discovered in the closed-form expansion of the sum 1^m + 2^m + … + n^m for a fixed m and appear in many other places. This entry provides three different definitions for them: a recursive one, an explicit one, and one through their exponential generating function.

In addition, we prove some basic facts, e.g. their relation to sums of powers of integers and that all odd Bernoulli numbers except the first are zero, and some advanced facts like their relationship to the Riemann zeta function on positive even integers.

We also prove the correctness of the Akiyama–Tanigawa algorithm for computing Bernoulli numbers with reasonable efficiency, and we define the periodic Bernoulli polynomials (which appear e.g. in the Euler–MacLaurin summation formula and the expansion of the log-Gamma function) and prove their basic properties.
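As background for the third definition mentioned above (a standard identity, stated here for orientation rather than quoted from the entry), the exponential generating function of the Bernoulli numbers is
\[\frac{t}{e^t - 1} \;=\; \sum_{n=0}^{\infty} B_n\,\frac{t^n}{n!}.\]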

BibTeX:
@article{Bernoulli-AFP,
   author  = {Lukas Bulwahn and Manuel Eberl},
   title   = {Bernoulli Numbers},
   journal = {Archive of Formal Proofs},
   month   = jan,
   year    = 2017,
   note    = {\url{http://isa-afp.org/entries/Bernoulli.html},
             Formal proof development},
   ISSN    = {2150-914x},
 }
License: BSD License
Used by:Euler_MacLaurin, Stirling_Formula, Zeta_Function
Euler_MacLaurin, Lambert_W, Stirling_Formula, Zeta_Function

\ No newline at end of file
diff --git a/web/entries/Forcing.html b/web/entries/Forcing.html
new file mode 100644
--- /dev/null
+++ b/web/entries/Forcing.html
@@ -0,0 +1,191 @@
Formalization of Forcing in Isabelle/ZF - Archive of Formal Proofs

Formalization of Forcing in Isabelle/ZF
Title:Formalization of Forcing in Isabelle/ZF
+ Authors: + + Emmanuel Gunther (gunther /at/ famaf /dot/ unc /dot/ edu /dot/ ar), + Miguel Pagano and + Pedro Sánchez Terraf +
Submission date:2020-05-06
Abstract:
We formalize the theory of forcing in the set theory framework of Isabelle/ZF. Under the assumption of the existence of a countable transitive model of ZFC, we construct a proper generic extension and show that the latter also satisfies ZFC.
BibTeX: +
@article{Forcing-AFP,
+  author  = {Emmanuel Gunther and Miguel Pagano and Pedro Sánchez Terraf},
+  title   = {Formalization of Forcing in Isabelle/ZF},
+  journal = {Archive of Formal Proofs},
+  month   = may,
+  year    = 2020,
+  note    = {\url{http://isa-afp.org/entries/Forcing.html},
+            Formal proof development},
+  ISSN    = {2150-914x},
+}
+
License:BSD License
\ No newline at end of file
diff --git a/web/entries/Gaussian_Integers.html b/web/entries/Gaussian_Integers.html
new file mode 100644
--- /dev/null
+++ b/web/entries/Gaussian_Integers.html
@@ -0,0 +1,195 @@
Gaussian Integers - Archive of Formal Proofs

Gaussian Integers
Title:Gaussian Integers
+ Author: + + Manuel Eberl +
Submission date:2020-04-24
Abstract:

The Gaussian integers are the subring ℤ[i] of the complex numbers, i.e. the ring of all complex numbers with integral real and imaginary part. This article provides a definition of this ring as well as proofs of various basic properties, such as that they form a Euclidean ring and a full classification of their primes. An executable (albeit not very efficient) factorisation algorithm is also provided.

Lastly, this Gaussian integer formalisation is used in two short applications:

  1. The characterisation of all positive integers that can be written as sums of two squares
  2. Euclid's formula for primitive Pythagorean triples

While elementary proofs for both of these are already available in the AFP, the theory of Gaussian integers provides more concise proofs and a more high-level view.
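As a small worked instance of the two applications (chosen here purely for illustration; it is not necessarily the example treated in the entry): the factorisation $5 = (2+i)(2-i)$ in ℤ[i] corresponds to the representation $5 = 1^2 + 2^2$ as a sum of two squares, and squaring the Gaussian integer $2+i$ gives $(2+i)^2 = 3+4i$, which encodes the primitive Pythagorean triple $(3,4,5)$ since $|3+4i| = 5$.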

BibTeX: +
@article{Gaussian_Integers-AFP,
+  author  = {Manuel Eberl},
+  title   = {Gaussian Integers},
+  journal = {Archive of Formal Proofs},
+  month   = apr,
+  year    = 2020,
+  note    = {\url{http://isa-afp.org/entries/Gaussian_Integers.html},
+            Formal proof development},
+  ISSN    = {2150-914x},
+}
+
License:BSD License
Depends on:Polynomial_Factorization
\ No newline at end of file
diff --git a/web/entries/Hybrid_Systems_VCs.html b/web/entries/Hybrid_Systems_VCs.html
--- a/web/entries/Hybrid_Systems_VCs.html
+++ b/web/entries/Hybrid_Systems_VCs.html
@@ -1,201 +1,203 @@
Verification Components for Hybrid Systems - Archive of Formal Proofs

Verification Components for Hybrid Systems
Title: Verification Components for Hybrid Systems
Author: - Jonathan Julian Huerta y Munive + Jonathan Julian Huerta y Munive (jjhuertaymunive1 /at/ sheffield /dot/ ac /dot/ uk)
Submission date: 2019-09-10
Abstract: These components formalise a semantic framework for the deductive verification of hybrid systems. They support reasoning about continuous evolutions of hybrid programs in the style of differential dynamics logic. Vector fields or flows model these evolutions, and their verification is done with invariants for the former or orbits for the latter. Laws of modal Kleene algebra or categorical predicate transformers implement the verification condition generation. Examples show the approach at work.
BibTeX:
@article{Hybrid_Systems_VCs-AFP,
   author  = {Jonathan Julian Huerta y Munive},
   title   = {Verification Components for Hybrid Systems},
   journal = {Archive of Formal Proofs},
   month   = sep,
   year    = 2019,
   note    = {\url{http://isa-afp.org/entries/Hybrid_Systems_VCs.html},
             Formal proof development},
   ISSN    = {2150-914x},
 }
License: BSD License
Depends on: KAD, Ordinary_Differential_Equations, Transformer_Semantics
Used by:Matrices_for_ODEs

\ No newline at end of file
diff --git a/web/entries/Irrational_Series_Erdos_Straus.html b/web/entries/Irrational_Series_Erdos_Straus.html
new file mode 100644
--- /dev/null
+++ b/web/entries/Irrational_Series_Erdos_Straus.html
@@ -0,0 +1,201 @@
Irrationality Criteria for Series by Erdős and Straus - Archive of Formal Proofs

Irrationality Criteria for Series by Erdős and Straus
Title:Irrationality Criteria for Series by Erdős and Straus
+ Authors: + + Angeliki Koutsoukou-Argyraki and + Wenda Li +
Submission date:2020-05-12
Abstract:
We formalise certain irrationality criteria for infinite series of the form:
\[\sum_{n=1}^\infty \frac{b_n}{\prod_{i=1}^n a_i} \]
where $\{b_n\}$ is a sequence of integers and $\{a_n\}$ a sequence of positive integers with $a_n > 1$ for all large $n$. The results are due to P. Erdős and E. G. Straus [1]. In particular, we formalise Theorem 2.1, Corollary 2.10 and Theorem 3.1. The latter is an application of Theorem 2.1 involving the prime numbers.
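A familiar series of this shape, given here only to illustrate the setting (it is not claimed to be one of the formalised results), arises from $a_i = i+1$ and $b_n = 1$: then $\prod_{i=1}^n a_i = (n+1)!$ and
\[\sum_{n=1}^\infty \frac{1}{(n+1)!} = e - 2,\]
which is irrational.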
BibTeX: +
@article{Irrational_Series_Erdos_Straus-AFP,
+  author  = {Angeliki Koutsoukou-Argyraki and Wenda Li},
+  title   = {Irrationality Criteria for Series by Erdős and Straus},
+  journal = {Archive of Formal Proofs},
+  month   = may,
+  year    = 2020,
+  note    = {\url{http://isa-afp.org/entries/Irrational_Series_Erdos_Straus.html},
+            Formal proof development},
+  ISSN    = {2150-914x},
+}
+
License:BSD License
Depends on:Prime_Distribution_Elementary, Prime_Number_Theorem
\ No newline at end of file
diff --git a/web/entries/LTL.html b/web/entries/LTL.html
--- a/web/entries/LTL.html
+++ b/web/entries/LTL.html
@@ -1,231 +1,231 @@
Linear Temporal Logic - Archive of Formal Proofs

Linear Temporal Logic
Title: Linear Temporal Logic
Author: Salomon Sickert (s /dot/ sickert /at/ tum /dot/ de)
Contributor: Benedikt Seidl (benedikt /dot/ seidl /at/ tum /dot/ de)
Submission date: 2016-03-01
Abstract: This theory provides a formalisation of linear temporal logic (LTL) and unifies previous formalisations within the AFP. This entry establishes syntax and semantics for this logic and decouples it from existing entries, yielding a common environment for theories reasoning about LTL. Furthermore a parser written in SML and an executable simplifier are provided.
Change history: [2019-03-12]: Support for additional operators, implementation of common equivalence relations, definition of syntactic fragments of LTL and the minimal disjunctive normal form.
BibTeX:
@article{LTL-AFP,
   author  = {Salomon Sickert},
   title   = {Linear Temporal Logic},
   journal = {Archive of Formal Proofs},
   month   = mar,
   year    = 2016,
   note    = {\url{http://isa-afp.org/entries/LTL.html},
             Formal proof development},
   ISSN    = {2150-914x},
 }
License: BSD License
Depends on: Boolean_Expression_Checkers
Used by:LTL_Master_Theorem, LTL_to_DRA, LTL_to_GBA, Promela, Stuttering_Equivalence
LTL_Master_Theorem, LTL_Normal_Form, LTL_to_DRA, LTL_to_GBA, Promela, Stuttering_Equivalence

\ No newline at end of file
diff --git a/web/entries/LTL_Master_Theorem.html b/web/entries/LTL_Master_Theorem.html
--- a/web/entries/LTL_Master_Theorem.html
+++ b/web/entries/LTL_Master_Theorem.html
@@ -1,221 +1,223 @@
A Compositional and Unified Translation of LTL into ω-Automata - Archive of Formal Proofs

A Compositional and Unified Translation of LTL into ω-Automata
Title: A Compositional and Unified Translation of LTL into ω-Automata
Authors: Benedikt Seidl (benedikt /dot/ seidl /at/ tum /dot/ de) and Salomon Sickert (s /dot/ sickert /at/ tum /dot/ de)
Submission date: 2019-04-16
Abstract: We present a formalisation of the unified translation approach of linear temporal logic (LTL) into ω-automata from [1]. This approach decomposes LTL formulas into ``simple'' languages and allows a clear separation of concerns: first, we formalise the purely logical result yielding this decomposition; second, we instantiate this generic theory to obtain a construction for deterministic (state-based) Rabin automata (DRA). We extract from this particular instantiation an executable tool translating LTL to DRAs. To the best of our knowledge this is the first verified translation from LTL to DRAs that is proven to be double exponential in the worst case which asymptotically matches the known lower bound.

[1] Javier Esparza, Jan Kretínský, Salomon Sickert. One Theorem to Rule Them All: A Unified Translation of LTL into ω-Automata. LICS 2018

BibTeX:
@article{LTL_Master_Theorem-AFP,
   author  = {Benedikt Seidl and Salomon Sickert},
   title   = {A Compositional and Unified Translation of LTL into ω-Automata},
   journal = {Archive of Formal Proofs},
   month   = apr,
   year    = 2019,
   note    = {\url{http://isa-afp.org/entries/LTL_Master_Theorem.html},
             Formal proof development},
   ISSN    = {2150-914x},
 }
License: BSD License
Depends on: Deriving, LTL, Transition_Systems_and_Automata
Used by:LTL_Normal_Form

\ No newline at end of file
diff --git a/web/entries/LTL_Normal_Form.html b/web/entries/LTL_Normal_Form.html
new file mode 100644
--- /dev/null
+++ b/web/entries/LTL_Normal_Form.html
@@ -0,0 +1,211 @@
An Efficient Normalisation Procedure for Linear Temporal Logic: Isabelle/HOL Formalisation - Archive of Formal Proofs

An Efficient Normalisation Procedure for Linear Temporal Logic: Isabelle/HOL Formalisation
Title:An Efficient Normalisation Procedure for Linear Temporal Logic: Isabelle/HOL Formalisation
+ Author: + + Salomon Sickert (s /dot/ sickert /at/ tum /dot/ de) +
Submission date:2020-05-08
Abstract:
In the mid 80s, Lichtenstein, Pnueli, and Zuck proved a classical theorem stating that every formula of Past LTL (the extension of LTL with past operators) is equivalent to a formula of the form $\bigwedge_{i=1}^n \mathbf{G}\mathbf{F} \varphi_i \vee \mathbf{F}\mathbf{G} \psi_i$, where $\varphi_i$ and $\psi_i$ contain only past operators. Some years later, Chang, Manna, and Pnueli built on this result to derive a similar normal form for LTL. Both normalisation procedures have a non-elementary worst-case blow-up, and follow an involved path from formulas to counter-free automata to star-free regular expressions and back to formulas. We improve on both points. We present an executable formalisation of a direct and purely syntactic normalisation procedure for LTL yielding a normal form, comparable to the one by Chang, Manna, and Pnueli, that has only a single exponential blow-up.
BibTeX: +
@article{LTL_Normal_Form-AFP,
+  author  = {Salomon Sickert},
+  title   = {An Efficient Normalisation Procedure for Linear Temporal Logic: Isabelle/HOL Formalisation},
+  journal = {Archive of Formal Proofs},
+  month   = may,
+  year    = 2020,
+  note    = {\url{http://isa-afp.org/entries/LTL_Normal_Form.html},
+            Formal proof development},
+  ISSN    = {2150-914x},
+}
+
License:BSD License
Depends on:LTL, LTL_Master_Theorem
\ No newline at end of file
diff --git a/web/entries/Lambert_W.html b/web/entries/Lambert_W.html
new file mode 100644
--- /dev/null
+++ b/web/entries/Lambert_W.html
@@ -0,0 +1,209 @@
The Lambert W Function on the Reals - Archive of Formal Proofs

The Lambert W Function on the Reals
Title:The Lambert W Function on the Reals
+ Author: + + Manuel Eberl +
Submission date:2020-04-24
Abstract: +

The Lambert W function is a multi-valued function defined as the inverse function of x ↦ x e^x. Besides numerous applications in combinatorics, physics, and engineering, it also frequently occurs when solving equations containing both e^x and x, or both x and log x.

This article provides a definition of the two real-valued branches W_0(x) and W_{-1}(x) and proves various properties such as basic identities and inequalities, monotonicity, differentiability, asymptotic expansions, and the MacLaurin series of W_0(x) at x = 0.
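Concretely, and stated here only for orientation (a standard characterisation rather than a quotation from the entry), the two branches satisfy
\[W_0(x)\,e^{W_0(x)} = x \quad (x \ge -1/e), \qquad W_{-1}(x)\,e^{W_{-1}(x)} = x \quad (-1/e \le x < 0),\]
so that, for example, the unique real solution of $x e^x = c$ for $c > 0$ is $x = W_0(c)$.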

BibTeX: +
@article{Lambert_W-AFP,
+  author  = {Manuel Eberl},
+  title   = {The Lambert W Function on the Reals},
+  journal = {Archive of Formal Proofs},
+  month   = apr,
+  year    = 2020,
+  note    = {\url{http://isa-afp.org/entries/Lambert_W.html},
+            Formal proof development},
+  ISSN    = {2150-914x},
+}
+
License:BSD License
Depends on:Bernoulli, Stirling_Formula
\ No newline at end of file
diff --git a/web/entries/Matrices_for_ODEs.html b/web/entries/Matrices_for_ODEs.html
new file mode 100644
--- /dev/null
+++ b/web/entries/Matrices_for_ODEs.html
@@ -0,0 +1,191 @@
Matrices for ODEs - Archive of Formal Proofs

Matrices for ODEs
Title:Matrices for ODEs
+ Author: + + Jonathan Julian Huerta y Munive (jjhuertaymunive1 /at/ sheffield /dot/ ac /dot/ uk) +
Submission date:2020-04-19
Abstract:
Our theories formalise various matrix properties that serve to establish existence, uniqueness and characterisation of the solution to affine systems of ordinary differential equations (ODEs). In particular, we formalise the operator and maximum norm of matrices. Then we use them to prove that square matrices form a Banach space, and in this setting, we show an instance of Picard-Lindelöf’s theorem for affine systems of ODEs. Finally, we use this formalisation to verify three simple hybrid programs.
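For orientation, the kind of characterisation referred to has the following standard shape for a constant matrix $A$ and constant vector $b$ (written here informally; it is not quoted from the theories): the affine system $x'(t) = A\,x(t) + b$ with $x(0) = x_0$ has the unique solution
\[x(t) = e^{A t}\,x_0 + \int_0^t e^{A(t-s)}\,b\;\mathrm{d}s.\]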
BibTeX: +
@article{Matrices_for_ODEs-AFP,
+  author  = {Jonathan Julian Huerta y Munive},
+  title   = {Matrices for ODEs},
+  journal = {Archive of Formal Proofs},
+  month   = apr,
+  year    = 2020,
+  note    = {\url{http://isa-afp.org/entries/Matrices_for_ODEs.html},
+            Formal proof development},
+  ISSN    = {2150-914x},
+}
+
License:BSD License
Depends on:Hybrid_Systems_VCs
\ No newline at end of file
diff --git a/web/entries/Polynomial_Factorization.html b/web/entries/Polynomial_Factorization.html
--- a/web/entries/Polynomial_Factorization.html
+++ b/web/entries/Polynomial_Factorization.html
@@ -1,224 +1,224 @@
Polynomial Factorization - Archive of Formal Proofs

Polynomial Factorization
Title: Polynomial Factorization
Authors: René Thiemann and Akihisa Yamada
Submission date: 2016-01-29
Abstract: Based on existing libraries for polynomial interpolation and matrices, we formalized several factorization algorithms for polynomials, including Kronecker's algorithm for integer polynomials, Yun's square-free factorization algorithm for field polynomials, and Berlekamp's algorithm for polynomials over finite fields. By combining the last one with Hensel's lifting, we derive an efficient factorization algorithm for the integer polynomials, which is then lifted for rational polynomials by mechanizing Gauss' lemma. Finally, we assembled a combined factorization algorithm for rational polynomials, which combines all the mentioned algorithms and additionally uses the explicit formula for roots of quadratic polynomials and a rational root test.

As side products, we developed division algorithms for polynomials over integral domains, as well as primality-testing and prime-factorization algorithms for integers.
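For reference, the rational root test mentioned above is the following standard fact (not quoted from the entry): if a rational number $p/q$ in lowest terms is a root of an integer polynomial, then the numerator divides the constant coefficient and the denominator divides the leading coefficient,
\[
  a_n x^n + \cdots + a_1 x + a_0 \in \mathbb{Z}[x],\quad a_n \neq 0,\quad
  \gcd(p,q) = 1,\quad \sum_{i=0}^{n} a_i \Bigl(\tfrac{p}{q}\Bigr)^{i} = 0
  \;\Longrightarrow\; p \mid a_0 \ \text{and}\ q \mid a_n .
\]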

BibTeX:
@article{Polynomial_Factorization-AFP,
   author  = {René Thiemann and Akihisa Yamada},
   title   = {Polynomial Factorization},
   journal = {Archive of Formal Proofs},
   month   = jan,
   year    = 2016,
   note    = {\url{http://isa-afp.org/entries/Polynomial_Factorization.html},
             Formal proof development},
   ISSN    = {2150-914x},
 }
License: BSD License
Depends on: Partial_Function_MR, Polynomial_Interpolation, Show, Sqrt_Babylonian
-Used by: Dirichlet_Series, Functional_Ordered_Resolution_Prover, Jordan_Normal_Form, Knuth_Bendix_Order, Linear_Recurrences, Perron_Frobenius, Subresultants
+Used by: Dirichlet_Series, Functional_Ordered_Resolution_Prover, Gaussian_Integers, Jordan_Normal_Form, Knuth_Bendix_Order, Linear_Recurrences, Perron_Frobenius, Power_Sum_Polynomials, Subresultants

diff --git a/web/entries/Power_Sum_Polynomials.html b/web/entries/Power_Sum_Polynomials.html
new file mode 100644
--- /dev/null
+++ b/web/entries/Power_Sum_Polynomials.html
@@ -0,0 +1,222 @@

Power Sum Polynomials - Archive of Formal Proofs
Title: Power Sum Polynomials
Author: Manuel Eberl
Submission date: 2020-04-24
Abstract:

This article provides a formalisation of the symmetric multivariate polynomials known as power sum polynomials. These are of the form $p_n(X_1, \ldots, X_k) = X_1^n + \ldots + X_k^n$. A formal proof of the Girard–Newton Theorem is also given. This theorem relates the power sum polynomials to the elementary symmetric polynomials $s_k$ in the form of the recurrence relation
\[
  (-1)^k\, k\, s_k \;+\; \sum_{i\in[0,k)} (-1)^i\, s_i\, p_{k-i} \;=\; 0 .
\]
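For concreteness, with the convention $s_0 = 1$, the first few instances of this recurrence are the familiar Newton identities (standard facts, stated here for orientation rather than quoted from the entry):
\[
  s_1 = p_1, \qquad
  2\, s_2 = s_1 p_1 - p_2, \qquad
  3\, s_3 = s_2 p_1 - s_1 p_2 + p_3 .
\]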

+

As an application, this is then used to solve a generalised form of a puzzle given as an exercise in Dummit and Foote's Abstract Algebra: For $k$ complex unknowns $x_1, \ldots, x_k$, define $p_j := x_1^j + \ldots + x_k^j$. Then for each vector $a \in \mathbb{C}^k$, show that there is exactly one solution to the system $p_1 = a_1, \ldots, p_k = a_k$ up to permutation of the $x_i$, and determine the value of $p_i$ for $i > k$.

BibTeX:
@article{Power_Sum_Polynomials-AFP,
+  author  = {Manuel Eberl},
+  title   = {Power Sum Polynomials},
+  journal = {Archive of Formal Proofs},
+  month   = apr,
+  year    = 2020,
+  note    = {\url{http://isa-afp.org/entries/Power_Sum_Polynomials.html},
+            Formal proof development},
+  ISSN    = {2150-914x},
+}
+
License: BSD License
Depends on: Polynomial_Factorization, Symmetric_Polynomials
diff --git a/web/entries/Prime_Distribution_Elementary.html b/web/entries/Prime_Distribution_Elementary.html
--- a/web/entries/Prime_Distribution_Elementary.html
+++ b/web/entries/Prime_Distribution_Elementary.html
@@ -1,218 +1,218 @@

Elementary Facts About the Distribution of Primes - Archive of Formal Proofs
Title: Elementary Facts About the Distribution of Primes
Author: Manuel Eberl
Submission date: 2019-02-21
Abstract:

This entry is a formalisation of Chapter 4 (and parts of Chapter 3) of Apostol's Introduction to Analytic Number Theory. The main topics that are addressed are properties of the distribution of prime numbers that can be shown in an elementary way (i. e. without the Prime Number Theorem), the various equivalent forms of the PNT (which imply each other in elementary ways), and consequences that follow from the PNT in elementary ways. The latter include, most notably, asymptotic bounds for the number of distinct prime factors of n, the divisor function d(n), Euler's totient function φ(n), and lcm(1,…,n).
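Typical examples of such results are the following standard asymptotic facts (given here for orientation; the entry's precise statements may differ):
\[
  \omega(n) = O\!\left(\frac{\log n}{\log \log n}\right), \qquad
  \log \operatorname{lcm}(1,\ldots,n) \sim n ,
\]
where $\omega(n)$ denotes the number of distinct prime factors of $n$ and the second relation is one of the elementary consequences of the PNT.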

BibTeX:
@article{Prime_Distribution_Elementary-AFP,
   author  = {Manuel Eberl},
   title   = {Elementary Facts About the Distribution of Primes},
   journal = {Archive of Formal Proofs},
   month   = feb,
   year    = 2019,
   note    = {\url{http://isa-afp.org/entries/Prime_Distribution_Elementary.html},
             Formal proof development},
   ISSN    = {2150-914x},
 }
License: BSD License
Depends on: Prime_Number_Theorem, Zeta_Function
-Used by: IMO2019, Zeta_3_Irrational
+Used by: IMO2019, Irrational_Series_Erdos_Straus, Zeta_3_Irrational

diff --git a/web/entries/Prime_Number_Theorem.html b/web/entries/Prime_Number_Theorem.html
--- a/web/entries/Prime_Number_Theorem.html
+++ b/web/entries/Prime_Number_Theorem.html
@@ -1,234 +1,234 @@

The Prime Number Theorem - Archive of Formal Proofs
Title: The Prime Number Theorem
Authors: Manuel Eberl and Lawrence C. Paulson
Submission date: 2018-09-19
Abstract:

This article provides a short proof of the Prime Number Theorem in several equivalent forms, most notably π(x) ~ x/ln x, where π(x) is the number of primes no larger than x. It also defines other basic number-theoretic functions related to primes, like Chebyshev's functions ϑ and ψ and the “n-th prime number” function $p_n$. Various bounds and relationships between these functions are also shown. Lastly, we derive Mertens' First and Second Theorem, i.e. $\sum_{p \leq x} \ln p / p = \ln x + O(1)$ and $\sum_{p \leq x} 1/p = \ln \ln x + M + O(1/\ln x)$. We also give explicit bounds for the remainder terms.
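The standard equivalent formulations alluded to above are (stated here for orientation; the entry's exact selection may differ):
\[
  \pi(x) \sim \frac{x}{\ln x}
  \;\Longleftrightarrow\;
  \vartheta(x) \sim x
  \;\Longleftrightarrow\;
  \psi(x) \sim x
  \;\Longleftrightarrow\;
  p_n \sim n \ln n .
\]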

The proof of the Prime Number Theorem builds on a library of Dirichlet series and analytic combinatorics. We essentially follow the presentation by Newman. The core part of the proof is a Tauberian theorem for Dirichlet series, which is proven using complex analysis and then used to strengthen Mertens' First Theorem to $\sum_{p \leq x} \ln p / p = \ln x + c + o(1)$.

A variant of this proof has been formalised before by Harrison in HOL Light, and formalisations of Selberg's elementary proof exist both by Avigad et al. in Isabelle and by Carneiro in Metamath. The advantage of the analytic proof is that, while it requires more powerful mathematical tools, it is considerably shorter and clearer. This article attempts to provide a short and clear formalisation of all components of that proof using the full range of mathematical machinery available in Isabelle, staying as close as possible to Newman's simple paper proof.

BibTeX:
@article{Prime_Number_Theorem-AFP,
   author  = {Manuel Eberl and Lawrence C. Paulson},
   title   = {The Prime Number Theorem},
   journal = {Archive of Formal Proofs},
   month   = sep,
   year    = 2018,
   note    = {\url{http://isa-afp.org/entries/Prime_Number_Theorem.html},
             Formal proof development},
   ISSN    = {2150-914x},
 }
License: BSD License
Depends on: Stirling_Formula, Zeta_Function
-Used by: Prime_Distribution_Elementary, Transcendence_Series_Hancl_Rucki, Zeta_3_Irrational
+Used by: Irrational_Series_Erdos_Straus, Prime_Distribution_Elementary, Transcendence_Series_Hancl_Rucki, Zeta_3_Irrational

diff --git a/web/entries/Recursion-Addition.html b/web/entries/Recursion-Addition.html
new file mode 100644
--- /dev/null
+++ b/web/entries/Recursion-Addition.html
@@ -0,0 +1,189 @@

Recursion Theorem in ZF - Archive of Formal Proofs
Title: Recursion Theorem in ZF
Author: Georgy Dunaev (georgedunaev /at/ gmail /dot/ com)
Submission date: 2020-05-11
Abstract: This document contains a proof of the recursion theorem. This is a mechanization of the proof of the recursion theorem from the text Introduction to Set Theory, by Karel Hrbacek and Thomas Jech. This implementation may be used as the basis for a model of Peano arithmetic in ZF. While recursion and the natural numbers are already available in Isabelle/ZF, this clean development is much easier to follow.
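For orientation, the recursion theorem in the style of Hrbacek and Jech states the following (a standard formulation, not quoted from the entry; the entry's exact statement may differ): for every set $A$, every $a \in A$ and every function $g \colon A \times \omega \to A$ there is a unique function $f \colon \omega \to A$ such that
\[
  f(0) = a, \qquad f(n+1) = g(f(n), n) \quad \text{for all } n \in \omega .
\]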
BibTeX:
@article{Recursion-Addition-AFP,
+  author  = {Georgy Dunaev},
+  title   = {Recursion Theorem in ZF},
+  journal = {Archive of Formal Proofs},
+  month   = may,
+  year    = 2020,
+  note    = {\url{http://isa-afp.org/entries/Recursion-Addition.html},
+            Formal proof development},
+  ISSN    = {2150-914x},
+}
+
License: BSD License
diff --git a/web/entries/Stirling_Formula.html b/web/entries/Stirling_Formula.html
--- a/web/entries/Stirling_Formula.html
+++ b/web/entries/Stirling_Formula.html
@@ -1,212 +1,212 @@

Stirling's formula - Archive of Formal Proofs
Title: Stirling's formula
Author: Manuel Eberl
Submission date: 2016-09-01
Abstract:

This work contains a proof of Stirling's formula both for the factorial $n! \sim \sqrt{2\pi n} (n/e)^n$ on natural numbers and the real Gamma function $\Gamma(x)\sim \sqrt{2\pi/x} (x/e)^x$. The proof is based on work by Graham Jameson.

This is then extended to the full asymptotic expansion $$\log\Gamma(z) = \big(z - \tfrac{1}{2}\big)\log z - z + \tfrac{1}{2}\log(2\pi) + \sum_{k=1}^{n-1} \frac{B_{k+1}}{k(k+1)} z^{-k}\\ {} - \frac{1}{n} \int_0^\infty B_n([t])(t + z)^{-n}\,\text{d}t$$ uniformly for all complex $z\neq 0$ in the cone $\text{arg}(z)\leq \alpha$ for any $\alpha\in(0,\pi)$, with which the above asymptotic relation for Γ is also extended to complex arguments.

BibTeX:
@article{Stirling_Formula-AFP,
   author  = {Manuel Eberl},
   title   = {Stirling's formula},
   journal = {Archive of Formal Proofs},
   month   = sep,
   year    = 2016,
   note    = {\url{http://isa-afp.org/entries/Stirling_Formula.html},
             Formal proof development},
   ISSN    = {2150-914x},
 }
License: BSD License
Depends on: Bernoulli, Landau_Symbols
-Used by: Comparison_Sort_Lower_Bound, Prime_Number_Theorem
+Used by: Comparison_Sort_Lower_Bound, Lambert_W, Prime_Number_Theorem

diff --git a/web/entries/Symmetric_Polynomials.html b/web/entries/Symmetric_Polynomials.html
--- a/web/entries/Symmetric_Polynomials.html
+++ b/web/entries/Symmetric_Polynomials.html
@@ -1,219 +1,219 @@

Symmetric Polynomials - Archive of Formal Proofs
Title: Symmetric Polynomials
Author: Manuel Eberl
Submission date: 2018-09-25
Abstract:

A symmetric polynomial is a polynomial in variables $X_1, \ldots, X_n$ that does not discriminate between its variables, i.e. it is invariant under any permutation of them. These polynomials are important in the study of the relationship between the coefficients of a univariate polynomial and its roots in its algebraic closure.

This article provides a definition of symmetric polynomials and the elementary symmetric polynomials $e_1, \ldots, e_n$ and proofs of their basic properties, including three notable ones:

  • Vieta's formula, which gives an explicit expression for the $k$-th coefficient of a univariate monic polynomial in terms of its roots $x_1, \ldots, x_n$, namely $c_k = (-1)^{n-k}\, e_{n-k}(x_1, \ldots, x_n)$.
  • Second, the Fundamental Theorem of Symmetric Polynomials, which states that any symmetric polynomial is itself a uniquely determined polynomial combination of the elementary symmetric polynomials.
  • Third, as a corollary of the previous two, that given a polynomial over some ring R, any symmetric polynomial combination of its roots is also in R even when the roots are not.

Both the symmetry property itself and the witness for the Fundamental Theorem are executable.
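As a concrete instance of Vieta's formula above (a standard example, not taken from the entry), for $n = 2$:
\[
  (X - x_1)(X - x_2) = X^2 - e_1(x_1, x_2)\, X + e_2(x_1, x_2)
  = X^2 - (x_1 + x_2)\, X + x_1 x_2 ,
\]
so that $c_1 = (-1)^{1} e_1$ and $c_0 = (-1)^{2} e_2$, as the formula predicts.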

BibTeX:
@article{Symmetric_Polynomials-AFP,
   author  = {Manuel Eberl},
   title   = {Symmetric Polynomials},
   journal = {Archive of Formal Proofs},
   month   = sep,
   year    = 2018,
   note    = {\url{http://isa-afp.org/entries/Symmetric_Polynomials.html},
             Formal proof development},
   ISSN    = {2150-914x},
 }
License: BSD License
Depends on: Polynomials
-Used by: Pi_Transcendental
+Used by: Pi_Transcendental, Power_Sum_Polynomials

diff --git a/web/index.html b/web/index.html
--- a/web/index.html
+++ b/web/index.html
@@ -1,4885 +1,4961 @@

Archive of Formal Proofs

 

The Archive of Formal Proofs is a collection of proof libraries, examples, and larger scientific developments, mechanically checked in the theorem prover Isabelle. It is organized in the way of a scientific journal, is indexed by dblp and has an ISSN: 2150-914x. Submissions are refereed. The preferred citation style is available [here]. We encourage companion AFP submissions to conference and journal publications.

A development version of the archive is available as well.

 

 

2020
2020-05-13: A Formalization of Knuth–Bendix Orders
Authors: Christian Sternagel and René Thiemann
+ 2020-05-12: Irrationality Criteria for Series by Erdős and Straus
+ Authors: Angeliki Koutsoukou-Argyraki and Wenda Li
+ 2020-05-11: Recursion Theorem in ZF
+ Author: Georgy Dunaev
+ 2020-05-08: An Efficient Normalisation Procedure for Linear Temporal Logic: Isabelle/HOL Formalisation
+ Author: Salomon Sickert
+ 2020-05-06: Formalization of Forcing in Isabelle/ZF
+ Authors: Emmanuel Gunther, Miguel Pagano and Pedro Sánchez Terraf
+ 2020-05-02: Banach-Steinhaus Theorem
+ Authors: Dominique Unruh and Jose Manuel Rodriguez Caballero
2020-04-27: Attack Trees in Isabelle for GDPR compliance of IoT healthcare systems
Author: Florian Kammueller
+ 2020-04-24: Power Sum Polynomials
+ Author: Manuel Eberl
+ 2020-04-24: The Lambert W Function on the Reals
+ Author: Manuel Eberl
+ 2020-04-24: Gaussian Integers
+ Author: Manuel Eberl
+ 2020-04-19: Matrices for ODEs
+ Author: Jonathan Julian Huerta y Munive
2020-04-16: Authenticated Data Structures As Functors
Authors: Andreas Lochbihler and Ognjen Marić
2020-04-10: Formalization of an Algorithm for Greedily Computing Associative Aggregations on Sliding Windows
Authors: Lukas Heimes, Dmitriy Traytel and Joshua Schneider
2020-04-09: A Comprehensive Framework for Saturation Theorem Proving
Author: Sophie Tourret
2020-04-09: Formalization of an Optimized Monitoring Algorithm for Metric First-Order Dynamic Logic with Aggregations
Authors: Thibault Dardinier, Lukas Heimes, Martin Raszyk, Joshua Schneider and Dmitriy Traytel
2020-04-07: Lucas's Theorem
Author: Chelsea Edmonds
2020-03-25: Strong Eventual Consistency of the Collaborative Editing Framework WOOT
Authors: Emin Karayel and Edgar Gonzàlez
2020-03-22: Furstenberg's topology and his proof of the infinitude of primes
Author: Manuel Eberl
2020-03-12: An Under-Approximate Relational Logic
Author: Toby Murray
2020-03-07: Hello World
Authors: Cornelius Diekmann and Lars Hupel
2020-02-21: Implementing the Goodstein Function in λ-Calculus
Author: Bertram Felgenhauer
2020-02-10: A Generic Framework for Verified Compilers
Author: Martin Desharnais
2020-02-01: Arithmetic progressions and relative primes
Author: José Manuel Rodríguez Caballero
2020-01-31: A Hierarchy of Algebras for Boolean Subsets
Authors: Walter Guttmann and Bernhard Möller
2020-01-17: Mersenne primes and the Lucas–Lehmer test
Author: Manuel Eberl
2020-01-16: Verified Approximation Algorithms
Authors: Robin Eßmann, Tobias Nipkow and Simon Robillard
2020-01-13: Closest Pair of Points Algorithms
Authors: Martin Rau and Tobias Nipkow
2020-01-09: Skip Lists
Authors: Max W. Haslbeck and Manuel Eberl
2020-01-06: Bicategories
Author: Eugene W. Stark

 

2019
2019-12-27: The Irrationality of ζ(3)
Author: Manuel Eberl
2019-12-20: Formalizing a Seligman-Style Tableau System for Hybrid Logic
Author: Asta Halkjær From
2019-12-18: The Poincaré-Bendixson Theorem
Authors: Fabian Immler and Yong Kiam Tan
2019-12-16: Poincaré Disc Model
Authors: Danijela Simić, Filip Marić and Pierre Boutry
2019-12-16: Complex Geometry
Authors: Filip Marić and Danijela Simić
2019-12-10: Gauss Sums and the Pólya–Vinogradov Inequality
Authors: Rodrigo Raya and Manuel Eberl
2019-12-04: An Efficient Generalization of Counting Sort for Large, possibly Infinite Key Ranges
Author: Pasquale Noce
2019-11-27: Interval Arithmetic on 32-bit Words
Author: Brandon Bohrer
2019-10-24: Zermelo Fraenkel Set Theory in Higher-Order Logic
Author: Lawrence C. Paulson
2019-10-22: Isabelle/C
Authors: Frédéric Tuong and Burkhart Wolff
2019-10-16: VerifyThis 2019 -- Polished Isabelle Solutions
Authors: Peter Lammich and Simon Wimmer
2019-10-08: Aristotle's Assertoric Syllogistic
Author: Angeliki Koutsoukou-Argyraki
2019-10-07: Sigma Protocols and Commitment Schemes
Authors: David Butler and Andreas Lochbihler
2019-10-04: Clean - An Abstract Imperative Programming Language and its Theory
Authors: Frédéric Tuong and Burkhart Wolff
2019-09-16: Formalization of Multiway-Join Algorithms
Author: Thibault Dardinier
2019-09-10: Verification Components for Hybrid Systems
Author: Jonathan Julian Huerta y Munive
2019-09-06: Fourier Series
Author: Lawrence C Paulson
2019-08-30: A Case Study in Basic Algebra
Author: Clemens Ballarin
2019-08-16: Formalisation of an Adaptive State Counting Algorithm
Author: Robert Sachtleben
2019-08-14: Laplace Transform
Author: Fabian Immler
2019-08-06: Linear Programming
Authors: Julian Parsert and Cezary Kaliszyk
2019-08-06: Communicating Concurrent Kleene Algebra for Distributed Systems Specification
Authors: Maxime Buyse and Jason Jaskolka
2019-08-05: Selected Problems from the International Mathematical Olympiad 2019
Author: Manuel Eberl
2019-08-01: Stellar Quorum Systems
Author: Giuliano Losa
2019-07-30: A Formal Development of a Polychronous Polytimed Coordination Language
Authors: Hai Nguyen Van, Frédéric Boulanger and Burkhart Wolff
2019-07-27: Szpilrajn Extension Theorem
Author: Peter Zeller
2019-07-18: A Sequent Calculus for First-Order Logic
Author: Asta Halkjær From
2019-07-08: A Verified Code Generator from Isabelle/HOL to CakeML
Author: Lars Hupel
2019-07-04: Formalization of a Monitoring Algorithm for Metric First-Order Temporal Logic
Authors: Joshua Schneider and Dmitriy Traytel
2019-06-27: Complete Non-Orders and Fixed Points
Authors: Akihisa Yamada and Jérémy Dubut
2019-06-25: Priority Search Trees
Authors: Peter Lammich and Tobias Nipkow
2019-06-25: Purely Functional, Simple, and Efficient Implementation of Prim and Dijkstra
Authors: Peter Lammich and Tobias Nipkow
2019-06-21: Linear Inequalities
Authors: Ralph Bottesch, Alban Reynaud and René Thiemann
2019-06-16: Hilbert's Nullstellensatz
Author: Alexander Maletzky
2019-06-15: Gröbner Bases, Macaulay Matrices and Dubé's Degree Bounds
Author: Alexander Maletzky
2019-06-13: Binary Heaps for IMP2
Author: Simon Griebel
2019-06-03: Differential Game Logic
Author: André Platzer
2019-05-30: Multidimensional Binary Search Trees
Author: Martin Rau
2019-05-14: Formalization of Generic Authenticated Data Structures
Authors: Matthias Brun and Dmitriy Traytel
2019-05-09: Multi-Party Computation
Authors: David Aspinall and David Butler
2019-04-26: HOL-CSP Version 2.0
Authors: Safouan Taha, Lina Ye and Burkhart Wolff
2019-04-16: A Compositional and Unified Translation of LTL into ω-Automata
Authors: Benedikt Seidl and Salomon Sickert
2019-04-06: A General Theory of Syntax with Bindings
Authors: Lorenzo Gheri and Andrei Popescu
2019-03-27: The Transcendence of Certain Infinite Series
Authors: Angeliki Koutsoukou-Argyraki and Wenda Li
2019-03-24: Quantum Hoare Logic
Authors: Junyi Liu, Bohua Zhan, Shuling Wang, Shenggang Ying, Tao Liu, Yangjia Li, Mingsheng Ying and Naijun Zhan
2019-03-09: Safe OCL
Author: Denis Nikiforov
2019-02-21: Elementary Facts About the Distribution of Primes
Author: Manuel Eberl
2019-02-14: Kruskal's Algorithm for Minimum Spanning Forest
Authors: Maximilian P.L. Haslbeck, Peter Lammich and Julian Biendarra
2019-02-11: Probabilistic Primality Testing
Authors: Daniel Stüwe and Manuel Eberl
2019-02-08: Universal Turing Machine
Authors: Jian Xu, Xingyuan Zhang, Christian Urban and Sebastiaan J. C. Joosten
2019-02-01: Isabelle/UTP: Mechanised Theory Engineering for Unifying Theories of Programming
Authors: Simon Foster, Frank Zeyda, Yakoub Nemouchi, Pedro Ribeiro and Burkhart Wolff
2019-02-01: The Inversions of a List
Author: Manuel Eberl
2019-01-17: Farkas' Lemma and Motzkin's Transposition Theorem
Authors: Ralph Bottesch, Max W. Haslbeck and René Thiemann
2019-01-15: IMP2 – Simple Program Verification in Isabelle/HOL
Authors: Peter Lammich and Simon Wimmer
2019-01-15: An Algebra for Higher-Order Terms
Author: Lars Hupel
2019-01-07: A Reduction Theorem for Store Buffers
Authors: Ernie Cohen and Norbert Schirmer

 

2018
2018-12-26: A Formal Model of the Document Object Model
Authors: Achim D. Brucker and Michael Herzberg
2018-12-25: Formalization of Concurrent Revisions
Author: Roy Overbeek
2018-12-21: Verifying Imperative Programs using Auto2
Author: Bohua Zhan
2018-12-17: Constructive Cryptography in HOL
Authors: Andreas Lochbihler and S. Reza Sefidgar
2018-12-11: Transformer Semantics
Author: Georg Struth
2018-12-11: Quantales
Author: Georg Struth
2018-12-11: Properties of Orderings and Lattices
Author: Georg Struth
2018-11-23: Graph Saturation
Author: Sebastiaan J. C. Joosten
2018-11-23: A Verified Functional Implementation of Bachmair and Ganzinger's Ordered Resolution Prover
Authors: Anders Schlichtkrull, Jasmin Christian Blanchette and Dmitriy Traytel
2018-11-20: Auto2 Prover
Author: Bohua Zhan
2018-11-16: Matroids
Author: Jonas Keinholz
2018-11-06: Deriving generic class instances for datatypes
Authors: Jonas Rädle and Lars Hupel
2018-10-30: Formalisation and Evaluation of Alan Gewirth's Proof for the Principle of Generic Consistency in Isabelle/HOL
Authors: David Fuenmayor and Christoph Benzmüller
2018-10-29: Epistemic Logic
Author: Asta Halkjær From
2018-10-22: Smooth Manifolds
Authors: Fabian Immler and Bohua Zhan
2018-10-19: Randomised Binary Search Trees
Author: Manuel Eberl
2018-10-19: Formalization of the Embedding Path Order for Lambda-Free Higher-Order Terms
Author: Alexander Bentkamp
2018-10-12: Upper Bounding Diameters of State Spaces of Factored Transition Systems
Authors: Friedrich Kurz and Mohammad Abdulaziz
2018-09-28: The Transcendence of π
Author: Manuel Eberl
2018-09-25: Symmetric Polynomials
Author: Manuel Eberl
2018-09-20: Signature-Based Gröbner Basis Algorithms
Author: Alexander Maletzky
2018-09-19: The Prime Number Theorem
Authors: Manuel Eberl and Lawrence C. Paulson
2018-09-15: Aggregation Algebras
Author: Walter Guttmann
2018-09-14: Octonions
Author: Angeliki Koutsoukou-Argyraki
2018-09-05: Quaternions
Author: Lawrence C. Paulson
2018-09-02: The Budan-Fourier Theorem and Counting Real Roots with Multiplicity
Author: Wenda Li
2018-08-24: An Incremental Simplex Algorithm with Unsatisfiable Core Generation
Authors: Filip Marić, Mirko Spasić and René Thiemann
2018-08-14: Minsky Machines
Author: Bertram Felgenhauer
2018-07-16: Pricing in discrete financial models
Author: Mnacho Echenim
2018-07-04: Von-Neumann-Morgenstern Utility Theorem
Authors: Julian Parsert and Cezary Kaliszyk
2018-06-23: Pell's Equation
Author: Manuel Eberl
2018-06-14: Projective Geometry
Author: Anthony Bordg
2018-06-14: The Localization of a Commutative Ring
Author: Anthony Bordg
2018-06-05: Partial Order Reduction
Author: Julian Brunner
2018-05-27: Optimal Binary Search Trees
Authors: Tobias Nipkow and Dániel Somogyi
2018-05-25: Hidden Markov Models
Author: Simon Wimmer
2018-05-24: Probabilistic Timed Automata
Authors: Simon Wimmer and Johannes Hölzl
2018-05-23: Irrational Rapidly Convergent Series
Authors: Angeliki Koutsoukou-Argyraki and Wenda Li
2018-05-23: Axiom Systems for Category Theory in Free Logic
Authors: Christoph Benzmüller and Dana Scott
2018-05-22: Monadification, Memoization and Dynamic Programming
Authors: Simon Wimmer, Shuwei Hu and Tobias Nipkow
2018-05-10: OpSets: Sequential Specifications for Replicated Datatypes
Authors: Martin Kleppmann, Victor B. F. Gomes, Dominic P. Mulligan and Alastair R. Beresford
2018-05-07: An Isabelle/HOL Formalization of the Modular Assembly Kit for Security Properties
Authors: Oliver Bračevac, Richard Gay, Sylvia Grewe, Heiko Mantel, Henning Sudbrock and Markus Tasch
2018-04-29: WebAssembly
Author: Conrad Watt
2018-04-27: VerifyThis 2018 - Polished Isabelle Solutions
Authors: Peter Lammich and Simon Wimmer
2018-04-24: Bounded Natural Functors with Covariance and Contravariance
Authors: Andreas Lochbihler and Joshua Schneider
2018-03-22: The Incompatibility of Fishburn-Strategyproofness and Pareto-Efficiency
Authors: Felix Brandt, Manuel Eberl, Christian Saile and Christian Stricker
2018-03-13: Weight-Balanced Trees
Authors: Tobias Nipkow and Stefan Dirix
2018-03-12: CakeML
Authors: Lars Hupel and Yu Zhang
2018-03-01: A Theory of Architectural Design Patterns
Author: Diego Marmsoler
2018-02-26: Hoare Logics for Time Bounds
Authors: Maximilian P. L. Haslbeck and Tobias Nipkow
2018-02-06: Treaps
Authors: Maximilian Haslbeck, Manuel Eberl and Tobias Nipkow
2018-02-06: A verified factorization algorithm for integer polynomials with polynomial complexity
Authors: Jose Divasón, Sebastiaan Joosten, René Thiemann and Akihisa Yamada
2018-02-06: First-Order Terms
Authors: Christian Sternagel and René Thiemann
2018-02-06: The Error Function
Author: Manuel Eberl
2018-02-02: A verified LLL algorithm
Authors: Ralph Bottesch, Jose Divasón, Maximilian Haslbeck, Sebastiaan Joosten, René Thiemann and Akihisa Yamada
2018-01-18: Formalization of Bachmair and Ganzinger's Ordered Resolution Prover
Authors: Anders Schlichtkrull, Jasmin Christian Blanchette, Dmitriy Traytel and Uwe Waldmann
2018-01-16: Gromov Hyperbolicity
Author: Sebastien Gouezel
2018-01-11: An Isabelle/HOL formalisation of Green's Theorem
Authors: Mohammad Abdulaziz and Lawrence C. Paulson
2018-01-08: Taylor Models
Authors: Christoph Traut and Fabian Immler

 

2017
2017-12-22: The Falling Factorial of a Sum
Author: Lukas Bulwahn
2017-12-21: The Median-of-Medians Selection Algorithm
Author: Manuel Eberl
2017-12-21: The Mason–Stothers Theorem
Author: Manuel Eberl
2017-12-21: Dirichlet L-Functions and Dirichlet's Theorem
Author: Manuel Eberl
2017-12-19: Operations on Bounded Natural Functors
Authors: Jasmin Christian Blanchette, Andrei Popescu and Dmitriy Traytel
2017-12-18: The string search algorithm by Knuth, Morris and Pratt
Authors: Fabian Hellauer and Peter Lammich
2017-11-22: Stochastic Matrices and the Perron-Frobenius Theorem
Author: René Thiemann
2017-11-09: The IMAP CmRDT
Authors: Tim Jungnickel, Lennart Oldenburg and Matthias Loibl
2017-11-06: Hybrid Multi-Lane Spatial Logic
Author: Sven Linker
2017-10-26: The Kuratowski Closure-Complement Theorem
Authors: Peter Gammie and Gianpaolo Gioiosa
2017-10-19: Transition Systems and Automata
Author: Julian Brunner
2017-10-19: Büchi Complementation
Author: Julian Brunner
2017-10-17: Evaluate Winding Numbers through Cauchy Indices
Author: Wenda Li
2017-10-17: Count the Number of Complex Roots
Author: Wenda Li
2017-10-14: Homogeneous Linear Diophantine Equations
Authors: Florian Messner, Julian Parsert, Jonas Schöpf and Christian Sternagel
2017-10-12: The Hurwitz and Riemann ζ Functions
Author: Manuel Eberl
2017-10-12: Linear Recurrences
Author: Manuel Eberl
2017-10-12: Dirichlet Series
Author: Manuel Eberl
2017-09-21: Computer-assisted Reconstruction and Assessment of E. J. Lowe's Modal Ontological Argument
Authors: David Fuenmayor and Christoph Benzmüller
2017-09-17: Representation and Partial Automation of the Principia Logico-Metaphysica in Isabelle/HOL
Author: Daniel Kirchner
2017-09-06: Anselm's God in Isabelle/HOL
Author: Ben Blumson
2017-09-01: Microeconomics and the First Welfare Theorem
Authors: Julian Parsert and Cezary Kaliszyk
2017-08-20: Root-Balanced Tree
Author: Tobias Nipkow
2017-08-20: Orbit-Stabiliser Theorem with Application to Rotational Symmetries
Author: Jonas Rädle
2017-08-16: The LambdaMu-calculus
Authors: Cristina Matache, Victor B. F. Gomes and Dominic P. Mulligan
2017-07-31: Stewart's Theorem and Apollonius' Theorem
Author: Lukas Bulwahn
2017-07-28: Dynamic Architectures
Author: Diego Marmsoler
2017-07-21: Declarative Semantics for Functional Languages
Author: Jeremy Siek
2017-07-15: HOLCF-Prelude
Authors: Joachim Breitner, Brian Huffman, Neil Mitchell and Christian Sternagel
2017-07-13: Minkowski's Theorem
Author: Manuel Eberl
2017-07-09: Verified Metatheory and Type Inference for a Name-Carrying Simply-Typed Lambda Calculus
Author: Michael Rawson
2017-07-07: A framework for establishing Strong Eventual Consistency for Conflict-free Replicated Datatypes
Authors: Victor B. F. Gomes, Martin Kleppmann, Dominic P. Mulligan and Alastair R. Beresford
2017-07-06: Stone-Kleene Relation Algebras
Author: Walter Guttmann
2017-06-21: Propositional Proof Systems
Authors: Julius Michaelis and Tobias Nipkow
2017-06-13: Partial Semigroups and Convolution Algebras
Authors: Brijesh Dongol, Victor B. F. Gomes, Ian J. Hayes and Georg Struth
2017-06-06: Buffon's Needle Problem
Author: Manuel Eberl
2017-06-01: Formalizing Push-Relabel Algorithms
Authors: Peter Lammich and S. Reza Sefidgar
2017-06-01: Flow Networks and the Min-Cut-Max-Flow Theorem
Authors: Peter Lammich and S. Reza Sefidgar
2017-05-25: Optics
Authors: Simon Foster and Frank Zeyda
2017-05-24: Developing Security Protocols by Refinement
Authors: Christoph Sprenger and Ivano Somaini
2017-05-24: Dictionary Construction
Author: Lars Hupel
2017-05-08: The Floyd-Warshall Algorithm for Shortest Paths
Authors: Simon Wimmer and Peter Lammich
2017-05-05: Probabilistic while loop
Author: Andreas Lochbihler
2017-05-05: Effect polymorphism in higher-order logic
Author: Andreas Lochbihler
2017-05-05: Monad normalisation
Authors: Joshua Schneider, Manuel Eberl and Andreas Lochbihler
2017-05-05: Game-based cryptography in HOL
Authors: Andreas Lochbihler, S. Reza Sefidgar and Bhargav Bhatt
2017-05-05: CryptHOL
Author: Andreas Lochbihler
2017-05-04: Monoidal Categories
Author: Eugene W. Stark
2017-05-01: Types, Tableaus and Gödel’s God in Isabelle/HOL
Authors: David Fuenmayor and Christoph Benzmüller
2017-04-28: Local Lexing
Author: Steven Obua
2017-04-19: Constructor Functions
Author: Lars Hupel
2017-04-18: Lazifying case constants
Author: Lars Hupel
2017-04-06: Subresultants
Authors: Sebastiaan Joosten, René Thiemann and Akihisa Yamada
2017-04-04: Expected Shape of Random Binary Search Trees
Author: Manuel Eberl
2017-03-15: The number of comparisons in QuickSort
Author: Manuel Eberl
2017-03-15: Lower bound on comparison-based sorting algorithms
Author: Manuel Eberl
2017-03-10: The Euler–MacLaurin Formula
Author: Manuel Eberl
2017-02-28: The Group Law for Elliptic Curves
Author: Stefan Berghofer
2017-02-26: Menger's Theorem
Author: Christoph Dittmann
2017-02-13: Differential Dynamic Logic
Author: Brandon Bohrer
2017-02-10: Abstract Soundness
Authors: Jasmin Christian Blanchette, Andrei Popescu and Dmitriy Traytel
2017-02-07: Stone Relation Algebras
Author: Walter Guttmann
2017-01-31: Refining Authenticated Key Agreement with Strong Adversaries
Authors: Joseph Lallemand and Christoph Sprenger
2017-01-24: Bernoulli Numbers
Authors: Lukas Bulwahn and Manuel Eberl
2017-01-17: Minimal Static Single Assignment Form
Authors: Max Wagner and Denis Lohner
2017-01-17: Bertrand's postulate
Authors: Julian Biendarra and Manuel Eberl
2017-01-12: The Transcendence of e
Author: Manuel Eberl
2017-01-08: Formal Network Models and Their Application to Firewall Policies
Authors: Achim D. Brucker, Lukas Brügger and Burkhart Wolff
2017-01-03: Verification of a Diffie-Hellman Password-based Authentication Protocol by Extending the Inductive Method
Author: Pasquale Noce
2017-01-01: First-Order Logic According to Harrison
Authors: Alexander Birch Jensen, Anders Schlichtkrull and Jørgen Villadsen

 

2016
2016-12-30: Concurrent Refinement Algebra and Rely Quotients
Authors: Julian Fell, Ian J. Hayes and Andrius Velykis
2016-12-29: The Twelvefold Way
Author: Lukas Bulwahn
2016-12-20: Proof Strategy Language
Author: Yutaka Nagashima
2016-12-07: Paraconsistency
Authors: Anders Schlichtkrull and Jørgen Villadsen
2016-11-29: COMPLX: A Verification Framework for Concurrent Imperative Programs
Authors: Sidney Amani, June Andronick, Maksym Bortin, Corey Lewis, Christine Rizkallah and Joseph Tuong
2016-11-23: Abstract Interpretation of Annotated Commands
Author: Tobias Nipkow
2016-11-16: Separata: Isabelle tactics for Separation Algebra
Authors: Zhe Hou, David Sanan, Alwen Tiu, Rajeev Gore and Ranald Clouston
2016-11-12: Formalization of Nested Multisets, Hereditary Multisets, and Syntactic Ordinals
Authors: Jasmin Christian Blanchette, Mathias Fleury and Dmitriy Traytel
2016-11-12: Formalization of Knuth–Bendix Orders for Lambda-Free Higher-Order Terms
Authors: Heiko Becker, Jasmin Christian Blanchette, Uwe Waldmann and Daniel Wand
2016-11-10: Expressiveness of Deep Learning
Author: Alexander Bentkamp
2016-10-25: Modal Logics for Nominal Transition Systems
Authors: Tjark Weber, Lars-Henrik Eriksson, Joachim Parrow, Johannes Borgström and Ramunas Gutkovas
2016-10-24: Stable Matching
Author: Peter Gammie
2016-10-21: LOFT — Verified Migration of Linux Firewalls to SDN
Authors: Julius Michaelis and Cornelius Diekmann
2016-10-19: Source Coding Theorem
Authors: Quentin Hibon and Lawrence C. Paulson
2016-10-19: A formal model for the SPARCv8 ISA and a proof of non-interference for the LEON3 processor
Authors: Zhe Hou, David Sanan, Alwen Tiu and Yang Liu
2016-10-14: The Factorization Algorithm of Berlekamp and Zassenhaus
Authors: Jose Divasón, Sebastiaan Joosten, René Thiemann and Akihisa Yamada
2016-10-11: Intersecting Chords Theorem
Author: Lukas Bulwahn
2016-10-05: Lp spaces
Author: Sebastien Gouezel
2016-09-30: Fisher–Yates shuffle
Author: Manuel Eberl
2016-09-29: Allen's Interval Calculus
Author: Fadoua Ghourabi
2016-09-23: Formalization of Recursive Path Orders for Lambda-Free Higher-Order Terms
Authors: Jasmin Christian Blanchette, Uwe Waldmann and Daniel Wand
2016-09-09: Iptables Semantics
Authors: Cornelius Diekmann and Lars Hupel
2016-09-06: A Variant of the Superposition Calculus
Author: Nicolas Peltier
2016-09-06: Stone Algebras
Author: Walter Guttmann
2016-09-01: Stirling's formula
Author: Manuel Eberl
2016-08-31: Routing
Authors: Julius Michaelis and Cornelius Diekmann
2016-08-24: Simple Firewall
Authors: Cornelius Diekmann, Julius Michaelis and Maximilian Haslbeck
2016-08-18: Infeasible Paths Elimination by Symbolic Execution Techniques: Proof of Correctness and Preservation of Paths
Authors: Romain Aissat, Frederic Voisin and Burkhart Wolff
2016-08-12: Formalizing the Edmonds-Karp Algorithm
Authors: Peter Lammich and S. Reza Sefidgar
2016-08-08: The Imperative Refinement Framework
Author: Peter Lammich
2016-08-07: Ptolemy's Theorem
Author: Lukas Bulwahn
2016-07-17: Surprise Paradox
Author: Joachim Breitner
2016-07-14: Pairing Heap
Authors: Hauke Brinkop and Tobias Nipkow
2016-07-05: A Framework for Verifying Depth-First Search Algorithms
Authors: Peter Lammich and René Neumann
2016-07-01: Chamber Complexes, Coxeter Systems, and Buildings
Author: Jeremy Sylvestre
2016-06-30: The Z Property
Authors: Bertram Felgenhauer, Julian Nagele, Vincent van Oostrom and Christian Sternagel
2016-06-30: The Resolution Calculus for First-Order Logic
Author: Anders Schlichtkrull
2016-06-28: IP Addresses
Authors: Cornelius Diekmann, Julius Michaelis and Lars Hupel
2016-06-28: Compositional Security-Preserving Refinement for Concurrent Imperative Programs
Authors: Toby Murray, Robert Sison, Edward Pierzchalski and Christine Rizkallah
2016-06-26: Category Theory with Adjunctions and Limits
Author: Eugene W. Stark
2016-06-26: Cardinality of Multisets
Author: Lukas Bulwahn
2016-06-25: A Dependent Security Type System for Concurrent Imperative Programs
Authors: Toby Murray, Robert Sison, Edward Pierzchalski and Christine Rizkallah
2016-06-21: Catalan Numbers
Author: Manuel Eberl
2016-06-18: Program Construction and Verification Components Based on Kleene Algebra
Authors: Victor B. F. Gomes and Georg Struth
2016-06-13: Conservation of CSP Noninterference Security under Concurrent Composition
Author: Pasquale Noce
2016-06-09: Finite Machine Word Library
Authors: Joel Beeren, Matthew Fernandez, Xin Gao, Gerwin Klein, Rafal Kolanski, Japheth Lim, Corey Lewis, Daniel Matichuk and Thomas Sewell
2016-05-31: Tree Decomposition
Author: Christoph Dittmann
2016-05-24: POSIX Lexing with Derivatives of Regular Expressions
Authors: Fahad Ausaf, Roy Dyckhoff and Christian Urban
2016-05-24: Cardinality of Equivalence Relations
Author: Lukas Bulwahn
2016-05-20: Perron-Frobenius Theorem for Spectral Radius Analysis
Authors: Jose Divasón, Ondřej Kunčar, René Thiemann and Akihisa Yamada
2016-05-20: The meta theory of the Incredible Proof Machine
Authors: Joachim Breitner and Denis Lohner
2016-05-18: A Constructive Proof for FLP
Authors: Benjamin Bisping, Paul-David Brodmann, Tim Jungnickel, Christina Rickmann, Henning Seidler, Anke Stüber, Arno Wilhelm-Weidner, Kirstin Peters and Uwe Nestmann
2016-05-09: A Formal Proof of the Max-Flow Min-Cut Theorem for Countable Networks
Author: Andreas Lochbihler
2016-05-05: Randomised Social Choice Theory
Author: Manuel Eberl
2016-05-04: The Incompatibility of SD-Efficiency and SD-Strategy-Proofness
Author: Manuel Eberl
2016-05-04: Spivey's Generalized Recurrence for Bell Numbers
Author: Lukas Bulwahn
2016-05-02: Gröbner Bases Theory
Authors: Fabian Immler and Alexander Maletzky
2016-04-28: No Faster-Than-Light Observers
Authors: Mike Stannett and István Németi
2016-04-27: Algorithms for Reduced Ordered Binary Decision Diagrams
Authors: Julius Michaelis, Maximilian Haslbeck, Peter Lammich and Lars Hupel
2016-04-27: A formalisation of the Cocke-Younger-Kasami algorithm
Author: Maksym Bortin
2016-04-26: Conservation of CSP Noninterference Security under Sequential Composition
Author: Pasquale Noce
2016-04-12: Kleene Algebras with Domain
Authors: Victor B. F. Gomes, Walter Guttmann, Peter Höfner, Georg Struth and Tjark Weber
2016-03-11: Propositional Resolution and Prime Implicates Generation
Author: Nicolas Peltier
2016-03-08: Timed Automata
Author: Simon Wimmer
2016-03-08: The Cartan Fixed Point Theorems
Author: Lawrence C. Paulson
2016-03-01: Linear Temporal Logic
Author: Salomon Sickert
2016-02-17: Analysis of List Update Algorithms
Authors: Maximilian P.L. Haslbeck and Tobias Nipkow
2016-02-05: Verified Construction of Static Single Assignment Form
Authors: Sebastian Ullrich and Denis Lohner
2016-01-29: Polynomial Interpolation
Authors: René Thiemann and Akihisa Yamada
2016-01-29: Polynomial Factorization
Authors: René Thiemann and Akihisa Yamada
2016-01-20: Knot Theory
Author: T.V.H. Prathamesh
2016-01-18: Tensor Product of Matrices
Author: T.V.H. Prathamesh
2016-01-14: Cardinality of Number Partitions
Author: Lukas Bulwahn

 

2015
2015-12-28: Basic Geometric Properties of Triangles
Author: Manuel Eberl
2015-12-28: The Divergence of the Prime Harmonic Series
Author: Manuel Eberl
2015-12-28: Liouville numbers
Author: Manuel Eberl
2015-12-28: Descartes' Rule of Signs
Author: Manuel Eberl
2015-12-22: The Stern-Brocot Tree
Authors: Peter Gammie and Andreas Lochbihler
2015-12-22: Applicative Lifting
Authors: Andreas Lochbihler and Joshua Schneider
2015-12-22: Algebraic Numbers in Isabelle/HOL
Authors: René Thiemann, Akihisa Yamada and Sebastiaan Joosten
2015-12-12: Cardinality of Set Partitions
Author: Lukas Bulwahn
2015-12-02: Latin Square
Author: Alexander Bentkamp
2015-12-01: Ergodic Theory
Author: Sebastien Gouezel
2015-11-19: Euler's Partition Theorem
Author: Lukas Bulwahn
2015-11-18: The Tortoise and Hare Algorithm
Author: Peter Gammie
2015-11-11: Planarity Certificates
Author: Lars Noschinski
2015-11-02: Positional Determinacy of Parity Games
Author: Christoph Dittmann
2015-09-16: A Meta-Model for the Isabelle API
Authors: Frédéric Tuong and Burkhart Wolff
2015-09-04: Converting Linear Temporal Logic to Deterministic (Generalized) Rabin Automata
Author: Salomon Sickert
2015-08-21: Matrices, Jordan Normal Forms, and Spectral Radius Theory
Authors: René Thiemann and Akihisa Yamada
2015-08-20: Decreasing Diagrams II
Author: Bertram Felgenhauer
2015-08-18: The Inductive Unwinding Theorem for CSP Noninterference Security
Author: Pasquale Noce
2015-08-12: Representations of Finite Groups
Author: Jeremy Sylvestre
2015-08-10: Analysing and Comparing Encodability Criteria for Process Calculi
Authors: Kirstin Peters and Rob van Glabbeek
2015-07-21: Generating Cases from Labeled Subgoals
Author: Lars Noschinski
2015-07-14: Landau Symbols
Author: Manuel Eberl
2015-07-14: The Akra-Bazzi theorem and the Master theorem
Author: Manuel Eberl
2015-07-07: Hermite Normal Form
Authors: Jose Divasón and Jesús Aransay
2015-06-27: Derangements Formula
Author: Lukas Bulwahn
2015-06-11: The Ipurge Unwinding Theorem for CSP Noninterference Security
Author: Pasquale Noce
2015-06-11: The Generic Unwinding Theorem for CSP Noninterference Security
Author: Pasquale Noce
2015-06-11: Binary Multirelations
Authors: Hitoshi Furusawa and Georg Struth
2015-06-11: Reasoning about Lists via List Interleaving
Author: Pasquale Noce
2015-06-07: Parameterized Dynamic Tables
Author: Tobias Nipkow
2015-05-28: Derivatives of Logical Formulas
Author: Dmitriy Traytel
2015-05-27: A Zoo of Probabilistic Systems
Authors: Johannes Hölzl, Andreas Lochbihler and Dmitriy Traytel
2015-04-30: VCG - Combinatorial Vickrey-Clarke-Groves Auctions
Authors: Marco B. Caminati, Manfred Kerber, Christoph Lange and Colin Rowat
2015-04-15: Residuated Lattices
Authors: Victor B. F. Gomes and Georg Struth
2015-04-13: Concurrent IMP
Author: Peter Gammie
2015-04-13: Relaxing Safely: Verified On-the-Fly Garbage Collection for x86-TSO
Authors: Peter Gammie, Tony Hosking and Kai Engelhardt
2015-03-30: Trie
Authors: Andreas Lochbihler and Tobias Nipkow
2015-03-18: Consensus Refined
Authors: Ognjen Maric and Christoph Sprenger
2015-03-11: Deriving class instances for datatypes
Authors: Christian Sternagel and René Thiemann
2015-02-20: The Safety of Call Arity
Author: Joachim Breitner
2015-02-12: QR Decomposition
Authors: Jose Divasón and Jesús Aransay
2015-02-12: Echelon Form
Authors: Jose Divasón and Jesús Aransay
2015-02-05: Finite Automata in Hereditarily Finite Set Theory
Author: Lawrence C. Paulson
2015-01-28: Verification of the UpDown Scheme
Author: Johannes Hölzl

 

2014
2014-11-28: The Unified Policy Framework (UPF)
Authors: Achim D. Brucker, Lukas Brügger and Burkhart Wolff
2014-10-23: Loop freedom of the (untimed) AODV routing protocol
Authors: Timothy Bourke and Peter Höfner
2014-10-13: Lifting Definition Option
Author: René Thiemann
2014-10-10: Stream Fusion in HOL with Code Generation
Authors: Andreas Lochbihler and Alexandra Maximova
2014-10-09: A Verified Compiler for Probability Density Functions
Authors: Manuel Eberl, Johannes Hölzl and Tobias Nipkow
2014-10-08: Formalization of Refinement Calculus for Reactive Systems
Author: Viorel Preoteasa
2014-10-03: XML
Authors: Christian Sternagel and René Thiemann
2014-10-03: Certification Monads
Authors: Christian Sternagel and René Thiemann
2014-09-25: Imperative Insertion Sort
Author: Christian Sternagel
2014-09-19: The Sturm-Tarski Theorem
Author: Wenda Li
2014-09-15: The Cayley-Hamilton Theorem
Authors: Stephan Adelsberger, Stefan Hetzl and Florian Pollak
2014-09-09: The Jordan-Hölder Theorem
Author: Jakob von Raumer
2014-09-04: Priority Queues Based on Braun Trees
Author: Tobias Nipkow
2014-09-03: Gauss-Jordan Algorithm and Its Applications
Authors: Jose Divasón and Jesús Aransay
2014-08-29: Vector Spaces
Author: Holden Lee
2014-08-29: Real-Valued Special Functions: Upper and Lower Bounds
Author: Lawrence C. Paulson
2014-08-13: Skew Heap
Author: Tobias Nipkow
2014-08-12: Splay Tree
Author: Tobias Nipkow
2014-07-29: Haskell's Show Class in Isabelle/HOL
Authors: Christian Sternagel and René Thiemann
2014-07-18: Formal Specification of a Generic Separation Kernel
Authors: Freek Verbeek, Sergey Tverdyshev, Oto Havle, Holger Blasum, Bruno Langenstein, Werner Stephan, Yakoub Nemouchi, Abderrahmane Feliachi, Burkhart Wolff and Julien Schmaltz
2014-07-13: pGCL for Isabelle
Author: David Cock
2014-07-07: Amortized Complexity Verified
Author: Tobias Nipkow
2014-07-04: Network Security Policy Verification
Author: Cornelius Diekmann
2014-07-03: Pop-Refinement
Author: Alessandro Coglio
2014-06-12: Decision Procedures for MSO on Words Based on Derivatives of Regular Expressions
Authors: Dmitriy Traytel and Tobias Nipkow
2014-06-08: Boolean Expression Checkers
Author: Tobias Nipkow
2014-05-28: Promela Formalization
Author: René Neumann
2014-05-28: Converting Linear-Time Temporal Logic to Generalized Büchi Automata
Authors: Alexander Schimpf and Peter Lammich
2014-05-28: Verified Efficient Implementation of Gabow's Strongly Connected Components Algorithm
Author: Peter Lammich
2014-05-28: A Fully Verified Executable LTL Model Checker
Authors: Javier Esparza, Peter Lammich, René Neumann, Tobias Nipkow, Alexander Schimpf and Jan-Georg Smaus
2014-05-28: The CAVA Automata Library
Author: Peter Lammich
2014-05-23: Transitive closure according to Roy-Floyd-Warshall
Author: Makarius Wenzel
2014-05-23: Noninterference Security in Communicating Sequential Processes
Author: Pasquale Noce
2014-05-21: Regular Algebras
Authors: Simon Foster and Georg Struth
2014-04-28: Formalisation and Analysis of Component Dependencies
Author: Maria Spichkova
2014-04-23: A Formalization of Declassification with WHAT-and-WHERE-Security
Authors: Sylvia Grewe, Alexander Lux, Heiko Mantel and Jens Sauer
2014-04-23: A Formalization of Strong Security
Authors: Sylvia Grewe, Alexander Lux, Heiko Mantel and Jens Sauer
2014-04-23: A Formalization of Assumptions and Guarantees for Compositional Noninterference
Authors: Sylvia Grewe, Heiko Mantel and Daniel Schoepe
2014-04-22: Bounded-Deducibility Security
Authors: Andrei Popescu and Peter Lammich
2014-04-16: A shallow embedding of HyperCTL*
Authors: Markus N. Rabe, Peter Lammich and Andrei Popescu
2014-04-16: Abstract Completeness
Authors: Jasmin Christian Blanchette, Andrei Popescu and Dmitriy Traytel
2014-04-13: Discrete Summation
Author: Florian Haftmann
2014-04-03: Syntax and semantics of a GPU kernel programming language
Author: John Wickerson
2014-03-11: Probabilistic Noninterference
Authors: Andrei Popescu and Johannes Hölzl
2014-03-08: Mechanization of the Algebra for Wireless Networks (AWN)
Author: Timothy Bourke
2014-02-18: Mutually Recursive Partial Functions
Author: René Thiemann
2014-02-13: Properties of Random Graphs -- Subgraph Containment
Author: Lars Hupel
2014-02-11: Verification of Selection and Heap Sort Using Locales
Author: Danijela Petrovic
2014-02-07: Affine Arithmetic
Author: Fabian Immler
2014-02-06: Implementing field extensions of the form Q[sqrt(b)]
Author: René Thiemann
2014-01-30: Unified Decision Procedures for Regular Expression Equivalence
Authors: Tobias Nipkow and Dmitriy Traytel
2014-01-28: Secondary Sylow Theorems
Author: Jakob von Raumer
2014-01-25: Relation Algebra
Authors: Alasdair Armstrong, Simon Foster, Georg Struth and Tjark Weber
2014-01-23: Kleene Algebra with Tests and Demonic Refinement Algebras
Authors: Alasdair Armstrong, Victor B. F. Gomes and Georg Struth
2014-01-16: Featherweight OCL: A Proposal for a Machine-Checked Formal Semantics for OCL 2.5
Authors: Achim D. Brucker, Frédéric Tuong and Burkhart Wolff
2014-01-11: Sturm's Theorem
Author: Manuel Eberl
2014-01-11: Compositional Properties of Crypto-Based Components
Author: Maria Spichkova

 

2013
2013-12-01: A General Method for the Proof of Theorems on Tail-recursive Functions
Author: Pasquale Noce
2013-11-17: Gödel's Incompleteness Theorems
Author: Lawrence C. Paulson
2013-11-17: The Hereditarily Finite Sets
Author: Lawrence C. Paulson
2013-11-15: A Codatatype of Formal Languages
Author: Dmitriy Traytel
2013-11-14: Stream Processing Components: Isabelle/HOL Formalisation and Case Studies
Author: Maria Spichkova
2013-11-12: Gödel's God in Isabelle/HOL
Authors: Christoph Benzmüller and Bruno Woltzenlogel Paleo
2013-11-01: Decreasing Diagrams
Author: Harald Zankl
2013-10-02: Automatic Data Refinement
Author: Peter Lammich
2013-09-17: Native Word
Author: Andreas Lochbihler
2013-07-27: A Formal Model of IEEE Floating Point Arithmetic
Author: Lei Yu
2013-07-22: Pratt's Primality Certificates
Authors: Simon Wimmer and Lars Noschinski
2013-07-22: Lehmer's Theorem
Authors: Simon Wimmer and Lars Noschinski
2013-07-19: The Königsberg Bridge Problem and the Friendship Theorem
Author: Wenda Li
2013-06-27: Sound and Complete Sort Encodings for First-Order Logic
Authors: Jasmin Christian Blanchette and Andrei Popescu
2013-05-22: An Axiomatic Characterization of the Single-Source Shortest Path Problem
Author: Christine Rizkallah
2013-04-28: Graph Theory
Author: Lars Noschinski
2013-04-15: Light-weight Containers
Author: Andreas Lochbihler
2013-02-21: Nominal 2
Authors: Christian Urban, Stefan Berghofer and Cezary Kaliszyk
2013-01-31: The Correctness of Launchbury's Natural Semantics for Lazy Evaluation
Author: Joachim Breitner
2013-01-19: Ribbon Proofs
Author: John Wickerson
2013-01-16: Rank-Nullity Theorem in Linear Algebra
Authors: Jose Divasón and Jesús Aransay
2013-01-15: Kleene Algebra
Authors: Alasdair Armstrong, Georg Struth and Tjark Weber
2013-01-03: Computing N-th Roots using the Babylonian Method
Author: René Thiemann

 

2012
2012-11-14: A Separation Logic Framework for Imperative HOL
Authors: Peter Lammich and Rene Meis
2012-11-02: Open Induction
Authors: Mizuhito Ogawa and Christian Sternagel
2012-10-30: The independence of Tarski's Euclidean axiom
Author: T. J. M. Makarios
2012-10-27: Bondy's Theorem
Authors: Jeremy Avigad and Stefan Hetzl
2012-09-10: Possibilistic Noninterference
Authors: Andrei Popescu and Johannes Hölzl
2012-08-07: Generating linear orders for datatypes
Author: René Thiemann
2012-08-05: Proving the Impossibility of Trisecting an Angle and Doubling the Cube
Authors: Ralph Romanos and Lawrence C. Paulson
2012-07-27: Verifying Fault-Tolerant Distributed Algorithms in the Heard-Of Model
Authors: Henri Debrat and Stephan Merz
2012-07-01: Logical Relations for PCF
Author: Peter Gammie
2012-06-26: Type Constructor Classes and Monad Transformers
Author: Brian Huffman
2012-05-29: Psi-calculi in Isabelle
Author: Jesper Bengtson
2012-05-29: The pi-calculus in nominal logic
Author: Jesper Bengtson
2012-05-29: CCS in nominal logic
Author: Jesper Bengtson
2012-05-27: Isabelle/Circus
Authors: Abderrahmane Feliachi, Burkhart Wolff and Marie-Claude Gaudel
2012-05-11: Separation Algebra
Authors: Gerwin Klein, Rafal Kolanski and Andrew Boyton
2012-05-07: Stuttering Equivalence
Author: Stephan Merz
2012-05-02: Inductive Study of Confidentiality
Author: Giampaolo Bella
2012-04-26: Ordinary Differential Equations
Authors: Fabian Immler and Johannes Hölzl
2012-04-13: Well-Quasi-Orders
Author: Christian Sternagel
2012-03-01: Abortable Linearizable Modules
Authors: Rachid Guerraoui, Viktor Kuncak and Giuliano Losa
2012-02-29: Executable Transitive Closures
Author: René Thiemann
2012-02-06: A Probabilistic Proof of the Girth-Chromatic Number Theorem
Author: Lars Noschinski
2012-01-30: Refinement for Monadic Programs
Author: Peter Lammich
2012-01-30: Dijkstra's Shortest Path Algorithm
Authors: Benedikt Nordhoff and Peter Lammich
2012-01-03: Markov Models
Authors: Johannes Hölzl and Tobias Nipkow

 

2011
2011-11-19: A Definitional Encoding of TLA* in Isabelle/HOL
Authors: Gudmund Grov and Stephan Merz
2011-11-09: Efficient Mergesort
Author: Christian Sternagel
2011-09-22: Pseudo Hoops
Authors: George Georgescu, Laurentiu Leustean and Viorel Preoteasa
2011-09-22: Algebra of Monotonic Boolean Transformers
Author: Viorel Preoteasa
2011-09-22: Lattice Properties
Author: Viorel Preoteasa
2011-08-26: The Myhill-Nerode Theorem Based on Regular Expressions
Authors: Chunhan Wu, Xingyuan Zhang and Christian Urban
2011-08-19: Gauss-Jordan Elimination for Matrices Represented as Functions
Author: Tobias Nipkow
2011-07-21: Maximum Cardinality Matching
Author: Christine Rizkallah
2011-05-17: Knowledge-based programs
Author: Peter Gammie
2011-04-01: The General Triangle Is Unique
Author: Joachim Breitner
2011-03-14: Executable Transitive Closures of Finite Relations
Authors: Christian Sternagel and René Thiemann
2011-02-23: Interval Temporal Logic on Natural Numbers
Author: David Trachtenherz
2011-02-23: Infinite Lists
Author: David Trachtenherz
2011-02-23: AutoFocus Stream Processing for Single-Clocking and Multi-Clocking Semantics
Author: David Trachtenherz
2011-02-07: Lightweight Java
Authors: Rok Strniša and Matthew Parkinson
2011-01-10: RIPEMD-160
Author: Fabian Immler
2011-01-08: Lower Semicontinuous Functions
Author: Bogdan Grechuk

 

2010
2010-12-17: Hall's Marriage Theorem
Authors: Dongchen Jiang and Tobias Nipkow
2010-11-16: Shivers' Control Flow Analysis
Author: Joachim Breitner
2010-10-28: Finger Trees
Authors: Benedikt Nordhoff, Stefan Körner and Peter Lammich
2010-10-28: Functional Binomial Queues
Author: René Neumann
2010-10-28: Binomial Heaps and Skew Binomial Heaps
Authors: Rene Meis, Finn Nielsen and Peter Lammich
2010-08-29: Strong Normalization of Moggis's Computational Metalanguage
Author: Christian Doczkal
2010-08-10: Executable Multivariate Polynomials
Authors: Christian Sternagel, René Thiemann, Alexander Maletzky, Fabian Immler, Florian Haftmann, Andreas Lochbihler and Alexander Bentkamp
2010-08-08: Formalizing Statecharts using Hierarchical Automata
Authors: Steffen Helke and Florian Kammüller
2010-06-24: Free Groups
Author: Joachim Breitner
2010-06-20: Category Theory
Author: Alexander Katovsky
2010-06-17: Executable Matrix Operations on Matrices of Arbitrary Dimensions
Authors: Christian Sternagel and René Thiemann
2010-06-14: Abstract Rewriting
Authors: Christian Sternagel and René Thiemann
2010-05-28: Verification of the Deutsch-Schorr-Waite Graph Marking Algorithm using Data Refinement
Authors: Viorel Preoteasa and Ralph-Johan Back
2010-05-28: Semantics and Data Refinement of Invariant Based Programs
Authors: Viorel Preoteasa and Ralph-Johan Back
2010-05-22: A Complete Proof of the Robbins Conjecture
Author: Matthew Wampler-Doty
2010-05-12: Regular Sets and Expressions
Authors: Alexander Krauss and Tobias Nipkow
2010-04-30: Locally Nameless Sigma Calculus
Authors: Ludovic Henrio, Florian Kammüller, Bianca Lutz and Henry Sudhof
2010-03-29: Free Boolean Algebra
Author: Brian Huffman
2010-03-23: Inter-Procedural Information Flow Noninterference via Slicing
Author: Daniel Wasserrab
2010-03-23: Information Flow Noninterference via Slicing
Author: Daniel Wasserrab
2010-02-20: List Index
Author: Tobias Nipkow
2010-02-12: Coinductive
Author: Andreas Lochbihler

 

2009
2009-12-09: A Fast SAT Solver for Isabelle in Standard ML
Author: Armin Heller
2009-12-03: Formalizing the Logic-Automaton Connection
Authors: Stefan Berghofer and Markus Reiter
2009-11-25: Tree Automata
Author: Peter Lammich
2009-11-25: Collections Framework
Author: Peter Lammich
2009-11-22: Perfect Number Theorem
Author: Mark Ijbema
2009-11-13: Backing up Slicing: Verifying the Interprocedural Two-Phase Horwitz-Reps-Binkley Slicer
Author: Daniel Wasserrab
2009-10-30: The Worker/Wrapper Transformation
Author: Peter Gammie
2009-09-01: Ordinals and Cardinals
Author: Andrei Popescu
2009-08-28: Invertibility in Sequent Calculi
Author: Peter Chapman
2009-08-04: An Example of a Cofinitary Group in Isabelle/HOL
Author: Bart Kastermans
2009-05-06: Code Generation for Functions as Data
Author: Andreas Lochbihler
2009-04-29: Stream Fusion
Author: Brian Huffman

 

2008
2008-12-12: A Bytecode Logic for JML and Types
Authors: Lennart Beringer and Martin Hofmann
2008-11-10: Secure information flow and program logics
Authors: Lennart Beringer and Martin Hofmann
2008-11-09: Some classical results in Social Choice Theory
Author: Peter Gammie
2008-11-07: Fun With Tilings
Authors: Tobias Nipkow and Lawrence C. Paulson
2008-10-15: The Textbook Proof of Huffman's Algorithm
Author: Jasmin Christian Blanchette
2008-09-16: Towards Certified Slicing
Author: Daniel Wasserrab
2008-09-02: A Correctness Proof for the Volpano/Smith Security Typing System
Authors: Gregor Snelting and Daniel Wasserrab
2008-09-01: Arrow and Gibbard-Satterthwaite
Author: Tobias Nipkow
2008-08-26: Fun With Functions
Author: Tobias Nipkow
2008-07-23: Formal Verification of Modern SAT Solvers
Author: Filip Marić
2008-04-05: Recursion Theory I
Author: Michael Nedzelsky
2008-02-29: A Sequential Imperative Programming Language Syntax, Semantics, Hoare Logics and Verification Environment
Author: Norbert Schirmer
2008-02-29: BDD Normalisation
Authors: Veronika Ortner and Norbert Schirmer
2008-02-18: Normalization by Evaluation
Authors: Klaus Aehlig and Tobias Nipkow
2008-01-11: Quantifier Elimination for Linear Arithmetic
Author: Tobias Nipkow

 

2007
2007-12-14: Formalization of Conflict Analysis of Programs with Procedures, Thread Creation, and Monitors
Authors: Peter Lammich and Markus Müller-Olm
2007-12-03: Jinja with Threads
Author: Andreas Lochbihler
2007-11-06: Much Ado About Two
Author: Sascha Böhme
2007-08-12: Sums of Two and Four Squares
Author: Roelof Oosterhuis
2007-08-12: Fermat's Last Theorem for Exponents 3 and 4 and the Parametrisation of Pythagorean Triples
Author: Roelof Oosterhuis
2007-08-08: Fundamental Properties of Valuation Theory and Hensel's Lemma
Author: Hidetsune Kobayashi
2007-08-02: POPLmark Challenge Via de Bruijn Indices
Author: Stefan Berghofer
2007-08-02: First-Order Logic According to Fitting
Author: Stefan Berghofer

 

2006
2006-09-09: Hotel Key Card System
Author: Tobias Nipkow
2006-08-08: Abstract Hoare Logics
Author: Tobias Nipkow
2006-05-22: Flyspeck I: Tame Graphs
Authors: Gertrud Bauer and Tobias Nipkow
2006-05-15: CoreC++
Author: Daniel Wasserrab
2006-03-31: A Theory of Featherweight Java in Isabelle/HOL
Authors: J. Nathan Foster and Dimitrios Vytiniotis
2006-03-15: Instances of Schneider's generalized protocol of clock synchronization
Author: Damián Barsotti
2006-03-14: Cauchy's Mean Theorem and the Cauchy-Schwarz Inequality
Author: Benjamin Porter

 

2005
2005-11-11: Countable Ordinals
Author: Brian Huffman
2005-10-12: Fast Fourier Transform
Author: Clemens Ballarin
2005-06-24: Formalization of a Generalized Protocol for Clock Synchronization
Author: Alwen Tiu
2005-06-22: Proving the Correctness of Disk Paxos
Authors: Mauro Jaskelioff and Stephan Merz
2005-06-20: Jive Data and Store Model
Authors: Nicole Rauch and Norbert Schirmer
2005-06-01: Jinja is not Java
Authors: Gerwin Klein and Tobias Nipkow
2005-05-02: SHA1, RSA, PSS and more
Authors: Christina Lindenberg and Kai Wirt
2005-04-21: Category Theory to Yoneda's Lemma
Author: Greg O'Keefe

 

2004
2004-12-09: File Refinement
Authors: Karen Zee and Viktor Kuncak
2004-11-19: Integration theory and random variables
Author: Stefan Richter
2004-09-28: A Mechanically Verified, Efficient, Sound and Complete Theorem Prover For First Order Logic
Author: Tom Ridge
2004-09-20: Ramsey's theorem, infinitary version
Author: Tom Ridge
2004-09-20: Completeness theorem
Authors: James Margetson and Tom Ridge
2004-07-09: Compiling Exceptions Correctly
Author: Tobias Nipkow
2004-06-24: Depth First Search
Authors: Toshiaki Nishihara and Yasuhiko Minamide
2004-05-18: Groups, Rings and Modules
Authors: Hidetsune Kobayashi, L. Chen and H. Murao
2004-04-26: Topology
Author: Stefan Friedrich
2004-04-26: Lazy Lists II
Author: Stefan Friedrich
2004-04-05: Binary Search Trees
Author: Viktor Kuncak
2004-03-30: Functional Automata
Author: Tobias Nipkow
2004-03-19: Mini ML
Authors: Wolfgang Naraschewski and Tobias Nipkow
2004-03-19: AVL Trees
Authors: Tobias Nipkow and Cornelia Pusch
\ No newline at end of file diff --git a/web/rss.xml b/web/rss.xml --- a/web/rss.xml +++ b/web/rss.xml @@ -1,588 +1,574 @@ Archive of Formal Proofs https://www.isa-afp.org The Archive of Formal Proofs is a collection of proof libraries, examples, and larger scientific developments, mechanically checked in the theorem prover Isabelle. 13 May 2020 00:00:00 +0000 A Formalization of Knuth–Bendix Orders https://www.isa-afp.org/entries/Knuth_Bendix_Order.html https://www.isa-afp.org/entries/Knuth_Bendix_Order.html Christian Sternagel, René Thiemann 13 May 2020 00:00:00 +0000 We define a generalized version of Knuth&ndash;Bendix orders, including subterm coefficient functions. For these orders we formalize several properties such as strong normalization, the subterm property, closure properties under substitutions and contexts, as well as ground totality. + Irrationality Criteria for Series by Erdős and Straus + https://www.isa-afp.org/entries/Irrational_Series_Erdos_Straus.html + https://www.isa-afp.org/entries/Irrational_Series_Erdos_Straus.html + Angeliki Koutsoukou-Argyraki, Wenda Li + 12 May 2020 00:00:00 +0000 + +We formalise certain irrationality criteria for infinite series of the form: +\[\sum_{n=1}^\infty \frac{b_n}{\prod_{i=1}^n a_i} \] +where $\{b_n\}$ is a sequence of integers and $\{a_n\}$ a sequence of positive integers +with $a_n >1$ for all large n. The results are due to P. Erdős and E. G. Straus +<a href="https://projecteuclid.org/euclid.pjm/1102911140">[1]</a>. +In particular, we formalise Theorem 2.1, Corollary 2.10 and Theorem 3.1. +The latter is an application of Theorem 2.1 involving the prime numbers. + + + Recursion Theorem in ZF + https://www.isa-afp.org/entries/Recursion-Addition.html + https://www.isa-afp.org/entries/Recursion-Addition.html + Georgy Dunaev + 11 May 2020 00:00:00 +0000 + +This document contains a proof of the recursion theorem. This is a +mechanization of the proof of the recursion theorem from the text <i>Introduction to +Set Theory</i>, by Karel Hrbacek and Thomas Jech. This +implementation may be used as the basis for a model of Peano arithmetic in +ZF. While recursion and the natural numbers are already available in Isabelle/ZF, this clean development +is much easier to follow. + + + An Efficient Normalisation Procedure for Linear Temporal Logic: Isabelle/HOL Formalisation + https://www.isa-afp.org/entries/LTL_Normal_Form.html + https://www.isa-afp.org/entries/LTL_Normal_Form.html + Salomon Sickert + 08 May 2020 00:00:00 +0000 + +In the mid 80s, Lichtenstein, Pnueli, and Zuck proved a classical +theorem stating that every formula of Past LTL (the extension of LTL +with past operators) is equivalent to a formula of the form +$\bigwedge_{i=1}^n \mathbf{G}\mathbf{F} \varphi_i \vee +\mathbf{F}\mathbf{G} \psi_i$, where $\varphi_i$ and $\psi_i$ contain +only past operators. Some years later, Chang, Manna, and Pnueli built +on this result to derive a similar normal form for LTL. Both +normalisation procedures have a non-elementary worst-case blow-up, and +follow an involved path from formulas to counter-free automata to +star-free regular expressions and back to formulas. We improve on both +points. We present an executable formalisation of a direct and purely +syntactic normalisation procedure for LTL yielding a normal form, +comparable to the one by Chang, Manna, and Pnueli, that has only a +single exponential blow-up. 
+ + + Formalization of Forcing in Isabelle/ZF + https://www.isa-afp.org/entries/Forcing.html + https://www.isa-afp.org/entries/Forcing.html + Emmanuel Gunther, Miguel Pagano, Pedro Sánchez Terraf + 06 May 2020 00:00:00 +0000 + +We formalize the theory of forcing in the set theory framework of +Isabelle/ZF. Under the assumption of the existence of a countable +transitive model of ZFC, we construct a proper generic extension and +show that the latter also satisfies ZFC. + + + Banach-Steinhaus Theorem + https://www.isa-afp.org/entries/Banach_Steinhaus.html + https://www.isa-afp.org/entries/Banach_Steinhaus.html + Dominique Unruh, Jose Manuel Rodriguez Caballero + 02 May 2020 00:00:00 +0000 + +We formalize in Isabelle/HOL a result +due to S. Banach and H. Steinhaus known as +the Banach-Steinhaus theorem or Uniform boundedness principle: a +pointwise-bounded family of continuous linear operators from a Banach +space to a normed space is uniformly bounded. Our approach is an +adaptation to Isabelle/HOL of a proof due to A. Sokal. + + Attack Trees in Isabelle for GDPR compliance of IoT healthcare systems https://www.isa-afp.org/entries/Attack_Trees.html https://www.isa-afp.org/entries/Attack_Trees.html Florian Kammueller 27 Apr 2020 00:00:00 +0000 In this article, we present a proof theory for Attack Trees. Attack Trees are a well established and useful model for the construction of attacks on systems since they allow a stepwise exploration of high level attacks in application scenarios. Using the expressiveness of Higher Order Logic in Isabelle, we develop a generic theory of Attack Trees with a state-based semantics based on Kripke structures and CTL. The resulting framework allows mechanically supported logic analysis of the meta-theory of the proof calculus of Attack Trees and at the same time the developed proof theory enables application to case studies. A central correctness and completeness result proved in Isabelle establishes a connection between the notion of Attack Tree validity and CTL. The application is illustrated on the example of a healthcare IoT system and GDPR compliance verification. + Power Sum Polynomials + https://www.isa-afp.org/entries/Power_Sum_Polynomials.html + https://www.isa-afp.org/entries/Power_Sum_Polynomials.html + Manuel Eberl + 24 Apr 2020 00:00:00 +0000 + +<p>This article provides a formalisation of the symmetric +multivariate polynomials known as <em>power sum +polynomials</em>. These are of the form +p<sub>n</sub>(<em>X</em><sub>1</sub>,&hellip;, +<em>X</em><sub><em>k</em></sub>) = +<em>X</em><sub>1</sub><sup>n</sup> ++ &hellip; + +X<sub><em>k</em></sub><sup>n</sup>. +A formal proof of the Girard–Newton Theorem is also given. This +theorem relates the power sum polynomials to the elementary symmetric +polynomials s<sub><em>k</em></sub> in the form +of a recurrence relation +(-1)<sup><em>k</em></sup> +<em>k</em> s<sub><em>k</em></sub> += +&sum;<sub>i&isinv;[0,<em>k</em>)</sub> +(-1)<sup>i</sup> s<sub>i</sub> +p<sub><em>k</em>-<em>i</em></sub>&thinsp;.</p> +<p>As an application, this is then used to solve a generalised +form of a puzzle given as an exercise in Dummit and Foote's +<em>Abstract Algebra</em>: For <em>k</em> +complex unknowns <em>x</em><sub>1</sub>, +&hellip;, +<em>x</em><sub><em>k</em></sub>, +define p<sub><em>j</em></sub> := +<em>x</em><sub>1</sub><sup><em>j</em></sup> ++ &hellip; + +<em>x</em><sub><em>k</em></sub><sup><em>j</em></sup>. 
+Then for each vector <em>a</em> &isinv; +&#x2102;<sup><em>k</em></sup>, show that +there is exactly one solution to the system p<sub>1</sub> += a<sub>1</sub>, &hellip;, +p<sub><em>k</em></sub> = +a<sub><em>k</em></sub> up to permutation of +the +<em>x</em><sub><em>i</em></sub> +and determine the value of +p<sub><em>i</em></sub> for +i&gt;k.</p> + + + The Lambert W Function on the Reals + https://www.isa-afp.org/entries/Lambert_W.html + https://www.isa-afp.org/entries/Lambert_W.html + Manuel Eberl + 24 Apr 2020 00:00:00 +0000 + +<p>The Lambert <em>W</em> function is a multi-valued +function defined as the inverse function of <em>x</em> +&#x21A6; <em>x</em> +e<sup><em>x</em></sup>. Besides numerous +applications in combinatorics, physics, and engineering, it also +frequently occurs when solving equations containing both +e<sup><em>x</em></sup> and +<em>x</em>, or both <em>x</em> and log +<em>x</em>.</p> <p>This article provides a +definition of the two real-valued branches +<em>W</em><sub>0</sub>(<em>x</em>) +and +<em>W</em><sub>-1</sub>(<em>x</em>) +and proves various properties such as basic identities and +inequalities, monotonicity, differentiability, asymptotic expansions, +and the MacLaurin series of +<em>W</em><sub>0</sub>(<em>x</em>) +at <em>x</em> = 0.</p> + + + Gaussian Integers + https://www.isa-afp.org/entries/Gaussian_Integers.html + https://www.isa-afp.org/entries/Gaussian_Integers.html + Manuel Eberl + 24 Apr 2020 00:00:00 +0000 + +<p>The Gaussian integers are the subring &#8484;[i] of the +complex numbers, i. e. the ring of all complex numbers with integral +real and imaginary part. This article provides a definition of this +ring as well as proofs of various basic properties, such as that they +form a Euclidean ring and a full classification of their primes. An +executable (albeit not very efficient) factorisation algorithm is also +provided.</p> <p>Lastly, this Gaussian integer +formalisation is used in two short applications:</p> <ol> +<li> The characterisation of all positive integers that can be +written as sums of two squares</li> <li> Euclid's +formula for primitive Pythagorean triples</li> </ol> +<p>While elementary proofs for both of these are already +available in the AFP, the theory of Gaussian integers provides more +concise proofs and a more high-level view.</p> + + + Matrices for ODEs + https://www.isa-afp.org/entries/Matrices_for_ODEs.html + https://www.isa-afp.org/entries/Matrices_for_ODEs.html + Jonathan Julian Huerta y Munive + 19 Apr 2020 00:00:00 +0000 + +Our theories formalise various matrix properties that serve to +establish existence, uniqueness and characterisation of the solution +to affine systems of ordinary differential equations (ODEs). In +particular, we formalise the operator and maximum norm of matrices. +Then we use them to prove that square matrices form a Banach space, +and in this setting, we show an instance of Picard-Lindelöf’s +theorem for affine systems of ODEs. Finally, we use this formalisation +to verify three simple hybrid programs. + + Authenticated Data Structures As Functors https://www.isa-afp.org/entries/ADS_Functor.html https://www.isa-afp.org/entries/ADS_Functor.html Andreas Lochbihler, Ognjen Marić 16 Apr 2020 00:00:00 +0000 Authenticated data structures allow several systems to convince each other that they are referring to the same data structure, even if each of them knows only a part of the data structure. 
Using inclusion proofs, knowledgeable systems can selectively share their knowledge with other systems and the latter can verify the authenticity of what is being shared. In this article, we show how to modularly define authenticated data structures, their inclusion proofs, and operations thereon as datatypes in Isabelle/HOL, using a shallow embedding. Modularity allows us to construct complicated trees from reusable building blocks, which we call Merkle functors. Merkle functors include sums, products, and function spaces and are closed under composition and least fixpoints. As a practical application, we model the hierarchical transactions of <a href="https://www.canton.io">Canton</a>, a practical interoperability protocol for distributed ledgers, as authenticated data structures. This is a first step towards formalizing the Canton protocol and verifying its integrity and security guarantees. Formalization of an Algorithm for Greedily Computing Associative Aggregations on Sliding Windows https://www.isa-afp.org/entries/Sliding_Window_Algorithm.html https://www.isa-afp.org/entries/Sliding_Window_Algorithm.html Lukas Heimes, Dmitriy Traytel, Joshua Schneider 10 Apr 2020 00:00:00 +0000 Basin et al.'s <a href="https://doi.org/10.1016/j.ipl.2014.09.009">sliding window algorithm (SWA)</a> is an algorithm for combining the elements of subsequences of a sequence with an associative operator. It is greedy and minimizes the number of operator applications. We formalize the algorithm and verify its functional correctness. We extend the algorithm with additional operations and provide an alternative interface to the slide operation that does not require the entire input sequence. A Comprehensive Framework for Saturation Theorem Proving https://www.isa-afp.org/entries/Saturation_Framework.html https://www.isa-afp.org/entries/Saturation_Framework.html Sophie Tourret 09 Apr 2020 00:00:00 +0000 This Isabelle/HOL formalization is the companion of the technical report “A comprehensive framework for saturation theorem proving”, itself companion of the eponym IJCAR 2020 paper, written by Uwe Waldmann, Sophie Tourret, Simon Robillard and Jasmin Blanchette. It verifies a framework for formal refutational completeness proofs of abstract provers that implement saturation calculi, such as ordered resolution or superposition, and allows to model entire prover architectures in such a way that the static refutational completeness of a calculus immediately implies the dynamic refutational completeness of a prover implementing the calculus using a variant of the given clause loop. The technical report “A comprehensive framework for saturation theorem proving” is available <a href="http://matryoshka.gforge.inria.fr/pubs/satur_report.pdf">on the Matryoshka website</a>. The names of the Isabelle lemmas and theorems corresponding to the results in the report are indicated in the margin of the report. Formalization of an Optimized Monitoring Algorithm for Metric First-Order Dynamic Logic with Aggregations https://www.isa-afp.org/entries/MFODL_Monitor_Optimized.html https://www.isa-afp.org/entries/MFODL_Monitor_Optimized.html Thibault Dardinier, Lukas Heimes, Martin Raszyk, Joshua Schneider, Dmitriy Traytel 09 Apr 2020 00:00:00 +0000 A monitor is a runtime verification tool that solves the following problem: Given a stream of time-stamped events and a policy formulated in a specification language, decide whether the policy is satisfied at every point in the stream. 
We verify the correctness of an executable monitor for specifications given as formulas in metric first-order dynamic logic (MFODL), which combines the features of metric first-order temporal logic (MFOTL) and metric dynamic logic. Thus, MFODL supports real-time constraints, first-order parameters, and regular expressions. Additionally, the monitor supports aggregation operations such as count and sum. This formalization, which is described in a <a href="http://people.inf.ethz.ch/trayteld/papers/ijcar20-verimonplus/verimonplus.pdf"> forthcoming paper at IJCAR 2020</a>, significantly extends <a href="https://www.isa-afp.org/entries/MFOTL_Monitor.html">previous work on a verified monitor</a> for MFOTL. Apart from the addition of regular expressions and aggregations, we implemented <a href="https://www.isa-afp.org/entries/Generic_Join.html">multi-way joins</a> and a specialized sliding window algorithm to further optimize the monitor. Lucas's Theorem https://www.isa-afp.org/entries/Lucas_Theorem.html https://www.isa-afp.org/entries/Lucas_Theorem.html Chelsea Edmonds 07 Apr 2020 00:00:00 +0000 This work presents a formalisation of a generating function proof for Lucas's theorem. We first outline extensions to the existing Formal Power Series (FPS) library, including an equivalence relation for coefficients modulo <em>n</em>, an alternate binomial theorem statement, and a formalised proof of the Freshman's dream (mod <em>p</em>) lemma. The second part of the work presents the formal proof of Lucas's Theorem. Working backwards, the formalisation first proves a well known corollary of the theorem which is easier to formalise, and then applies induction to prove the original theorem statement. The proof of the corollary aims to provide a good example of a formalised generating function equivalence proof using the FPS library. The final theorem statement is intended to be integrated into the formalised proof of Hilbert's 10th Problem. Strong Eventual Consistency of the Collaborative Editing Framework WOOT https://www.isa-afp.org/entries/WOOT_Strong_Eventual_Consistency.html https://www.isa-afp.org/entries/WOOT_Strong_Eventual_Consistency.html Emin Karayel, Edgar Gonzàlez 25 Mar 2020 00:00:00 +0000 Commutative Replicated Data Types (CRDTs) are a promising new class of data structures for large-scale shared mutable content in applications that only require eventual consistency. The WithOut Operational Transforms (WOOT) framework is a CRDT for collaborative text editing introduced by Oster et al. (CSCW 2006) for which the eventual consistency property was verified only for a bounded model to date. We contribute a formal proof for WOOTs strong eventual consistency. Furstenberg's topology and his proof of the infinitude of primes https://www.isa-afp.org/entries/Furstenberg_Topology.html https://www.isa-afp.org/entries/Furstenberg_Topology.html Manuel Eberl 22 Mar 2020 00:00:00 +0000 <p>This article gives a formal version of Furstenberg's topological proof of the infinitude of primes. He defines a topology on the integers based on arithmetic progressions (or, equivalently, residue classes). Using some fairly obvious properties of this topology, the infinitude of primes is then easily obtained.</p> <p>Apart from this, this topology is also fairly ‘nice’ in general: it is second countable, metrizable, and perfect. 
All of these (well-known) facts are formally proven, including an explicit metric for the topology given by Zulfeqarr.</p> An Under-Approximate Relational Logic https://www.isa-afp.org/entries/Relational-Incorrectness-Logic.html https://www.isa-afp.org/entries/Relational-Incorrectness-Logic.html Toby Murray 12 Mar 2020 00:00:00 +0000 Recently, authors have proposed under-approximate logics for reasoning about programs. So far, all such logics have been confined to reasoning about individual program behaviours. Yet there exist many over-approximate relational logics for reasoning about pairs of programs and relating their behaviours. We present the first under-approximate relational logic, for the simple imperative language IMP. We prove our logic is both sound and complete. Additionally, we show how reasoning in this logic can be decomposed into non-relational reasoning in an under-approximate Hoare logic, mirroring Beringer’s result for over-approximate relational logics. We illustrate the application of our logic on some small examples in which we provably demonstrate the presence of insecurity. Hello World https://www.isa-afp.org/entries/Hello_World.html https://www.isa-afp.org/entries/Hello_World.html Cornelius Diekmann, Lars Hupel 07 Mar 2020 00:00:00 +0000 In this article, we present a formalization of the well-known "Hello, World!" code, including a formal framework for reasoning about IO. Our model is inspired by the handling of IO in Haskell. We start by formalizing the 🌍 and embrace the IO monad afterwards. Then we present a sample main :: IO (), followed by its proof of correctness. Implementing the Goodstein Function in λ-Calculus https://www.isa-afp.org/entries/Goodstein_Lambda.html https://www.isa-afp.org/entries/Goodstein_Lambda.html Bertram Felgenhauer 21 Feb 2020 00:00:00 +0000 In this formalization, we develop an implementation of the Goodstein function G in plain &lambda;-calculus, linked to a concise, self-contained specification. The implementation works on a Church-encoded representation of countable ordinals. The initial conversion to hereditary base 2 is not covered, but the material is sufficient to compute the particular value G(16), and easily extends to other fixed arguments. A Generic Framework for Verified Compilers https://www.isa-afp.org/entries/VeriComp.html https://www.isa-afp.org/entries/VeriComp.html Martin Desharnais 10 Feb 2020 00:00:00 +0000 This is a generic framework for formalizing compiler transformations. It leverages Isabelle/HOL’s locales to abstract over concrete languages and transformations. It states common definitions for language semantics, program behaviours, forward and backward simulations, and compilers. We provide generic operations, such as simulation and compiler composition, and prove general (partial) correctness theorems, resulting in reusable proof components. Arithmetic progressions and relative primes https://www.isa-afp.org/entries/Arith_Prog_Rel_Primes.html https://www.isa-afp.org/entries/Arith_Prog_Rel_Primes.html José Manuel Rodríguez Caballero 01 Feb 2020 00:00:00 +0000 This article provides a formalization of the solution obtained by the author of the Problem “ARITHMETIC PROGRESSIONS” from the <a href="https://www.ocf.berkeley.edu/~wwu/riddles/putnam.shtml"> Putnam exam problems of 2002</a>. The statement of the problem is as follows: For which integers <em>n</em> > 1 does the set of positive integers less than and relatively prime to <em>n</em> constitute an arithmetic progression? 
A Hierarchy of Algebras for Boolean Subsets https://www.isa-afp.org/entries/Subset_Boolean_Algebras.html https://www.isa-afp.org/entries/Subset_Boolean_Algebras.html Walter Guttmann, Bernhard Möller 31 Jan 2020 00:00:00 +0000 We present a collection of axiom systems for the construction of Boolean subalgebras of larger overall algebras. The subalgebras are defined as the range of a complement-like operation on a semilattice. This technique has been used, for example, with the antidomain operation, dynamic negation and Stone algebras. We present a common ground for these constructions based on a new equational axiomatisation of Boolean algebras. Mersenne primes and the Lucas–Lehmer test https://www.isa-afp.org/entries/Mersenne_Primes.html https://www.isa-afp.org/entries/Mersenne_Primes.html Manuel Eberl 17 Jan 2020 00:00:00 +0000 <p>This article provides formal proofs of basic properties of Mersenne numbers, i. e. numbers of the form 2<sup><em>n</em></sup> - 1, and especially of Mersenne primes.</p> <p>In particular, an efficient, verified, and executable version of the Lucas&ndash;Lehmer test is developed. This test decides primality for Mersenne numbers in time polynomial in <em>n</em>.</p> Verified Approximation Algorithms https://www.isa-afp.org/entries/Approximation_Algorithms.html https://www.isa-afp.org/entries/Approximation_Algorithms.html Robin Eßmann, Tobias Nipkow, Simon Robillard 16 Jan 2020 00:00:00 +0000 We present the first formal verification of approximation algorithms for NP-complete optimization problems: vertex cover, independent set, load balancing, and bin packing. The proofs correct incompletenesses in existing proofs and improve the approximation ratio in one case. Closest Pair of Points Algorithms https://www.isa-afp.org/entries/Closest_Pair_Points.html https://www.isa-afp.org/entries/Closest_Pair_Points.html Martin Rau, Tobias Nipkow 13 Jan 2020 00:00:00 +0000 This entry provides two related verified divide-and-conquer algorithms solving the fundamental <em>Closest Pair of Points</em> problem in Computational Geometry. Functional correctness and the optimal running time of <em>O</em>(<em>n</em> log <em>n</em>) are proved. Executable code is generated which is empirically competitive with handwritten reference implementations. Skip Lists https://www.isa-afp.org/entries/Skip_Lists.html https://www.isa-afp.org/entries/Skip_Lists.html Max W. Haslbeck, Manuel Eberl 09 Jan 2020 00:00:00 +0000 <p> Skip lists are sorted linked lists enhanced with shortcuts and are an alternative to binary search trees. A skip lists consists of multiple levels of sorted linked lists where a list on level n is a subsequence of the list on level n − 1. In the ideal case, elements are skipped in such a way that a lookup in a skip lists takes O(log n) time. In a randomised skip list the skipped elements are choosen randomly. </p> <p> This entry contains formalized proofs of the textbook results about the expected height and the expected length of a search path in a randomised skip list. </p> Bicategories https://www.isa-afp.org/entries/Bicategory.html https://www.isa-afp.org/entries/Bicategory.html Eugene W. Stark 06 Jan 2020 00:00:00 +0000 Taking as a starting point the author's previous work on developing aspects of category theory in Isabelle/HOL, this article gives a compatible formalization of the notion of "bicategory" and develops a framework within which formal proofs of facts about bicategories can be given. 
The framework includes a number of basic results, including the Coherence Theorem, the Strictness Theorem, pseudofunctors and biequivalence, and facts about internal equivalences and adjunctions in a bicategory. As a driving application and demonstration of the utility of the framework, it is used to give a formal proof of a theorem, due to Carboni, Kasangian, and Street, that characterizes up to biequivalence the bicategories of spans in a category with pullbacks. The formalization effort necessitated the filling-in of many details that were not evident from the brief presentation in the original paper, as well as identifying a few minor corrections along the way. The Irrationality of ζ(3) https://www.isa-afp.org/entries/Zeta_3_Irrational.html https://www.isa-afp.org/entries/Zeta_3_Irrational.html Manuel Eberl 27 Dec 2019 00:00:00 +0000 <p>This article provides a formalisation of Beukers's straightforward analytic proof that ζ(3) is irrational. This was first proven by Apéry (which is why this result is also often called ‘Apéry's Theorem’) using a more algebraic approach. This formalisation follows <a href="http://people.math.sc.edu/filaseta/gradcourses/Math785/Math785Notes4.pdf">Filaseta's presentation</a> of Beukers's proof.</p> - - Formalizing a Seligman-Style Tableau System for Hybrid Logic - https://www.isa-afp.org/entries/Hybrid_Logic.html - https://www.isa-afp.org/entries/Hybrid_Logic.html - Asta Halkjær From - 20 Dec 2019 00:00:00 +0000 - -This work is a formalization of soundness and completeness proofs -for a Seligman-style tableau system for hybrid logic. The completeness -result is obtained via a synthetic approach using maximally -consistent sets of tableau blocks. The formalization differs from -the cited work in a few ways. First, to avoid the need to backtrack in -the construction of a tableau, the formalized system has no unnamed -initial segment, and therefore no Name rule. Second, I show that the -full Bridge rule is admissible in the system. Third, I start from rules -restricted to only extend the branch with new formulas, including only -witnessing diamonds that are not already witnessed, and show that -the unrestricted rules are admissible. Similarly, I start from simpler -versions of the @-rules and show the general ones admissible. Finally, -the GoTo rule is restricted using a notion of coins such that each -application consumes a coin and coins are earned through applications of -the remaining rules. I show that if a branch can be closed then it can -be closed starting from a single coin. These restrictions are imposed -to rule out some means of nontermination. - - - The Poincaré-Bendixson Theorem - https://www.isa-afp.org/entries/Poincare_Bendixson.html - https://www.isa-afp.org/entries/Poincare_Bendixson.html - Fabian Immler, Yong Kiam Tan - 18 Dec 2019 00:00:00 +0000 - -The Poincaré-Bendixson theorem is a classical result in the study of -(continuous) dynamical systems. Colloquially, it restricts the -possible behaviors of planar dynamical systems: such systems cannot be -chaotic. In practice, it is a useful tool for proving the existence of -(limiting) periodic behavior in planar systems. The theorem is an -interesting and challenging benchmark for formalized mathematics -because proofs in the literature rely on geometric sketches and only -hint at symmetric cases. 
It also requires a substantial background of -mathematical theories, e.g., the Jordan curve theorem, real analysis, -ordinary differential equations, and limiting (long-term) behavior of -dynamical systems. - - - Poincaré Disc Model - https://www.isa-afp.org/entries/Poincare_Disc.html - https://www.isa-afp.org/entries/Poincare_Disc.html - Danijela Simić, Filip Marić, Pierre Boutry - 16 Dec 2019 00:00:00 +0000 - -We describe formalization of the Poincaré disc model of hyperbolic -geometry within the Isabelle/HOL proof assistant. The model is defined -within the extended complex plane (one dimensional complex projectives -space &#8450;P1), formalized in the AFP entry “Complex Geometry”. -Points, lines, congruence of pairs of points, betweenness of triples -of points, circles, and isometries are defined within the model. It is -shown that the model satisfies all Tarski's axioms except the -Euclid's axiom. It is shown that it satisfies its negation and -the limiting parallels axiom (which proves it to be a model of -hyperbolic geometry). - - - Complex Geometry - https://www.isa-afp.org/entries/Complex_Geometry.html - https://www.isa-afp.org/entries/Complex_Geometry.html - Filip Marić, Danijela Simić - 16 Dec 2019 00:00:00 +0000 - -A formalization of geometry of complex numbers is presented. -Fundamental objects that are investigated are the complex plane -extended by a single infinite point, its objects (points, lines and -circles), and groups of transformations that act on them (e.g., -inversions and Möbius transformations). Most objects are defined -algebraically, but correspondence with classical geometric definitions -is shown. - - - Gauss Sums and the Pólya–Vinogradov Inequality - https://www.isa-afp.org/entries/Gauss_Sums.html - https://www.isa-afp.org/entries/Gauss_Sums.html - Rodrigo Raya, Manuel Eberl - 10 Dec 2019 00:00:00 +0000 - -<p>This article provides a full formalisation of Chapter 8 of -Apostol's <em><a -href="https://www.springer.com/de/book/9780387901633">Introduction -to Analytic Number Theory</a></em>. Subjects that are -covered are:</p> <ul> <li>periodic arithmetic -functions and their finite Fourier series</li> -<li>(generalised) Ramanujan sums</li> <li>Gauss sums -and separable characters</li> <li>induced moduli and -primitive characters</li> <li>the -Pólya&mdash;Vinogradov inequality</li> </ul> - - - An Efficient Generalization of Counting Sort for Large, possibly Infinite Key Ranges - https://www.isa-afp.org/entries/Generalized_Counting_Sort.html - https://www.isa-afp.org/entries/Generalized_Counting_Sort.html - Pasquale Noce - 04 Dec 2019 00:00:00 +0000 - -Counting sort is a well-known algorithm that sorts objects of any kind -mapped to integer keys, or else to keys in one-to-one correspondence -with some subset of the integers (e.g. alphabet letters). However, it -is suitable for direct use, viz. not just as a subroutine of another -sorting algorithm (e.g. radix sort), only if the key range is not -significantly larger than the number of the objects to be sorted. -This paper describes a tail-recursive generalization of counting sort -making use of a bounded number of counters, suitable for direct use in -case of a large, or even infinite key range of any kind, subject to -the only constraint of being a subset of an arbitrary linear order. 
-After performing a pen-and-paper analysis of how such algorithm has to -be designed to maximize its efficiency, this paper formalizes the -resulting generalized counting sort (GCsort) algorithm and then -formally proves its correctness properties, namely that (a) the -counters' number is maximized never exceeding the fixed upper -bound, (b) objects are conserved, (c) objects get sorted, and (d) the -algorithm is stable. - - - Interval Arithmetic on 32-bit Words - https://www.isa-afp.org/entries/Interval_Arithmetic_Word32.html - https://www.isa-afp.org/entries/Interval_Arithmetic_Word32.html - Brandon Bohrer - 27 Nov 2019 00:00:00 +0000 - -Interval_Arithmetic implements conservative interval arithmetic -computations, then uses this interval arithmetic to implement a simple -programming language where all terms have 32-bit signed word values, -with explicit infinities for terms outside the representable bounds. -Our target use case is interpreters for languages that must have a -well-understood low-level behavior. We include a formalization of -bounded-length strings which are used for the identifiers of our -language. Bounded-length identifiers are useful in some applications, -for example the <a href="https://www.isa-afp.org/entries/Differential_Dynamic_Logic.html">Differential_Dynamic_Logic</a> article, -where a Euclidean space indexed by identifiers demands that identifiers -are finitely many. - - - Zermelo Fraenkel Set Theory in Higher-Order Logic - https://www.isa-afp.org/entries/ZFC_in_HOL.html - https://www.isa-afp.org/entries/ZFC_in_HOL.html - Lawrence C. Paulson - 24 Oct 2019 00:00:00 +0000 - -<p>This entry is a new formalisation of ZFC set theory in Isabelle/HOL. It is -logically equivalent to Obua's HOLZF; the point is to have the closest -possible integration with the rest of Isabelle/HOL, minimising the amount of -new notations and exploiting type classes.</p> -<p>There is a type <em>V</em> of sets and a function <em>elts :: V =&gt; V -set</em> mapping a set to its elements. Classes simply have type <em>V -set</em>, and a predicate identifies the small classes: those that correspond -to actual sets. Type classes connected with orders and lattices are used to -minimise the amount of new notation for concepts such as the subset relation, -union and intersection. Basic concepts — Cartesian products, disjoint sums, -natural numbers, functions, etc. — are formalised.</p> -<p>More advanced set-theoretic concepts, such as transfinite induction, -ordinals, cardinals and the transitive closure of a set, are also provided. -The definition of addition and multiplication for general sets (not just -ordinals) follows Kirby.</p> -<p>The theory provides two type classes with the aim of facilitating -developments that combine <em>V</em> with other Isabelle/HOL types: -<em>embeddable</em>, the class of types that can be injected into <em>V</em> -(including <em>V</em> itself as well as <em>V*V</em>, etc.), and -<em>small</em>, the class of types that correspond to some ZF set.</p> -extra-history = -Change history: -[2020-01-28]: Generalisation of the "small" predicate and order types to arbitrary sets; -ordinal exponentiation; -introduction of the coercion ord_of_nat :: "nat => V"; -numerous new lemmas. 
(revision 6081d5be8d08) - - - Isabelle/C - https://www.isa-afp.org/entries/Isabelle_C.html - https://www.isa-afp.org/entries/Isabelle_C.html - Frédéric Tuong, Burkhart Wolff - 22 Oct 2019 00:00:00 +0000 - -We present a framework for C code in C11 syntax deeply integrated into -the Isabelle/PIDE development environment. Our framework provides an -abstract interface for verification back-ends to be plugged-in -independently. Thus, various techniques such as deductive program -verification or white-box testing can be applied to the same source, -which is part of an integrated PIDE document model. Semantic back-ends -are free to choose the supported C fragment and its semantics. In -particular, they can differ on the chosen memory model or the -specification mechanism for framing conditions. Our framework supports -semantic annotations of C sources in the form of comments. Annotations -serve to locally control back-end settings, and can express the term -focus to which an annotation refers. Both the logical and the -syntactic context are available when semantic annotations are -evaluated. As a consequence, a formula in an annotation can refer both -to HOL or C variables. Our approach demonstrates the degree of -maturity and expressive power the Isabelle/PIDE sub-system has -achieved in recent years. Our integration technique employs Lex and -Yacc style grammars to ensure efficient deterministic parsing. This -is the core-module of Isabelle/C; the AFP package for Clean and -Clean_wrapper as well as AutoCorres and AutoCorres_wrapper (available -via git) are applications of this front-end. - diff --git a/web/statistics.html b/web/statistics.html --- a/web/statistics.html +++ b/web/statistics.html @@ -1,303 +1,307 @@ Archive of Formal Proofs

 

 

 

 

 

 

Statistics

-Number of Articles: 531
-Number of Authors: 350
-Number of lemmas: ~143,700
-Lines of Code: ~2,493,800
+Number of Articles: 540
+Number of Authors: 356
+Number of lemmas: ~145,300
+Lines of Code: ~2,518,000

Most used AFP articles:

+ + + +
Name                      Used by ? articles
1. List-Index 14
2. Coinductive 12
Collections 12
Regular-Sets 12
3. Landau_Symbols 11
4. Show 10
5. Abstract-Rewriting 9
Automatic_Refinement 9
Deriving 9
Polynomial_Factorization 9
6. Jordan_Normal_Form 8
Native_Word 8

Growth in number of articles:

Growth in lines of code:

Growth in number of authors:

Size of articles:

\ No newline at end of file diff --git a/web/topics.html b/web/topics.html --- a/web/topics.html +++ b/web/topics.html @@ -1,872 +1,884 @@ Archive of Formal Proofs

 

 

 

 

 

 

Index by Topic

 

Computer science

Automata and formal languages

Algorithms

Knuth_Morris_Pratt   Probabilistic_While   Comparison_Sort_Lower_Bound   Quick_Sort_Cost   TortoiseHare   Selection_Heap_Sort   VerifyThis2018   CYK   Boolean_Expression_Checkers   Efficient-Mergesort   SATSolverVerification   MuchAdoAboutTwo   First_Order_Terms   Monad_Memo_DP   Hidden_Markov_Models   Imperative_Insertion_Sort   Formal_SSA   ROBDD   Median_Of_Medians_Selection   Fisher_Yates   Optimal_BST   IMP2   Auto2_Imperative_HOL   List_Inversions   IMP2_Binary_Heap   MFOTL_Monitor   Adaptive_State_Counting   Generic_Join   VerifyThis2019   Generalized_Counting_Sort   MFODL_Monitor_Optimized   Sliding_Window_Algorithm   Graph: DFS_Framework   Prpu_Maxflow   Floyd_Warshall   Roy_Floyd_Warshall   Dijkstra_Shortest_Path   EdmondsKarp_Maxflow   Depth-First-Search   GraphMarkingIBP   Transitive-Closure   Transitive-Closure-II   Gabow_SCC   Kruskal   Prim_Dijkstra_Simple   Distributed: DiskPaxos   GenClock   ClockSynchInst   Heard_Of   Consensus_Refined   Abortable_Linearizable_Modules   IMAP-CRDT   CRDT   OpSets   Stellar_Quorums   WOOT_Strong_Eventual_Consistency   Concurrent: ConcurrentGC   Online: List_Update   Geometry: Closest_Pair_Points   Approximation: Approximation_Algorithms   Mathematical: FFT   Gauss-Jordan-Elim-Fun   UpDown_Scheme   Polynomials   Gauss_Jordan   Echelon_Form   QR_Decomposition   Hermite   Groebner_Bases   Diophantine_Eqns_Lin_Hom   Taylor_Models   LLL_Basis_Reduction   Signature_Groebner   Optimization: Simplex  

Concurrency

Data structures

Functional programming

Hardware

SPARCv8  

Machine learning

Networks

Programming languages

Clean   Decl_Sem_Fun_PL   Language definitions: CakeML   WebAssembly   pGCL   GPU_Kernel_PL   LightweightJava   CoreC++   FeatherweightJava   Jinja   JinjaThreads   Locally-Nameless-Sigma   AutoFocus-Stream   FocusStreamsCaseStudies   Isabelle_Meta_Model   Simpl   Complx   Safe_OCL   Isabelle_C   Lambda calculi: Higher_Order_Terms   Launchbury   PCF   POPLmark-deBruijn   Lam-ml-Normalization   LambdaMu   Binding_Syntax_Theory   LambdaAuth   Type systems: Name_Carrying_Type_Inference   MiniML   Possibilistic_Noninterference   SIFUM_Type_Systems   Dependent_SIFUM_Type_Systems   Strong_Security   WHATandWHERE_Security   VolpanoSmith   Logics: ConcurrentIMP   Refine_Monadic   Automatic_Refinement   MonoBoolTranAlgebra   Simpl   Separation_Algebra   Separation_Logic_Imperative_HOL   Relational-Incorrectness-Logic   Abstract-Hoare-Logics   Kleene_Algebra   KAT_and_DRA   KAD   BytecodeLogicJmlTypes   DataRefinementIBP   RefinementReactive   SIFPL   TLA   Ribbon_Proofs   Separata   Complx   Differential_Dynamic_Logic   Hoare_Time   IMP2   UTP   QHLProver   Differential_Game_Logic   Compiling: CakeML_Codegen   Compiling-Exceptions-Correctly   NormByEval   Density_Compiler   VeriComp   Static analysis: RIPEMD-160-SPARK   Program-Conflict-Analysis   Shivers-CFA   Slicing   HRB-Slicing   InfPathElimination   Abs_Int_ITP2012   Transformations: Call_Arity   Refine_Imperative_HOL   WorkerWrapper   Monad_Memo_DP   Formal_SSA   Minimal_SSA   Misc: JiveDataStoreModel   Pop_Refinement   Case_Labeling  

Security

Semantics

System description languages

Logic

Philosophical aspects

General logic

Computability

Set theory

Proof theory

Rewriting

Mathematics

Order

Algebra

Analysis

Probability theory

Number theory

Games and economics

Geometry

Topology

Graph theory

Combinatorics

Category theory

Physics

Misc

Tools

\ No newline at end of file